WorldWideScience

Sample records for variable length coding

  1. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant...
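
    The records in this section all build on prefix-free variable-length codes. As background, a minimal Huffman construction in Python (this is generic illustration, not the scheme of the paper above; symbols and frequencies are invented):

      import heapq

      def huffman_code(freqs):
          """Build a prefix-free variable-length code from symbol frequencies."""
          # Heap entries: (weight, tiebreak, tree); a tree is a symbol or (left, right).
          heap = [(w, i, s) for i, (s, w) in enumerate(sorted(freqs.items()))]
          heapq.heapify(heap)
          count = len(heap)
          while len(heap) > 1:
              w1, _, t1 = heapq.heappop(heap)
              w2, _, t2 = heapq.heappop(heap)
              heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
              count += 1
          code = {}
          def walk(tree, prefix):
              if isinstance(tree, tuple):
                  walk(tree[0], prefix + "0")
                  walk(tree[1], prefix + "1")
              else:
                  code[tree] = prefix or "0"
          walk(heap[0][2], "")
          return code

      code = huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
      print(code, "".join(code[s] for s in "abacad"))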

  2. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)
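
    The connection the authors exploit, between trajectories of a piecewise linear map and entropy coding, is essentially that of arithmetic coding. A minimal sketch of that underlying idea (illustrative only; probabilities and message are invented, and the paper's register-based finite-precision machinery is omitted):

      probs = {"a": 0.5, "b": 0.25, "c": 0.25}
      symbols = list(probs)

      def encode(msg):
          # Shrink [0, 1) by each symbol's sub-interval: arithmetic coding,
          # i.e. the inverse iteration of a piecewise linear map.
          lo, hi = 0.0, 1.0
          for s in msg:
              width = hi - lo
              cum = 0.0
              for t in symbols:
                  if t == s:
                      lo, hi = lo + cum * width, lo + (cum + probs[t]) * width
                      break
                  cum += probs[t]
          return (lo + hi) / 2        # any point of the final interval works

      def decode(x, n):
          # Iterate the expanding piecewise linear map and record visited branches.
          out = []
          for _ in range(n):
              cum = 0.0
              for t in symbols:
                  if x < cum + probs[t]:
                      out.append(t)
                      x = (x - cum) / probs[t]
                      break
                  cum += probs[t]
          return "".join(out)

      print(decode(encode("abcab"), 5))  # abcab (exact only up to float precision)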

  3. Broadcasting a Common Message with Variable-Length Stop-Feedback codes

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe

    2015-01-01

    We investigate the maximum coding rate achievable over a two-user broadcast channel for the scenario where a common message is transmitted using variable-length stop-feedback codes. Specifically, upon decoding the common message, each decoder sends a stop signal to the encoder, which transmits continuously until it receives both stop signals. For the point-to-point case, Polyanskiy, Poor, and Verdú (2011) recently demonstrated that variable-length coding combined with stop feedback significantly increases the speed at which the maximum coding rate converges to capacity. This speed-up manifests itself in the absence of a square-root penalty in the asymptotic expansion of the maximum coding rate for large blocklengths, a result also known as zero dispersion. In this paper, we show that this speed-up does not necessarily occur for the broadcast channel with common message. Specifically...

  4. Construction and performance research on variable-length codes for multirate OCDMA multimedia networks

    Science.gov (United States)

    Li, Chuan-qi; Yang, Meng-jie; Luo, De-jun; Lu, Ye; Kong, Yi-pu; Zhang, Dong-chuang

    2014-09-01

    A new kind of variable-length code with good correlation properties for multirate asynchronous optical code division multiple access (OCDMA) multimedia networks is proposed, called non-repetition interval (NRI) codes. The NRI codes can be constructed by structuring interval-sets with no repetition, and the code length depends on the number of users and the code weight. According to the structural characteristics of NRI codes, the formula for the bit error rate (BER) is derived. Compared with other variable-length codes, the NRI codes have lower BER. A multirate OCDMA multimedia simulation system is designed and built, in which longer codes are assigned to users who need low rates, while shorter codes are assigned to users who need high rates. Analysis of the eye diagrams shows that lower-rate users achieve lower BER, which matches the actual demands of multimedia data transport.

  5. Variable-length code construction for incoherent optical CDMA systems

    Science.gov (United States)

    Lin, Jen-Yung; Jhou, Jhih-Syue; Wen, Jyh-Horng

    2007-04-01

    The purpose of this study is to investigate multirate transmission in fiber-optic code-division multiple-access (CDMA) networks. In this article, we propose a variable-length code construction for any existing optical orthogonal code to implement a multirate optical CDMA system (called the multirate code system). For comparison, a multirate system where the lower-rate user sends each symbol twice is implemented and is called the repeat code system. The use of the repetition as an error-detection code in an ARQ scheme in the repeat code system is also investigated. Moreover, a parallel approach for optical CDMA systems, proposed by Marić et al., is compared with the systems proposed in this study. Theoretical analysis shows that the bit error probability of the proposed multirate code system is smaller than that of the other systems, especially when the number of lower-rate users is large. Moreover, if there is at least one lower-rate user in the system, the multirate code system accommodates more users than the other systems when the error probability of the system is set below 10^-9.

  6. Adaptive variable-length coding for efficient compression of spacecraft television data.

    Science.gov (United States)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable-length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample-to-sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous-line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
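
    The block-adaptive code selection described here survives today as Golomb-Rice coding. A toy sketch of the adaptation step, assuming prediction residuals have already been mapped to non-negative integers (the candidate parameters and the block below are invented):

      def rice_encode(value, k):
          # Golomb-Rice codeword: unary quotient, '0' terminator, k-bit remainder.
          q, r = value >> k, value & ((1 << k) - 1)
          return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

      def encode_block(samples, ks=(0, 1, 2, 3)):
          # Per-block adaptation: pick the parameter giving the shortest output,
          # echoing the Basic Compressor's choice among a few codes per block.
          best = min(ks, key=lambda k: sum(len(rice_encode(s, k)) for s in samples))
          return best, "".join(rice_encode(s, best) for s in samples)

      block = [0, 1, 3, 2, 0, 1, 5, 2, 1, 0, 2, 1, 3, 0, 1, 2, 4, 1, 0, 2, 1]
      k, bits = encode_block(block)   # 21 residuals, as in the paper's blocks
      print(k, len(bits))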

  7. Variable-Length Coding with Stop-Feedback for the Common-Message Broadcast Channel

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe

    2016-01-01

    This paper investigates the maximum coding rate over a K-user discrete memoryless broadcast channel for the scenario where a common message is transmitted using variable-length stop-feedback codes. Specifically, upon decoding the common message, each decoder sends a stop signal to the encoder, which transmits continuously until it receives all K stop signals. We present nonasymptotic achievability and converse bounds for the maximum coding rate, which strengthen and generalize the bounds previously reported in Trillingsgaard et al. (2015) for the two-user case. An asymptotic analysis of these bounds reveals that, contrary to the point-to-point case, the second-order term in the asymptotic expansion of the maximum coding rate decays inversely proportional to the square root of the average blocklength. This holds for certain nontrivial common-message broadcast channels, such as the binary...

  8. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Directory of Open Access Journals (Sweden)

    Pierre Siohan

    2005-05-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. The survey concludes with performance illustrations using real image and video decoding systems.

  9. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Science.gov (United States)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. The survey concludes with performance illustrations using real image and video decoding systems.

  10. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    Science.gov (United States)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming significant memory bandwidth. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program instead of all the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance than conventional CAVLC decoding methods such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
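
    As a flavor of table-free VLC decoding, the Exp-Golomb codes that H.264/AVC uses alongside CAVLC can be decoded purely arithmetically, from the leading-zero count alone. This sketch is illustrative and is not the authors' CAVLC algorithm:

      def decode_exp_golomb(bits, pos=0):
          """Decode one unsigned Exp-Golomb codeword starting at bits[pos].
          No table: the leading-zero count alone fixes the codeword layout."""
          zeros = 0
          while bits[pos + zeros] == "0":
              zeros += 1
          info = bits[pos + zeros : pos + 2 * zeros + 1]  # '1' plus `zeros` info bits
          return int(info, 2) - 1, pos + 2 * zeros + 1

      bits = "00111010001"   # codewords: 00111 (=6), 010 (=1), then a truncated tail
      v1, p = decode_exp_golomb(bits)
      v2, p = decode_exp_golomb(bits, p)
      print(v1, v2)          # 6 1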

  11. Lossless quantum data compression and variable-length coding

    International Nuclear Information System (INIS)

    Bostroem, Kim; Felbinger, Timo

    2002-01-01

    In order to compress quantum messages without loss of information it is necessary to allow the length of the encoded messages to vary. We develop a general framework for variable-length quantum messages in close analogy to the classical case and show that lossless compression is only possible if the message to be compressed is known to the sender. The lossless compression of an ensemble of messages is bounded from below by its von Neumann entropy. We show that it is possible to reduce the number of qubits passing through a quantum channel even below the von Neumann entropy by adding a classical side channel. We give an explicit communication protocol that realizes lossless and instantaneous quantum data compression and apply it to a simple example. This protocol can be used for both online quantum communication and storage of quantum data.
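
    The compression bound referenced here is the von Neumann entropy S(ρ) = -Tr(ρ log₂ ρ). A small numpy check for a two-state qubit ensemble (the ensemble is an invented example):

      import numpy as np

      def von_neumann_entropy(rho):
          """S(rho) = -Tr(rho log2 rho), the lossless quantum compression bound."""
          eigvals = np.linalg.eigvalsh(rho)
          eigvals = eigvals[eigvals > 1e-12]     # 0 log 0 = 0
          return float(-np.sum(eigvals * np.log2(eigvals)))

      # Ensemble: |0> with probability 1/2, |+> with probability 1/2.
      ket0 = np.array([1.0, 0.0])
      ketp = np.array([1.0, 1.0]) / np.sqrt(2)
      rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)
      print(von_neumann_entropy(rho))            # ~0.60, i.e. below 1 qubit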

  12. Convolutional Encoder and Viterbi Decoder Using SOPC For Variable Constraint Length

    DEFF Research Database (Denmark)

    Kulkarni, Anuradha; Dnyaneshwar, Mantri; Prasad, Neeli R.

    2013-01-01

    Convolutional encoders and Viterbi decoders are basic and important blocks in any Code Division Multiple Access (CDMA) system. They are widely used in communication systems due to their error-correcting capability, but the performance degrades with variable constraint length. In this context, to provide a detailed analysis, this paper deals with the implementation of a convolutional encoder and Viterbi decoder using a system on a programmable chip (SOPC). It uses variable constraint lengths of 7, 8 and 9 bits for 1/2 and 1/3 code rates. By analyzing the Viterbi algorithm it is seen that our algorithm has a better...
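
    For reference, a compact software model of the two blocks with a parameterizable constraint length K (the generator polynomials and message are example choices, not those of the paper):

      def conv_encode(bits, polys, K):
          """Rate-1/len(polys) convolutional encoder, constraint length K."""
          state, out = 0, []
          for b in bits + [0] * (K - 1):                  # flush with zero tail
              state = ((state << 1) | b) & ((1 << K) - 1)
              out += [bin(state & g).count("1") & 1 for g in polys]
          return out

      def viterbi_decode(received, polys, K):
          n_states, n = 1 << (K - 1), len(polys)
          INF = float("inf")
          metric = [0.0] + [INF] * (n_states - 1)
          paths = [[] for _ in range(n_states)]
          for i in range(0, len(received), n):
              r = received[i : i + n]
              new_metric = [INF] * n_states
              new_paths = [None] * n_states
              for s in range(n_states):
                  if metric[s] == INF:
                      continue
                  for b in (0, 1):
                      full = ((s << 1) | b) & ((1 << K) - 1)
                      ns = full & (n_states - 1)
                      expect = [bin(full & g).count("1") & 1 for g in polys]
                      m = metric[s] + sum(x != y for x, y in zip(expect, r))
                      if m < new_metric[ns]:
                          new_metric[ns] = m
                          new_paths[ns] = paths[s] + [b]
              metric, paths = new_metric, new_paths
          best = min(range(n_states), key=lambda s: metric[s])
          return paths[best][: -(K - 1)]                  # drop flush bits

      msg = [1, 0, 1, 1, 0, 0, 1, 0]
      coded = conv_encode(msg, (0o7, 0o5), K=3)           # classic (7,5) code
      coded[3] ^= 1                                       # inject one channel error
      print(viterbi_decode(coded, (0o7, 0o5), K=3) == msg)  # True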

  13. Critical lengths of error events in convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    1994-01-01

    If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes.

  14. Critical Lengths of Error Events in Convolutional Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Andersen, Jakob Dahl

    1998-01-01

    If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes.

  15. Variable weight spectral amplitude coding for multiservice OCDMA networks

    Science.gov (United States)

    Seyedzadeh, Saleh; Rahimian, Farzad Pour; Glesk, Ivan; Kakaee, Majid H.

    2017-09-01

    The emergence of heterogeneous data traffic such as voice over IP, video streaming and online gaming have demanded networks with capability of supporting quality of service (QoS) at the physical layer with traffic prioritisation. This paper proposes a new variable-weight code based on spectral amplitude coding for optical code-division multiple-access (OCDMA) networks to support QoS differentiation. The proposed variable-weight multi-service (VW-MS) code relies on basic matrix construction. A mathematical model is developed for performance evaluation of VW-MS OCDMA networks. It is shown that the proposed code provides an optimal code length with minimum cross-correlation value when compared to other codes. Numerical results for a VW-MS OCDMA network designed for triple-play services operating at 0.622 Gb/s, 1.25 Gb/s and 2.5 Gb/s are considered.

  16. Context quantization by minimum adaptive code length

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, Xiaolin

    2007-01-01

    Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols...
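
    For a binary context, the adaptive code length criterion can be computed sequentially; with the Krichevsky-Trofimov estimator it looks like the sketch below. In context quantization this quantity would be accumulated per quantized context; the sequences here are invented:

      import math

      def adaptive_code_length(bits, delta=0.5):
          """Bits needed to sequentially code `bits` with the adaptive estimate
          p(1) = (n1 + delta) / (n + 2*delta); delta = 0.5 is the KT estimator."""
          n0 = n1 = 0
          length = 0.0
          for b in bits:
              p1 = (n1 + delta) / (n0 + n1 + 2 * delta)
              length += -math.log2(p1 if b else 1.0 - p1)
              n1 += b
              n0 += 1 - b
          return length

      # A skewed source costs fewer adaptive bits than a balanced one:
      print(adaptive_code_length([1] * 90 + [0] * 10))   # ~50 bits
      print(adaptive_code_length([0, 1] * 50))           # ~100 bits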

  17. New extremal binary self-dual codes of lengths 64 and 66 from bicubic planar graphs

    OpenAIRE

    Kaya, Abidin

    2016-01-01

    In this work, connected cubic planar bipartite graphs and related binary self-dual codes are studied. Binary self-dual codes of length 16 are obtained from face-vertex incidence matrices of these graphs. By considering their lifts to the ring R_2, new extremal binary self-dual codes of length 64 are constructed as Gray images. More precisely, we construct 15 new codes of length 64. Moreover, 10 new codes of length 66 were obtained by applying a building-up construction to the binary codes. Code...
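
    Self-duality itself is easy to verify computationally: a binary [n, n/2] code with generator matrix G is self-dual exactly when G·Gᵀ vanishes over GF(2). A quick numpy check with the classical extended [8,4,4] Hamming code (not one of the codes constructed in the paper):

      import numpy as np

      def is_self_orthogonal(G):
          """Self-orthogonal iff G @ G.T == 0 over GF(2); self-dual if k == n/2."""
          return not ((G @ G.T) % 2).any()

      # Generator of the extended [8,4,4] Hamming code, a classical self-dual code.
      G = np.array([[1,0,0,0,0,1,1,1],
                    [0,1,0,0,1,0,1,1],
                    [0,0,1,0,1,1,0,1],
                    [0,0,0,1,1,1,1,0]])
      print(is_self_orthogonal(G) and G.shape[0] * 2 == G.shape[1])  # True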

  18. Variable Frame Rate and Length Analysis for Data Compression in Distributed Speech Recognition

    DEFF Research Database (Denmark)

    Kraljevski, Ivan; Tan, Zheng-Hua

    2014-01-01

    This paper addresses the issue of data compression in distributed speech recognition on the basis of a variable frame rate and length analysis method. The method first conducts frame selection by using an a posteriori signal-to-noise ratio weighted energy distance to find the right time resolution, and increases the frame length for steady regions. The method is applied to scalable source coding in distributed speech recognition where the target bitrate is met by adjusting the frame rate. Speech recognition results show that the proposed approach outperforms other compression methods in terms of recognition accuracy for noisy speech while achieving higher compression rates.

  19. HLA-E regulatory and coding region variability and haplotypes in a Brazilian population sample.

    Science.gov (United States)

    Ramalho, Jaqueline; Veiga-Castelli, Luciana C; Donadi, Eduardo A; Mendes-Junior, Celso T; Castelli, Erick C

    2017-11-01

    The HLA-E gene is characterized by low but wide expression in different tissues. HLA-E is considered a conserved gene, being one of the least polymorphic class I HLA genes. The HLA-E molecule interacts with Natural Killer cell receptors and T lymphocyte receptors, and might activate or inhibit immune responses depending on the peptide associated with HLA-E and on which receptors HLA-E interacts with. Variable sites within the HLA-E regulatory and coding segments may influence the gene function by modifying its expression pattern or encoded molecule, thus influencing its interaction with receptors and the peptide. Here we propose an approach to evaluate the gene structure, haplotype pattern and the complete HLA-E variability, including regulatory (promoter and 3'UTR) and coding segments (with introns), by using massively parallel sequencing. We investigated the variability of 420 samples from a very admixed population, such as Brazilians, by using this approach. Considering a segment of about 7 kb, 63 variable sites were detected, arranged into 75 extended haplotypes. We detected 37 different promoter sequences (but few frequent ones), 27 different coding sequences (15 representing new HLA-E alleles) and 12 haplotypes at the 3'UTR segment, two of them presenting a summed frequency of 90%. Despite the number of coding alleles, they encode mainly two different full-length molecules, known as E*01:01 and E*01:03, which correspond to about 90% of all. In addition, differently from what has been previously observed for other non-classical HLA genes, the relationship among the HLA-E promoter, coding and 3'UTR haplotypes is not straightforward, because the same promoter and 3'UTR haplotypes were many times associated with different HLA-E coding haplotypes. These data reinforce the presence of only two main full-length HLA-E molecules encoded by the many HLA-E alleles detected in our population sample. In addition, these data do indicate that the distal HLA-E promoter is...

  20. Construction of Short-length High-rates Ldpc Codes Using Difference Families

    OpenAIRE

    Deny Hamdani; Ery Safrianti

    2007-01-01

    Low-density parity-check (LDPC) code is a linear-block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm, and in many cases is capable of outperforming turbo code. This paper presents a class of low-density parity-check (LDPC) codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code rate...

  1. Analysis of the Length of Braille Texts in English Braille American Edition, the Nemeth Code, and Computer Braille Code versus the Unified English Braille Code

    Science.gov (United States)

    Knowlton, Marie; Wetzel, Robin

    2006-01-01

    This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC), also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…

  2. Design LDPC Codes without Cycles of Length 4 and 6

    Directory of Open Access Journals (Sweden)

    Kiseon Kim

    2008-04-01

    We present an approach for constructing LDPC codes without cycles of length 4 and 6. First, we design three submatrices with different shifting functions given by the proposed schemes, then combine them into the matrix specified by the proposed approach, and, finally, expand the matrix into a desired parity-check matrix using identity matrices and cyclic shift matrices of the identity matrices. The simulation result in an AWGN channel verifies that the BER of the proposed code is close to those of MacKay's random codes and Tanner's QC codes, and the good BER performance of the proposed code remains at high code rates.
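
    The 4-cycle condition can be checked directly: a Tanner graph is free of length-4 cycles exactly when no two columns of the parity-check matrix share more than one row. A small sketch with an ad hoc quasi-cyclic matrix (the shift values are invented, not the paper's shifting functions):

      import numpy as np

      def has_length4_cycle(H):
          """4-cycle iff two columns of H share >= 2 rows, i.e. some
          off-diagonal entry of H^T H is >= 2."""
          overlap = H.T @ H
          np.fill_diagonal(overlap, 0)
          return bool((overlap >= 2).any())

      def circ(shift, p=5):
          # Identity matrix cyclically shifted by `shift` columns.
          return np.roll(np.eye(p, dtype=int), shift, axis=1)

      H = np.block([[circ(0), circ(1), circ(2)],
                    [circ(0), circ(2), circ(4)]])
      print(has_length4_cycle(H))  # False: shift differences are distinct mod 5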

  3. The Classification of Complementary Information Set Codes of Lengths 14 and 16

    OpenAIRE

    Freibert, Finley

    2012-01-01

    In the paper "A new class of codes for Boolean masking of cryptographic computations," Carlet, Gaborit, Kim, and Sol\\'{e} defined a new class of rate one-half binary codes called \\emph{complementary information set} (or CIS) codes. The authors then classified all CIS codes of length less than or equal to 12. CIS codes have relations to classical Coding Theory as they are a generalization of self-dual codes. As stated in the paper, CIS codes also have important practical applications as they m...

  4. Design of variable-weight quadratic congruence code for optical CDMA

    Science.gov (United States)

    Feng, Gang; Cheng, Wen-Qing; Chen, Fu-Jun

    2015-09-01

    A variable-weight code family referred to as variable-weight quadratic congruence code (VWQCC) is constructed by algebraic transformation for incoherent synchronous optical code division multiple access (OCDMA) systems. Compared with quadratic congruence code (QCC), VWQCC doubles the code cardinality and provides the multiple code-sets with variable code-weight. Moreover, the bit-error rate (BER) performance of VWQCC is superior to those of conventional variable-weight codes by removing or padding pulses under the same chip power assumption. The experiment results show that VWQCC can be well applied to the OCDMA with quality of service (QoS) requirements.

  5. Construction of Short-Length High-Rates LDPC Codes Using Difference Families

    Directory of Open Access Journals (Sweden)

    Deny Hamdani

    2010-10-01

    Low-density parity-check (LDPC) code is a linear-block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm, and in many cases is capable of outperforming turbo code. This paper presents a class of low-density parity-check (LDPC) codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code rate, can be encoded with low complexity due to its quasi-cyclic structure, and performs well when it is iteratively decoded with the sum-product algorithm. These properties of the LDPC code are quite suitable for applications in future wireless local area networks.
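
    A minimal sketch of the difference-family idea: when the first row of a circulant is supported on a set whose pairwise differences are all distinct, no two columns overlap in two rows, so the 4-cycle-free property comes for free. The {0, 1, 3} block mod 7 below (the Fano plane) is a textbook example, not the paper's construction:

      import numpy as np

      def circulant_from_block(block, v):
          """Circulant matrix whose first row has ones at positions in `block`."""
          first = np.zeros(v, dtype=int)
          first[list(block)] = 1
          return np.array([np.roll(first, i) for i in range(v)])

      # {0, 1, 3} mod 7 has all pairwise differences distinct, so the
      # circulant below has girth >= 6 (no 4-cycles) at column weight 3.
      H = circulant_from_block({0, 1, 3}, 7)
      overlap = H.T @ H
      np.fill_diagonal(overlap, 0)
      print((overlap <= 1).all())  # True: no two columns share two rows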

  6. A Golay complementary TS-based symbol synchronization scheme in variable rate LDPC-coded MB-OFDM UWBoF system

    Science.gov (United States)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin

    2015-09-01

    In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency in the variable rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10^-3, the experimental results show that the short block length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for a code rate of 62.5%, 75% and 87.5%, respectively.
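
    The property that makes a Golay complementary pair attractive as a training sequence is that the aperiodic autocorrelation sidelobes of the two sequences cancel exactly. A short numpy demonstration (length 8 for brevity; the paper's actual TS parameters are not reproduced here):

      import numpy as np

      def aperiodic_autocorr(x):
          n = len(x)
          return np.array([np.sum(x[: n - k] * x[k:]) for k in range(n)])

      # Recursive construction of a Golay complementary pair of length 8:
      # (a, b) -> (a|b, a|-b) preserves complementarity.
      a, b = np.array([1]), np.array([1])
      for _ in range(3):
          a, b = np.concatenate([a, b]), np.concatenate([a, -b])

      # Sidelobes cancel exactly, giving the sharp, unambiguous sync peak.
      print(aperiodic_autocorr(a) + aperiodic_autocorr(b))  # [16 0 0 0 0 0 0 0]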

  7. Continuous-variable quantum erasure correcting code

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Sabuncu, Metin; Huck, Alexander

    2010-01-01

    We experimentally demonstrate a continuous variable quantum erasure-correcting code, which protects coherent states of light against complete erasure. The scheme encodes two coherent states into a bi-party entangled state, and the resulting 4-mode code is conveyed through 4 independent channels...

  8. Continuously variable focal length lens

    Science.gov (United States)

    Adams, Bernhard W; Chollet, Matthieu C

    2013-12-17

    A material preferably in crystal form having a low atomic number such as beryllium (Z=4) provides for the focusing of x-rays in a continuously variable manner. The material is provided with plural spaced curvilinear, optically matched slots and/or recesses through which an x-ray beam is directed. The focal length of the material may be decreased or increased by increasing or decreasing, respectively, the number of slots (or recesses) through which the x-ray beam is directed, while fine tuning of the focal length is accomplished by rotation of the material so as to change the path length of the x-ray beam through the aligned cylindrical slots. X-ray analysis of a fixed point in a solid material may be performed by scanning the energy of the x-ray beam while rotating the material to maintain the beam's focal point at a fixed point in the specimen undergoing analysis.
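
    The scaling can be illustrated with the standard compound-refractive-lens estimate f ≈ R/(2Nδ): each added slot contributes more refracting surface, so the focal length falls as N grows. The radius and refractive decrement below are assumed round numbers, not values from the patent:

      # f ~ R / (2 N delta): focal length of N stacked refractive surfaces of
      # apex radius R in a material with refractive-index decrement delta.
      R = 0.5e-3        # apex radius, 0.5 mm (assumed)
      delta = 3.4e-6    # beryllium near 10 keV (approximate)
      for N in (1, 5, 10, 30):
          print(f"N={N:2d}  f = {R / (2 * N * delta):6.2f} m")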

  9. An algorithm for the design and tuning of RF accelerating structures with variable cell lengths

    Science.gov (United States)

    Lal, Shankar; Pant, K. K.

    2018-05-01

    An algorithm is proposed for the design of a π mode standing wave buncher structure with variable cell lengths. It employs a two-parameter, multi-step approach for the design of the structure with desired resonant frequency and field flatness. The algorithm, along with analytical scaling laws for the design of the RF power coupling slot, makes it possible to accurately design the structure employing a freely available electromagnetic code like SUPERFISH. To compensate for machining errors, a tuning method has been devised to achieve the desired RF parameters for the structure, and it has been qualified by the successful tuning of a 7-cell buncher to a π mode frequency of 2856 MHz with the desired field flatness. The algorithm and tuning method have demonstrated the feasibility of developing an S-band accelerating structure with desired RF parameters under a relatively relaxed machining tolerance of ∼25 μm. This paper discusses the algorithm for the design and tuning of an RF accelerating structure with variable cell lengths.

  10. Fixed capacity and variable member grouping assignment of orthogonal variable spreading factor code tree for code division multiple access networks

    Directory of Open Access Journals (Sweden)

    Vipin Balyan

    2014-08-01

    Orthogonal variable spreading factor (OVSF) codes are used in the downlink to maintain orthogonality between different channels and to handle new calls arriving in the system. A period of operation leads to fragmentation of vacant codes, which causes the code blocking problem. The assignment scheme proposed in this paper is not affected by fragmentation, as the fragmentation is generated by the scheme itself. In this scheme, the code tree is divided into groups whose capacity is fixed and whose number of members (codes) is variable. A group with the maximum number of busy members is used for assignment; this leads to fragmentation of busy groups around the code tree and compactness within a group. The proposed scheme is evaluated and compared with other schemes using parameters such as code blocking probability and call establishment delay. Through simulations it has been demonstrated that the proposed scheme not only adequately reduces the code blocking probability, but also requires significantly less time to locate a vacant code for assignment, which makes it suitable for real-time calls.
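
    For context, OVSF codes form a binary tree in which each code spawns two children, (c, c) and (c, -c); codes at the same spreading factor are orthogonal, while a code and its descendants are not, which is what makes assignment and blocking nontrivial. A minimal generation-and-check sketch:

      import numpy as np

      def ovsf_tree(depth):
          """Generate OVSF codes level by level: c spawns (c, c) and (c, -c)."""
          levels = [[np.array([1])]]
          for _ in range(depth):
              nxt = []
              for c in levels[-1]:
                  nxt.append(np.concatenate([c, c]))
                  nxt.append(np.concatenate([c, -c]))
              levels.append(nxt)
          return levels

      levels = ovsf_tree(3)
      sf8 = levels[3]                 # 8 codes with spreading factor 8
      print(np.array([[int(np.dot(x, y)) for y in sf8] for x in sf8]))
      # Diagonal 8, zeros elsewhere: same-SF codes are mutually orthogonal.
      # A code is blocked if an ancestor or descendant is assigned, since a
      # parent's chip sequence is just its child's sequence repeated.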

  11. An RNA-Seq strategy to detect the complete coding and non-coding transcriptome including full-length imprinted macro ncRNAs.

    Directory of Open Access Journals (Sweden)

    Ru Huang

    Full Text Available Imprinted macro non-protein-coding (nc RNAs are cis-repressor transcripts that silence multiple genes in at least three imprinted gene clusters in the mouse genome. Similar macro or long ncRNAs are abundant in the mammalian genome. Here we present the full coding and non-coding transcriptome of two mouse tissues: differentiated ES cells and fetal head using an optimized RNA-Seq strategy. The data produced is highly reproducible in different sequencing locations and is able to detect the full length of imprinted macro ncRNAs such as Airn and Kcnq1ot1, whose length ranges between 80-118 kb. Transcripts show a more uniform read coverage when RNA is fragmented with RNA hydrolysis compared with cDNA fragmentation by shearing. Irrespective of the fragmentation method, all coding and non-coding transcripts longer than 8 kb show a gradual loss of sequencing tags towards the 3' end. Comparisons to published RNA-Seq datasets show that the strategy presented here is more efficient in detecting known functional imprinted macro ncRNAs and also indicate that standardization of RNA preparation protocols would increase the comparability of the transcriptome between different RNA-Seq datasets.

  12. Increased length of inpatient stay and poor clinical coding: audit of patients with diabetes.

    Science.gov (United States)

    Daultrey, Harriet; Gooday, Catherine; Dhatariya, Ketan

    2011-11-01

    People with diabetes stay in hospital for longer than those without diabetes for similar conditions. Clinical coding is poor across all specialties. Inpatients with diabetes often have unrecognized foot problems. We wanted to look at the relationships between these factors. A single-day audit, looking at the prevalence of diabetes in all adult inpatients, and also looking at their feet to find out how many were high-risk or had existing problems. A 998-bed university teaching hospital. All adult inpatients. (a) To see if patients with diabetes and foot problems were in hospital for longer than the national average length of stay compared with national data; (b) to see if there were people in hospital with acute foot problems who were not known to the specialist diabetic foot team; and (c) to assess the accuracy of clinical coding. We identified 110 people with diabetes. However, discharge coding data for inpatients on that day showed 119 people with diabetes. Length of stay (LOS, mean ± SD) was substantially higher for those with diabetes than for those without, at 22.39 (22.26) days vs. 11.68 (6.46) days (P …). Clinical coding was poor, with some people who had been identified as having diabetes on the audit not coded as such on discharge. Clinical coding - which is dependent on discharge summaries - poorly reflects diagnoses. Additionally, length of stay is significantly longer than previous estimates. The discrepancy between coding and diagnosis needs addressing by increasing the levels of awareness and education of coders and physicians. We suggest that our data be used by healthcare planners when deciding on future tariffs.

  13. Joint variable frame rate and length analysis for speech recognition under adverse conditions

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Kraljevski, Ivan

    2014-01-01

    This paper presents a method that combines variable frame length and rate analysis for speech recognition in noisy environments, together with an investigation of the effect of different frame lengths on speech recognition performance. The method adopts frame selection using an a posteriori signal-to-noise ratio (SNR) weighted energy distance and increases the length of the selected frames, according to the number of non-selected preceding frames. It assigns a higher frame rate and a normal frame length to a rapidly changing and high SNR region of a speech signal, and a lower frame rate and an increased frame length to a steady or low SNR region. The speech recognition results show that the proposed variable frame rate and length method outperforms fixed frame rate and length analysis, as well as standalone variable frame rate analysis, in terms of noise-robustness.
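
    A schematic reading of the selection rule, with frame energy as a stand-in for the full feature distance and with the weighting and threshold invented for the toy signal; this is an illustration of the idea, not the authors' implementation:

      import numpy as np

      def select_frames(frames, noise_power, threshold=0.5):
          """Keep a frame when its SNR-weighted energy distance from the
          previous frame is large; skipped frames would be absorbed into
          longer analysis frames."""
          energies = np.array([np.mean(f ** 2) for f in frames])
          snr = np.maximum(energies / noise_power - 1.0, 1e-3)  # a posteriori SNR proxy
          weights = np.log10(1.0 + snr)
          kept = [0]
          for i in range(1, len(frames)):
              dist = weights[i] * abs(np.log(energies[i] / energies[i - 1]))
              if dist > threshold:
                  kept.append(i)
          return kept

      rng = np.random.default_rng(0)
      frames = [rng.normal(0, 5.0 if i % 10 == 0 else 1.0, 160) for i in range(50)]
      print(select_frames(frames, noise_power=1.0))  # onsets of the loud frames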

  14. Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal

    International Nuclear Information System (INIS)

    Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.; Razalli, Mohammad Shahrazel; Kadhim, Rasim A.

    2015-01-01

    Peak-to-average power ratio (PAPR) is a major drawback in OFDM communication. It drives the power amplifier into nonlinear operation, resulting in loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial Transmit Sequence (PTS) is an attractive scheme for this purpose. Judicious partitioning of the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Among the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With the aim of determining the effects of length variability of adjacent partitions, we investigated the performance of variable-length adjacent partitioning (VL-AP) and fixed-length adjacent partitioning in comparison with other partitioning schemes such as pseudorandom partitioning. Simulation results with different modulation and partitioning scenarios showed that fixed-length adjacent partitioning performed better than variable-length adjacent partitioning. As expected, simulation results showed slightly better performance for the pseudorandom partitioning technique compared with fixed- and variable-length adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes are still seen as favorable candidates for PAPR reduction.
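
    A compact numpy model of PTS with fixed-length adjacent partitioning (the block count, phase alphabet and frame size are example choices):

      import numpy as np
      from itertools import product

      def papr_db(x):
          return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

      def pts_adjacent(X, n_parts=4, phases=(1, -1)):
          """Adjacent-partition PTS: split the subcarrier frame into contiguous
          blocks, then exhaustively search per-block phase rotations."""
          N = len(X)
          parts = np.zeros((n_parts, N), dtype=complex)
          step = N // n_parts
          for v in range(n_parts):                    # fixed-length adjacent blocks
              parts[v, v * step : (v + 1) * step] = X[v * step : (v + 1) * step]
          time_parts = np.fft.ifft(parts, axis=1)     # IFFT is linear in the parts
          best = None
          for phi in product(phases, repeat=n_parts):
              cand = np.dot(phi, time_parts)
              if best is None or papr_db(cand) < papr_db(best):
                  best = cand
          return best

      rng = np.random.default_rng(1)
      X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 64)   # QPSK frame
      print(f"original {papr_db(np.fft.ifft(X)):.2f} dB -> PTS {papr_db(pts_adjacent(X)):.2f} dB")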

  15. Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal

    Energy Technology Data Exchange (ETDEWEB)

    Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.; Razalli, Mohammad Shahrazel; Kadhim, Rasim A. [School of Computer and Communication Engineering, Universiti Malaysia Perlis, 02600 Arau, Perlis (Malaysia)

    2015-05-15

    Peak-to-average power ratio (PAPR) is a major drawback in OFDM communication. It drives the power amplifier into nonlinear operation, resulting in loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial Transmit Sequence (PTS) is an attractive scheme for this purpose. Judicious partitioning of the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Among the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With the aim of determining the effects of length variability of adjacent partitions, we investigated the performance of variable-length adjacent partitioning (VL-AP) and fixed-length adjacent partitioning in comparison with other partitioning schemes such as pseudorandom partitioning. Simulation results with different modulation and partitioning scenarios showed that fixed-length adjacent partitioning performed better than variable-length adjacent partitioning. As expected, simulation results showed slightly better performance for the pseudorandom partitioning technique compared with fixed- and variable-length adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes are still seen as favorable candidates for PAPR reduction.

  16. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable-length codes for joint source-channel decoding. In this paper, we improve the performance of residual-redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  17. Statistical screening of input variables in a complex computer code

    International Nuclear Information System (INIS)

    Krieger, T.J.

    1982-01-01

    A method is presented for "statistical screening" of input variables in a complex computer code. The object is to determine the "effective" or important input variables by estimating the relative magnitudes of their associated sensitivity coefficients. This is accomplished by performing a numerical experiment consisting of a relatively small number of computer runs with the code followed by a statistical analysis of the results. A formula for estimating the sensitivity coefficients is derived. Reference is made to an earlier work in which the method was applied to a complex reactor code with good results.
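
    The essence of such screening can be sketched in a few lines: run the code on a small random design, regress the output on the inputs, and rank inputs by coefficient magnitude. The toy "code" below is a stand-in, and the estimator is ordinary least squares rather than the paper's formula:

      import numpy as np

      def screen_inputs(code, n_inputs, n_runs=64, seed=0):
          """Estimate sensitivity coefficients by linear regression on a small
          random design, then rank inputs by |coefficient|."""
          rng = np.random.default_rng(seed)
          X = rng.uniform(-1.0, 1.0, (n_runs, n_inputs))
          y = np.array([code(x) for x in X])
          A = np.column_stack([np.ones(n_runs), X])
          coef = np.linalg.lstsq(A, y, rcond=None)[0][1:]   # drop the intercept
          return np.argsort(-np.abs(coef)), coef

      # Stand-in for an expensive code: only inputs 0 and 3 really matter.
      def toy_code(x):
          return 4.0 * x[0] - 2.5 * x[3] + 0.1 * x[1] + 0.01 * np.sin(x[2])

      order, coef = screen_inputs(toy_code, n_inputs=5)
      print(order)   # the effective variables surface first, e.g. [0 3 1 ...]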

  18. Variable Dimension Trellis-Coded Quantization of Sinusoidal Parameters

    DEFF Research Database (Denmark)

    Larsen, Morten Holm; Christensen, Mads G.; Jensen, Søren Holdt

    2008-01-01

    In this letter, we propose joint quantization of the parameters of a set of sinusoids based on the theory of trellis-coded quantization. A particular advantage of this approach is that it allows for joint quantization of a variable number of sinusoids, which is particularly relevant in variable...

  19. Chaotic behaviour of a pendulum with variable length

    Energy Technology Data Exchange (ETDEWEB)

    Bartuccelli, M; Christiansen, P L; Muto, V; Soerensen, M P; Pedersen, N F

    1987-08-01

    The Melnikov function for the prediction of Smale horseshoe chaos is applied to a driven damped pendulum with variable length. It is shown that, depending on the parameters, this dynamical system undergoes heteroclinic bifurcations which are the source of the unstable chaotic motion. The analytical results are illustrated by new numerical simulations. Furthermore, using the averaging theorem, the stability of the subharmonics is studied.

  20. An Assessment of the Length and Variability of Mercury's Magnetotail

    Science.gov (United States)

    Milan, S. E.; Slavin, J. A.

    2011-01-01

    We employ Mariner 10 measurements of the interplanetary magnetic field in the vicinity of Mercury to estimate the rate of magnetic reconnection between the interplanetary magnetic field and the Hermean magnetosphere. We derive a time-series of the open magnetic flux in Mercury's magnetosphere, from which we can deduce the length of the magnetotail. The length of the magnetotail is shown to be highly variable, with open field lines stretching between 15 R_H and 850 R_H downstream of the planet (median 150 R_H). Scaling laws allow the tail length at perihelion to be deduced from the aphelion Mariner 10 observations.

  1. Comparisons between Arabidopsis thaliana and Drosophila melanogaster in relation to Coding and Noncoding Sequence Length and Gene Expression

    Directory of Open Access Journals (Sweden)

    Rachel Caldwell

    2015-01-01

    There is a continuing interest in the analysis of gene architecture and gene expression to determine the relationship that may exist. Advances in high-quality sequencing technologies and large-scale resource datasets have increased the understanding of relationships and cross-referencing of expression data to the large genome data. Although a negative correlation between expression level and gene (especially transcript) length has been generally accepted, there have been some conflicting results arising from the literature concerning the impacts of different regions of genes, and the underlying reason is not well understood. This research aims to apply quantile regression techniques for statistical analysis of coding and noncoding sequence length and gene expression data in the plant Arabidopsis thaliana and the fruit fly Drosophila melanogaster, to determine if a relationship exists and if there is any variation or similarity between these species. The quantile regression analysis found that the coding sequence length and gene expression correlations varied, and similarities emerged for the noncoding sequence length (5' and 3' UTRs) between the animal and plant species. In conclusion, the information described in this study provides the basis for further exploration into gene regulation with regard to coding and noncoding sequence length.
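
    A minimal illustration of the technique on synthetic data (the data-generating model is invented; statsmodels' QuantReg is one readily available implementation):

      import numpy as np
      import statsmodels.api as sm

      # Synthetic stand-in: expression declines with transcript length, with
      # spread that grows for long genes -- the kind of pattern where quantile
      # regression is more informative than a single least-squares fit.
      rng = np.random.default_rng(0)
      length = rng.uniform(0.5, 10.0, 500)              # transcript length (kb)
      expression = 8.0 - 0.4 * length + rng.normal(0, 0.3 * length)

      X = sm.add_constant(length)
      for q in (0.1, 0.5, 0.9):
          slope = sm.QuantReg(expression, X).fit(q=q).params[1]
          print(f"quantile {q}: slope {slope:.2f}")     # slopes differ across quantiles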

  2. Components of genetic variability of ear length of silage maize

    Directory of Open Access Journals (Sweden)

    Sečanski Mile

    2006-01-01

    The objective of this study was to evaluate the following parameters of the ear length of silage maize: variability of inbred lines and their diallel hybrids, superior-parent heterosis, and genetic components of variability and heritability on the basis of a diallel set. The analysis of genetic variance shows that the additive component (D) was lower than the dominant (H1 and H2) genetic variances, while the frequency of dominant genes (u) for this trait was greater than the frequency of recessive genes (v). Furthermore, this is also confirmed by the dominant-to-recessive gene ratio in parental inbreds for ear length (Kd/Kr > 1), which is greater than unity during both investigation years. The calculated value of the average degree of dominance √(H1/D) is greater than unity, pointing to superdominance in the inheritance of this trait in both years of investigation, which is also confirmed by the results of Vr/Wr regression analysis of the inheritance of ear length. As the presence of non-allelic interaction was established, it is necessary to study the effects of epistasis, as it can have greater significance in certain hybrids. A greater value of dominant than additive variance resulted in high broad-sense heritability for ear length in both investigation years.

  3. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
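
    The Slepian-Wolf bound the authors derive against is, in the simplest memoryless binary case, just the conditional entropy H(X|Y). A small worked computation (the joint distribution is an invented example):

      import numpy as np

      # Slepian-Wolf: X can be sent at rate H(X|Y) when the decoder has side
      # information Y. Joint pmf of binary X (rows) and Y (columns), where Y
      # is X flipped with probability 0.1.
      p_xy = np.array([[0.45, 0.05],
                       [0.05, 0.45]])
      p_y = p_xy.sum(axis=0)
      h_x_given_y = -np.sum(p_xy * np.log2(p_xy / p_y))
      print(h_x_given_y)   # ~0.469 bits, vs H(X) = 1 bit without side information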

  4. Detecting Scareware by Mining Variable Length Instruction Sequences

    OpenAIRE

    Shahzad, Raja Khurram; Lavesson, Niklas

    2011-01-01

    Scareware is a recent type of malicious software that may pose financial and privacy-related threats to novice users. Traditional countermeasures, such as anti-virus software, require regular updates and often lack the capability of detecting novel (unseen) instances. This paper presents a scareware detection method that is based on the application of machine learning algorithms to learn patterns in extracted variable length opcode sequences derived from instruction sequences of binary files....
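
    A hypothetical feature-extraction step in the spirit of the method: enumerate variable-length opcode n-grams from a disassembled trace (the window sizes and the trace itself are invented):

      def opcode_ngrams(opcodes, min_n=2, max_n=4):
          """Count variable-length opcode subsequences as features."""
          feats = {}
          for n in range(min_n, max_n + 1):
              for i in range(len(opcodes) - n + 1):
                  gram = " ".join(opcodes[i : i + n])
                  feats[gram] = feats.get(gram, 0) + 1
          return feats

      trace = ["push", "mov", "call", "pop", "mov", "call", "ret"]
      print(sorted(opcode_ngrams(trace).items())[:5])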

  5. Variable Rate, Adaptive Transform Tree Coding Of Images

    Science.gov (United States)

    Pearlman, William A.

    1988-10-01

    A tree code, asymptotically optimal for stationary Gaussian sources and squared error distortion [2], is used to encode transforms of image sub-blocks. The variance spectrum of each sub-block is estimated and specified uniquely by a set of one-dimensional auto-regressive parameters. The expected distortion is set to a constant for each block and the rate is allowed to vary to meet the given level of distortion. Since the spectrum and rate are different for every block, the code tree differs for every block. Coding simulations for target block distortion of 15 and average block rate of 0.99 bits per pel (bpp) show that very good results can be obtained at high search intensities at the expense of high computational complexity. The results at the higher search intensities outperform a parallel simulation with quantization replacing tree coding. Comparative coding simulations also show that the reproduced image with variable block rate and average rate of 0.99 bpp has 2.5 dB less distortion than a similarly reproduced image with a constant block rate equal to 1.0 bpp.

  6. Variable disparity-motion estimation based fast three-view video coding

    Science.gov (United States)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based three-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. These proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of accuracy of disparity estimation and computational overhead. From experiments on the stereo sequences 'Pot Plant' and 'IVO', it is shown that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, and the processing times are 0.139 and 0.124 sec/frame, respectively.

  7. Variable Coding and Modulation Experiment Using NASA's Space Communication and Navigation Testbed

    Science.gov (United States)

    Downey, Joseph A.; Mortensen, Dale J.; Evans, Michael A.; Tollis, Nicholas S.

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed on the International Space Station provides a unique opportunity to evaluate advanced communication techniques in an operational system. The experimental nature of the Testbed allows for rapid demonstrations while using flight hardware in a deployed system within NASA's networks. One example is variable coding and modulation, which is a method to increase data-throughput in a communication link. This paper describes recent flight testing with variable coding and modulation over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Performance of the variable coding and modulation system is evaluated and compared to the capacity of the link, as well as standard NASA waveforms.

  8. Improved Design of Unequal Error Protection LDPC Codes

    Directory of Open Access Journals (Sweden)

    Sandberg Sara

    2010-01-01

    We propose an improved method for designing unequal error protection (UEP) low-density parity-check (LDPC) codes. The method is based on density evolution. The degree distribution with the best UEP properties is found, under the constraint that the threshold should not exceed the threshold of a non-UEP code plus some threshold offset. For different codeword lengths and different construction algorithms, we search for good threshold offsets for the UEP code design. The choice of the threshold offset is based on the average a posteriori variable node mutual information. Simulations reveal the counterintuitive result that the short-to-medium length codes designed with a suitable threshold offset all outperform the corresponding non-UEP codes in terms of average bit-error rate. The proposed codes are also compared to other UEP-LDPC codes found in the literature.

  9. Design of a new SI engine intake manifold with variable length plenum

    International Nuclear Information System (INIS)

    Ceviz, M.A.; Akin, M.

    2010-01-01

    This paper investigates the effects of intake plenum length/volume on the performance characteristics of a spark-ignited engine with electronically controlled fuel injectors. Previous work was carried out mainly on engines with carburetors, which produce a mixture desirable for combustion and dispatch the mixture to the intake manifold. The more stringent emission legislation has driven engine development towards concepts based on electronically controlled fuel injection rather than the use of carburetors. An engine with a multipoint fuel injection system using electronically controlled fuel injectors has an intake manifold in which only air flows, and the fuel is injected onto the intake valve. Since such intake manifolds transport mainly air, the supercharging effects of a variable length intake plenum will be different from those in a carbureted engine. Engine tests have been carried out with the aim of constituting a base study to design a new variable length intake manifold plenum. Engine performance characteristics such as brake torque, brake power, thermal efficiency and specific fuel consumption were taken into consideration to evaluate the effects of the variation in the length of the intake plenum. The results showed that varying the plenum length improves the engine performance characteristics, especially the fuel consumption at high load and low engine speeds, conditions typical of urban driving. According to the test results, the plenum length must be extended for low engine speeds and shortened as the engine speed increases. A system taking into account the results of the study was developed to adjust the intake plenum length.

  10. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    Science.gov (United States)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.

  11. Select injury-related variables are affected by stride length and foot strike style during running.

    Science.gov (United States)

    Boyer, Elizabeth R; Derrick, Timothy R

    2015-09-01

    Some frontal plane and transverse plane variables have been associated with running injury, but it is not known if they differ with foot strike style or as stride length is shortened. To identify if step width, iliotibial band strain and strain rate, positive and negative free moment, pelvic drop, hip adduction, knee internal rotation, and rearfoot eversion differ between habitual rearfoot and habitual mid-/forefoot strikers when running with both a rearfoot strike (RFS) and a mid-/forefoot strike (FFS) at 3 stride lengths. Controlled laboratory study. A total of 42 healthy runners (21 habitual rearfoot, 21 habitual mid-/forefoot) ran overground at 3.35 m/s with both an RFS and an FFS at their preferred stride lengths and 5% and 10% shorter. Variables did not differ between habitual groups. Step width was 1.5 cm narrower for FFS, widening to 0.8 cm as stride length shortened. Iliotibial band strain and strain rate did not differ between foot strikes but decreased as stride length shortened (0.3% and 1.8%/s, respectively). Pelvic drop was reduced 0.7° for FFS compared with RFS, and both pelvic drop and hip adduction decreased as stride length shortened (0.8° and 1.5°, respectively). Peak knee internal rotation was not affected by foot strike or stride length. Peak rearfoot eversion was not different between foot strikes but decreased 0.6° as stride length shortened. Peak positive free moment (normalized to body weight [BW] and height [h]) was not affected by foot strike or stride length. Peak negative free moment was -0.0038 BW·m/h greater for FFS and decreased -0.0004 BW·m/h as stride length shortened. The small decreases in most variables as stride length shortened were likely associated with the concomitant wider step width. RFS had slightly greater pelvic drop, while FFS had slightly narrower step width and greater negative free moment. Shortening one's stride length may decrease or at least not increase propensity for running injuries based on the variables...

  12. The effect of word length and other sublexical, lexical, and semantic variables on developmental reading deficits.

    Science.gov (United States)

    De Luca, Maria; Barca, Laura; Burani, Cristina; Zoccolotti, Pierluigi

    2008-12-01

    To examine the effect of word length and several sublexical, and lexico-semantic variables on the reading of Italian children with a developmental reading deficit. Previous studies indicated the role of word length in transparent orthographies. However, several factors that may interact with word length were not controlled for. Seventeen impaired and 34 skilled sixth-grade readers were presented words of different lengths, matched for initial phoneme, bigram frequency, word frequency, age of acquisition, and imageability. Participants were asked to read aloud, as quickly and as accurately as possible. Reaction times at the onset of pronunciation and mispronunciations were recorded. Impaired readers' reaction times indicated a marked effect of word length; in skilled readers, there was no length effect for short words but, rather, a monotonic increase from 6-letter words on. Regression analyses confirmed the role of word length and indicated the influence of word frequency (similar in impaired and skilled readers). No other variables predicted reading latencies. Word length differentially influenced word recognition in impaired versus skilled readers, irrespective of the action of (potentially interfering) sublexical, lexical, and semantic variables. It is proposed that the locus of the length effect is at a perceptual level of analysis. The independent influence of word frequency on the reading performance of both groups of participants indicates the sparing of lexical activation in impaired readers.

  13. Stochastic methods for uncertainty treatment of functional variables in computer codes: application to safety studies

    International Nuclear Information System (INIS)

    Nanty, Simon

    2015-01-01

    This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications have several common features. The first one is that the computer code inputs are functional and scalar variables, the functional ones being dependent. The second feature is that the probability distribution of the functional variables is known only through a sample of their realizations. The third feature, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology makes it possible both to model the dependency between the variables and to model their link to another variable, called a covariate, which could be, for instance, the output of the considered code. Then, we have developed an adaptation of a visualization tool for functional data, which makes it possible to visualize simultaneously the uncertainties and features of dependent functional variables. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model, or metamodel: a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the metamodel. Finally, a new approximation approach for expensive codes with functional outputs has been

  14. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  15. Variability of interconnected wind plants: correlation length and its dependence on variability time scale

    Science.gov (United States)

    St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.

    2015-04-01

    The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. But how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity for grid integration. To answer ‘how far is far enough,’ we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning different continents, durations, and time resolutions. First, one year of five-minute wind power generation data from 29 wind farms span 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate the dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25-2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. For high
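
    The core of the analysis described above is a simple pipeline: high-pass filter each deseasonalized wind-speed series at a chosen cut-off, then correlate every pair of sites as a function of their separation. The Python sketch below illustrates that pipeline under stated assumptions: speeds is a per-site array of hourly series and positions holds site coordinates in km; both names are hypothetical, and the filter settings are illustrative rather than those used in the study.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def highpass(x, cutoff_hours, samples_per_hour=1.0, order=4):
        """Zero-phase Butterworth high-pass; the cut-off is given as a period in
        hours (the period must exceed two samples for a valid normalized frequency)."""
        nyquist = 0.5 * samples_per_hour
        b, a = butter(order, (1.0 / cutoff_hours) / nyquist, btype="highpass")
        return filtfilt(b, a, x)

    def correlation_vs_distance(speeds, positions, cutoff_hours):
        """Pairwise correlations of high-passed series versus site separation."""
        filtered = [highpass(s, cutoff_hours) for s in speeds]
        dists, corrs = [], []
        for i in range(len(filtered)):
            for j in range(i + 1, len(filtered)):
                dists.append(np.linalg.norm(positions[i] - positions[j]))
                corrs.append(np.corrcoef(filtered[i], filtered[j])[0, 1])
        return np.array(dists), np.array(corrs)

    Repeating the correlation-versus-distance computation for each cut-off then yields the correlation length as a function of timescale.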

  16. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...

  17. FLASH: A finite element computer code for variably saturated flow

    International Nuclear Information System (INIS)

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A

  18. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
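
    The static Shannon code underlying this record assigns each symbol a codeword of length ceil(log2(1/p)) taken from the binary expansion of the cumulative probability, which is prefix-free when symbols are sorted by non-increasing probability. The sketch below shows only that static construction; the paper's dynamic variant additionally updates the code as symbol counts evolve and is not reproduced here.

    from math import ceil, log2

    def shannon_code(probs):
        """Static Shannon code: after sorting symbols by non-increasing
        probability, symbol i gets the first ceil(log2(1/p_i)) bits of the
        binary expansion of the cumulative probability F_i."""
        code, cum = {}, 0.0
        for sym, p in sorted(probs.items(), key=lambda kv: -kv[1]):
            length = max(1, ceil(log2(1.0 / p)))
            bits, frac = [], cum
            for _ in range(length):            # expand cum in binary, bit by bit
                frac *= 2
                bits.append(str(int(frac)))
                frac -= int(frac)
            code[sym] = "".join(bits)
            cum += p
        return code

    print(shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
    # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}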

  19. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  20. Topology optimization of a flexible multibody system with variable-length bodies described by ALE–ANCF

    DEFF Research Database (Denmark)

    Sun, Jialiang; Tian, Qiang; Hu, Haiyan

    2018-01-01

    Recent years have witnessed the application of topology optimization to flexible multibody systems (FMBS) so as to enhance their dynamic performances. In this study, an explicit topology optimization approach is proposed for an FMBS with variable-length bodies via the moving morphable components (MMC). Using the arbitrary Lagrangian–Eulerian (ALE) formulation, the thin plate elements of the absolute nodal coordinate formulation (ANCF) are used to describe the platelike bodies with variable length. For the thin plate element of ALE–ANCF, the elastic force and additional inertial force, as well...

  1. Ensemble Weight Enumerators for Protograph LDPC Codes

    Science.gov (United States)

    Divsalar, Dariush

    2006-01-01

    Recently LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes which have minimum distance that grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived results on ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  2. Adaptable recursive binary entropy coding technique

    Science.gov (United States)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

    We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.

  3. 2D hydrodynamic simulations of a variable length gas target for density down-ramp injection of electrons into a laser wakefield accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Kononenko, O., E-mail: olena.kononenko@desy.de [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Lopes, N.C.; Cole, J.M.; Kamperidis, C.; Mangles, S.P.D.; Najmudin, Z. [The John Adams Institute for Accelerator Science, The Blackett Laboratory, Imperial College London, SW7 2BZ UK (United Kingdom); Osterhoff, J. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Poder, K. [The John Adams Institute for Accelerator Science, The Blackett Laboratory, Imperial College London, SW7 2BZ UK (United Kingdom); Rusby, D.; Symes, D.R. [Central Laser Facility, STFC Rutherford Appleton Laboratory, Chilton, Didcot OX11 0QX (United Kingdom); Warwick, J. [Queens University Belfast, North Ireland (United Kingdom); Wood, J.C. [The John Adams Institute for Accelerator Science, The Blackett Laboratory, Imperial College London, SW7 2BZ UK (United Kingdom); Palmer, C.A.J. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany)

    2016-09-01

    In this work, two-dimensional (2D) hydrodynamic simulations of a variable length gas cell were performed using the open source fluid code OpenFOAM. The gas cell was designed to study controlled injection of electrons into a laser-driven wakefield at the Astra Gemini laser facility. The target consists of two compartments: an accelerator and an injector section connected via an aperture. A sharp transition between the peak and plateau density regions in the injector and accelerator compartments, respectively, was observed in simulations with various inlet pressures. The fluid simulations indicate that the length of the down-ramp connecting the sections depends on the aperture diameter, as does the density drop outside the entrance and the exit cones. Further studies showed that increasing the inlet pressure leads to turbulence and strong fluctuations in density along the axial profile during target filling and, consequently, is expected to negatively impact the accelerator stability.

  4. Phenotypic and genotypic variability of disc flower corolla length and nectar content in sunflower

    Directory of Open Access Journals (Sweden)

    Joksimović Jovan

    2003-01-01

    Full Text Available The nectar content and disc flower corolla length are the two most important parameters of attractiveness to pollinators in sunflower. The phenotypic and genotypic variability of these two traits was studied in four commercially important hybrids and their parental components in a trial with three fertilizer doses over two years. The results showed that, looking at individual genotypes, the variability of disc flower corolla length was affected the most by year (85.38-97.46%). As the study years were extremely different, the phenotypic variance of the hybrids and parental components was calculated for each year separately. In such conditions, looking at all of the crossing combinations, the largest contribution to the phenotypic variance of the corolla length was that of genotype: 57.27-61.11% (NS-H-45); 64.51-84.84% (Velja); 96.74-97.20% (NS-H-702); and 13.92-73.17% (NS-H-111). A similar situation was observed for the phenotypic variability of nectar content, where genotype also had the largest influence, namely 39.77-48.25% in NS-H-45; 39.06-42.51% in Velja; 31.97-72.36% in NS-H-702; and 62.13-94.96% in NS-H-111.

  5. P-Link: A method for generating multicomponent cytochrome P450 fusions with variable linker length

    DEFF Research Database (Denmark)

    Belsare, Ketaki D.; Ruff, Anna Joelle; Martinez, Ronny

    2014-01-01

    Fusion protein construction is a widely employed biochemical technique, especially when it comes to multi-component enzymes such as cytochrome P450s. Here we describe a novel method for generating fusion proteins with variable linker lengths, protein fusion with variable linker insertion (P...

  6. Variability and trends in dry day frequency and dry event length in the southwestern United States

    Science.gov (United States)

    McCabe, Gregory J.; Legates, David R.; Lins, Harry F.

    2010-01-01

    Daily precipitation data from 22 National Weather Service first-order weather stations in the southwestern United States for water years 1951 through 2006 are used to examine variability and trends in the frequency of dry days and dry event length. Dry events lasting a minimum of 10 and of 20 consecutive days with daily precipitation below 2.54 mm are analyzed. For water years and cool seasons (October through March), most sites indicate negative trends in dry event length (i.e., dry event durations are becoming shorter). For the warm season (April through September), most sites also indicate negative trends; however, more sites indicate positive trends in dry event length for the warm season than for water years or cool seasons. The larger number of sites indicating positive trends in dry event length during the warm season is due to a series of dry warm seasons near the end of the 20th century and the beginning of the 21st century. Overall, a large portion of the variability in dry event length is attributable to variability of the El Niño–Southern Oscillation, especially for water years and cool seasons. Our results are consistent with analyses of trends in discharge for sites in the southwestern United States, an increased frequency of El Niño events, and positive trends in precipitation in the southwestern United States.
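
    Operationally, a dry event is just a run of consecutive days below the 2.54 mm threshold that reaches the 10- or 20-day minimum. A minimal sketch of that run-length extraction, with the threshold defaults taken from the record above and a fabricated series for the example:

    import numpy as np

    def dry_event_lengths(precip_mm, wet_threshold=2.54, min_days=10):
        """Lengths of runs of consecutive days with precipitation below
        wet_threshold, keeping only runs of at least min_days (10 or 20 here)."""
        lengths, run = [], 0
        for is_dry in np.asarray(precip_mm) < wet_threshold:
            if is_dry:
                run += 1
            else:
                if run >= min_days:
                    lengths.append(run)
                run = 0
        if run >= min_days:                      # a dry run may end the record
            lengths.append(run)
        return lengths

    # A 12-day dry spell, one wet day, then a 15-day dry spell:
    print(dry_event_lengths([0.0] * 12 + [10.0] + [1.0] * 15))   # [12, 15]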

  7. Performance and emission characteristics of LPG powered four stroke SI engine under variable stroke length and compression ratio

    International Nuclear Information System (INIS)

    Ozcan, Hakan; Yamin, Jehad A.A.

    2008-01-01

    A computer simulation of a variable stroke length, LPG fuelled, four stroke, single cylinder, water-cooled spark ignition engine was done. The engine capacity was varied by varying the stroke length of the engine, which also changed its compression ratio. The simulation model developed was verified with experimental results from the literature for both constant and variable stroke engines. The performance of the engine was simulated at each stroke length/compression ratio combination. The simulation results clearly indicate the advantages and utility of variable stroke engines in terms of fuel economy and power. Using the variable stroke technique has significantly improved the engine's performance and emission characteristics within the range studied. The brake torque and power have registered an increase of about 7-54% at low speed and 7-57% at high speed relative to the original engine design, for all stroke lengths and engine speeds studied. The brake specific fuel consumption has registered variations from a reduction of about 6% to an increase of about 3% at low speed, and from a reduction of about 6% to an increase of about 8% at high speed, relative to the original engine design and for all stroke lengths and engine speeds studied. On the other hand, an increase of pollutants of about 0.65-2% occurred at low speed. Larger stroke lengths resulted in a reduction of the pollutant level of about 1.5% at higher speeds; at lower stroke lengths, on the other hand, an increase of about 2% occurred. Larger stroke lengths resulted in increased exhaust temperature and, hence, make the exhaust valve work under high temperature
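
    The coupling between stroke length and compression ratio mentioned above is simple geometry: with a fixed clearance volume, displacement grows linearly with stroke, so the compression ratio rises with it. A worked sketch of that arithmetic; all bore, stroke, and clearance values below are hypothetical, not the engine of the study:

    from math import pi

    def engine_geometry(bore_mm, stroke_mm, clearance_cc):
        """Single-cylinder displacement (cc) and compression ratio, assuming the
        clearance volume stays fixed as the stroke is varied."""
        vd_cc = (pi / 4.0) * (bore_mm / 10.0) ** 2 * (stroke_mm / 10.0)
        cr = (vd_cc + clearance_cc) / clearance_cc
        return vd_cc, cr

    for stroke in (60.0, 70.0, 80.0):            # hypothetical stroke lengths, mm
        vd, cr = engine_geometry(bore_mm=76.0, stroke_mm=stroke, clearance_cc=45.0)
        print(f"stroke {stroke:.0f} mm -> displacement {vd:.0f} cc, CR {cr:.1f}:1")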

  8. Research On Variable-Length Transfer Delay and Delayed Signal Cancellation Based PLLs

    DEFF Research Database (Denmark)

    Golestan, Saeed; Guerrero, Josep M.; Quintero, Juan Carlos Vasquez

    2018-01-01

    large frequency drifts are anticipated and a high accuracy is required. To the best of authors' knowledge, the small-signal modeling of a variable-length delay-based PLL has not yet been conducted. The main aim of this paper is to cover this gap. The tuning procedure and analysis of these PLLs...

  9. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is symmetric, positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly well suited to big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
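
    To make the idea concrete, the sketch below extracts the LZW code words (dictionary phrases) from a sequence and compares two sequences by the overlap of their phrase sets. This toy Jaccard overlap is only meant to convey the intuition; the published LZW-Kernel uses its own, kernel-valid normalization, and the real implementation is the C code linked above.

    def lzw_phrases(seq):
        """Set of code words (dictionary phrases) emitted by one LZW pass."""
        dictionary = {c for c in seq}            # initial alphabet: symbols in seq
        phrases, w = set(), ""
        for c in seq:
            if w + c in dictionary:
                w += c                           # keep extending the match
            else:
                phrases.add(w)                   # emit the code word for w
                dictionary.add(w + c)            # grow the dictionary
                w = c
        phrases.add(w)
        return phrases

    def lzw_overlap(a, b):
        """Toy Jaccard overlap of LZW code words, for intuition only."""
        pa, pb = lzw_phrases(a), lzw_phrases(b)
        return len(pa & pb) / len(pa | pb)

    print(lzw_overlap("MKVLAAGMKVLA", "MKVLSAGMKVLS"))   # high for similar sequences
    print(lzw_overlap("MKVLAAGMKVLA", "MKVLAAGMKVLA"))   # 1.0 for self-comparison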

  10. An integrated PCR colony hybridization approach to screen cDNA libraries for full-length coding sequences.

    Science.gov (United States)

    Pollier, Jacob; González-Guzmán, Miguel; Ardiles-Diaz, Wilson; Geelen, Danny; Goossens, Alain

    2011-01-01

    cDNA-Amplified Fragment Length Polymorphism (cDNA-AFLP) is a commonly used technique for genome-wide expression analysis that does not require prior sequence knowledge. Typically, quantitative expression data and sequence information are obtained for a large number of differentially expressed gene tags. However, most of the gene tags do not correspond to full-length (FL) coding sequences, which is a prerequisite for subsequent functional analysis. A medium-throughput screening strategy, based on integration of polymerase chain reaction (PCR) and colony hybridization, was developed that allows in parallel screening of a cDNA library for FL clones corresponding to incomplete cDNAs. The method was applied to screen for the FL open reading frames of a selection of 163 cDNA-AFLP tags from three different medicinal plants, leading to the identification of 109 (67%) FL clones. Furthermore, the protocol allows for the use of multiple probes in a single hybridization event, thus significantly increasing the throughput when screening for rare transcripts. The presented strategy offers an efficient method for the conversion of incomplete expressed sequence tags (ESTs), such as cDNA-AFLP tags, to FL-coding sequences.

  11. Variability in interhospital trauma data coding and scoring: A challenge to the accuracy of aggregated trauma registries.

    Science.gov (United States)

    Arabian, Sandra S; Marcus, Michael; Captain, Kevin; Pomphrey, Michelle; Breeze, Janis; Wolfe, Jennefer; Bugaev, Nikolay; Rabinovici, Reuven

    2015-09-01

    Analyses of data aggregated in state and national trauma registries provide the platform for clinical, research, development, and quality improvement efforts in trauma systems. However, the interhospital variability and accuracy in data abstraction and coding have not yet been directly evaluated. This multi-institutional, Web-based, anonymous study examines interhospital variability and accuracy in data coding and scoring by registrars. Eighty-two American College of Surgeons (ACS)/state-verified Level I and II trauma centers were invited to code different data elements, including diagnostic, procedure, and Abbreviated Injury Scale (AIS) codes as well as selected National Trauma Data Bank definitions, for the same fictitious case. Variability and accuracy in data entries were assessed by the maximal percent agreement among the registrars for the tested data elements, and 95% confidence intervals were computed to compare this level of agreement to the ideal value of 100%. Variability and accuracy in all elements were compared (χ² testing) based on Trauma Quality Improvement Program (TQIP) membership, level of trauma center, ACS verification, and registrar's certifications. Fifty registrars (61%) completed the survey. The overall accuracy for all tested elements was 64%. Variability was noted in all examined parameters except for the place-of-occurrence code in all groups and the lower extremity AIS code in Level II trauma centers and in the Certified Specialist in Trauma Registry- and Certified Abbreviated Injury Scale Specialist-certified registrar groups. No differences in variability were noted when groups were compared based on TQIP membership, level of center, ACS verification, and registrar's certifications, except for prehospital Glasgow Coma Scale (GCS), where TQIP respondents agreed more than non-TQIP centers (p = 0.004). There is variability and inaccuracy in interhospital data coding and scoring of injury information. This finding casts doubt on the

  12. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  13. Importance of Viral Sequence Length and Number of Variable and Informative Sites in Analysis of HIV Clustering.

    Science.gov (United States)

    Novitsky, Vlad; Moyo, Sikhulile; Lei, Quanhong; DeGruttola, Victor; Essex, M

    2015-05-01

    To improve the methodology of HIV cluster analysis, we addressed how analysis of HIV clustering is associated with parameters that can affect the outcome of viral clustering. The extent of HIV clustering and tree certainty was compared between 401 HIV-1C near full-length genome sequences and subgenomic regions retrieved from the LANL HIV Database. Sliding window analysis was based on 99 windows of 1,000 bp and 45 windows of 2,000 bp. Potential associations between the extent of HIV clustering and sequence length and the number of variable and informative sites were evaluated. The near full-length genome HIV sequences showed the highest extent of HIV clustering and the highest tree certainty. At the bootstrap threshold of 0.80 in maximum likelihood (ML) analysis, 58.9% of near full-length HIV-1C sequences but only 15.5% of partial pol sequences (ViroSeq) were found in clusters. Among HIV-1 structural genes, pol showed the highest extent of clustering (38.9% at a bootstrap threshold of 0.80), although it was significantly lower than in the near full-length genome sequences. The extent of HIV clustering was significantly higher for sliding windows of 2,000 bp than 1,000 bp. We found a strong association between the sequence length and proportion of HIV sequences in clusters, and a moderate association between the number of variable and informative sites and the proportion of HIV sequences in clusters. In HIV cluster analysis, the extent of detectable HIV clustering is directly associated with the length of viral sequences used, as well as the number of variable and informative sites. Near full-length genome sequences could provide the most informative HIV cluster analysis. Selected subgenomic regions with a high extent of HIV clustering and high tree certainty could also be considered as a second choice.
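
    The "variable and informative sites" the study relates to clustering are standard alignment statistics: a column is variable if it holds more than one character state, and parsimony-informative if at least two states each occur in at least two sequences. A minimal sketch of counting both; the alignment shown is fabricated for illustration, not HIV data:

    from collections import Counter

    def site_counts(alignment):
        """Count variable and parsimony-informative columns in an alignment
        (a list of equal-length sequences); gaps '-' are ignored per column."""
        variable = informative = 0
        for col in zip(*alignment):
            counts = Counter(b for b in col if b != "-")
            if len(counts) > 1:
                variable += 1
                # informative: at least 2 states, each in at least 2 sequences
                if sum(1 for n in counts.values() if n >= 2) >= 2:
                    informative += 1
        return variable, informative

    aln = ["ACGTACGT",
           "ACGTACGA",
           "ACCTACGA",
           "ACCTACGT"]
    print(site_counts(aln))   # (2, 2): columns 3 and 8 vary and are informative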

  14. Improvement of Secret Image Invisibility in Circulation Image with Dyadic Wavelet Based Data Hiding with Run-Length Coded Secret Images of Which Location of Codes are Determined with Random Number

    OpenAIRE

    Kohei Arai; Yuji Yamada

    2011-01-01

    An attempt is made to improve secret image invisibility in circulation images using dyadic wavelet based data hiding with run-length coded secret images whose code locations are determined by random numbers. Through experiments, it is confirmed that the secret images are almost invisible in the circulation images. The robustness of the proposed data hiding method against data compression of the circulation images is also discussed. Data hiding performance in terms of invisibility of secret images...

  15. Construction and performance analysis of variable-weight optical orthogonal codes for asynchronous OCDMA systems

    Science.gov (United States)

    Li, Chuan-qi; Yang, Meng-jie; Zhang, Xiu-rong; Chen, Mei-juan; He, Dong-dong; Fan, Qing-bin

    2014-07-01

    A construction scheme of variable-weight optical orthogonal codes (VW-OOCs) for asynchronous optical code division multiple access (OCDMA) systems is proposed. The code family can be obtained by programming in Matlab with the given code weights and corresponding capacities. The formula of bit error rate (BER) is derived by taking account of the effects of shot noise, avalanche photodiode (APD) bulk and surface leakage currents, and thermal noise. The OCDMA system with the VW-OOCs is designed and improved. The study shows that the VW-OOCs have excellent BER performance. Whether or not they come from the same code family, codes with larger weight have lower BER than the other codes under the same conditions. Simulation results are consistent with the theoretical BER analysis, and ideal eye diagrams are obtained with an optical hard limiter.
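
    The defining constraints of an OOC family are easy to state in code: every codeword's cyclic autocorrelation at nonzero shifts, and every pair's cyclic cross-correlation at all shifts, must stay at or below the threshold. Below is a small checker plus a toy length-25 family with weights 3 and 4; the example codewords are illustrative and not taken from the paper.

    import numpy as np

    def max_cyclic_corr(x, y, skip_zero_shift=False):
        """Maximum cyclic correlation of 0/1 codewords over all shifts."""
        shifts = range(1, len(x)) if skip_zero_shift else range(len(x))
        return max(int(np.dot(x, np.roll(y, s))) for s in shifts)

    def is_vw_ooc(codewords, lam_a=1, lam_c=1):
        """Verify the correlation constraints of a variable-weight OOC family."""
        for i, x in enumerate(codewords):
            if max_cyclic_corr(x, x, skip_zero_shift=True) > lam_a:
                return False                     # autocorrelation violated
            for y in codewords[i + 1:]:
                if max_cyclic_corr(x, y) > lam_c:
                    return False                 # cross-correlation violated
        return True

    def codeword(n, support):
        c = np.zeros(n, dtype=int)
        c[list(support)] = 1
        return c

    family = [codeword(25, {0, 1, 4}), codeword(25, {0, 2, 7, 13})]
    print(is_vw_ooc(family))   # True: all pairwise differences are distinct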

  16. VACOSS - variable coding seal system for nuclear material control

    International Nuclear Information System (INIS)

    Kennepohl, K.; Stein, G.

    1977-12-01

    VACOSS - Variable Coding Seal System - is intended to seal: rooms and containers with nuclear material, nuclear instrumentation and equipment of the operator, and instrumentation and equipment of the supervisory authority. It is easy to handle, reusable and transportable, and consists of three components: 1. Seal. A fibre-optic light guide with an infrared light emitter and receiver serves as the sealing lead. The statistical treatment of the coded data held in the seal, accessed via the adapter box, guarantees an extremely high degree of access reliability. The data of two unauthorized seal openings, together with the time and duration of each opening, can be stored. 2. The adapter box can be used for input, or input and output, of data indicating the seal integrity. 3. The simulation programme is located in the computing center of the supervisory authority and makes it possible to determine the date and time of an opening by decoding the seal memory data. (orig./WB)

  17. FLAME: A finite element computer code for contaminant transport in variably-saturated media

    International Nuclear Information System (INIS)

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably-saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous media with discrete fractures. This report presents the following: description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A

  18. FLAME: A finite element computer code for contaminant transport in variably-saturated media

    Energy Technology Data Exchange (ETDEWEB)

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably-saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous media with discrete fractures. This report presents the following: description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A.

  19. Optimal Codes for the Burst Erasure Channel

    Science.gov (United States)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
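
    The near-optimality claim is easy to demonstrate with the simplest MDS code, a single parity check (SPC): block-interleaving I codewords and transmitting column-wise spreads a burst of up to I erased symbols across I codewords, one erasure each, which an SPC can fill. A runnable sketch; the SPC(8,7) geometry and interleave depth are illustrative choices, not parameters from the article.

    import numpy as np

    def spc_encode(bits):
        """Single parity check: append the XOR of the k data bits."""
        return np.append(bits, bits.sum() % 2)

    def interleave(codewords):
        """Transmit an I x n block column-wise; a burst of <= I erased symbols
        then touches each codeword at most once."""
        return np.array(codewords).T.flatten()

    def deinterleave(stream, depth, n):
        return stream.reshape(n, depth).T

    def spc_recover(word):
        """Fill a single erasure (marked -1) so that overall parity is even."""
        word = word.copy()
        missing = np.where(word == -1)[0]
        if len(missing) == 1:
            word[missing[0]] = word[word != -1].sum() % 2
        return word[:-1]                       # drop parity, keep data bits

    rng = np.random.default_rng(0)
    depth, k = 4, 7                            # interleave 4 SPC(8,7) codewords
    data = rng.integers(0, 2, (depth, k))
    stream = interleave([spc_encode(row) for row in data])
    stream[5:9] = -1                           # burst erasure of length 4
    decoded = np.array([spc_recover(w) for w in deinterleave(stream, depth, k + 1)])
    print((decoded == data).all())             # True: the burst is fully recovered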

  20. Predictive coding of dynamical variables in balanced spiking networks.

    Science.gov (United States)

    Boerlin, Martin; Machens, Christian K; Denève, Sophie

    2013-01-01

    Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitudes more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
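
    The firing rule described above has a compact scalar form: with readout weights G_i, neuron i spikes only when its "voltage" G_i(x - xhat) exceeds G_i^2/2, which is exactly the condition that adding G_i to the readout xhat reduces the representation error |x - xhat|. A toy one-dimensional sketch of that rule follows; all constants are illustrative, and the paper's full derivation, which covers arbitrary linear dynamical systems and distributes spikes across neurons via noise, is not reproduced here.

    import numpy as np

    dt, lam, T, N = 1e-3, 10.0, 2.0, 20
    steps = int(T / dt)
    gamma = np.where(np.arange(N) < N // 2, 0.1, -0.1)   # readout weights (+/-)
    x = np.sin(2 * np.pi * np.linspace(0, T, steps))     # signal to represent
    xhat, readout = 0.0, np.empty(steps)

    for t in range(steps):
        xhat *= 1.0 - lam * dt                 # leaky readout decay
        v = gamma * (x[t] - xhat)              # voltage = projected error
        i = int(np.argmax(v - gamma ** 2 / 2)) # most super-threshold neuron
        if v[i] > gamma[i] ** 2 / 2:           # fire only if the spike
            xhat += gamma[i]                   # reduces |x - xhat|
        readout[t] = xhat

    print("mean tracking error:", np.abs(x - readout).mean())   # stays small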

  1. Essential idempotents and simplex codes

    Directory of Open Access Journals (Sweden)

    Gladys Chalom

    2017-01-01

    Full Text Available We define essential idempotents in group algebras and use them to prove that every minimal abelian non-cyclic code is a repetition code. Also we use them to prove that every minimal abelian code is equivalent to a minimal cyclic code of the same length. Finally, we show that a binary cyclic code is simplex if and only if it is of length of the form $n=2^k-1$ and is generated by an essential idempotent.
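
    The simplex code referenced in the final claim is the length-(2^k - 1) binary code whose generator matrix has all nonzero vectors of F_2^k as columns; its hallmark is that every nonzero codeword has weight 2^(k-1). A short sketch verifying that property for k = 3 (the idempotent-based characterization itself is not reproduced here):

    from itertools import product

    def simplex_codewords(k):
        """Binary simplex code of length 2^k - 1: generator columns are all
        nonzero vectors of F_2^k; codewords are all linear combinations."""
        cols = [v for v in product([0, 1], repeat=k) if any(v)]
        words = []
        for msg in product([0, 1], repeat=k):
            words.append(tuple(sum(m * c for m, c in zip(msg, col)) % 2
                               for col in cols))
        return words

    words = simplex_codewords(3)              # length 7, 8 codewords
    weights = {sum(w) for w in words if any(w)}
    print(weights)                            # {4}: every nonzero word has weight 2^(k-1)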

  2. Improved theory of time domain reflectometry with variable coaxial cable length for electrical conductivity measurements

    Science.gov (United States)

    Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...

  3. Do climate variables and human density affect Achatina fulica (Bowditch) (Gastropoda: Pulmonata) shell length, total weight and condition factor?

    Science.gov (United States)

    Albuquerque, F S; Peso-Aguiar, M C; Assunção-Albuquerque, M J T; Gálvez, L

    2009-08-01

    The length-weight relationship and condition factor have been broadly investigated in snails to obtain an index of the physical condition of populations and to evaluate habitat quality. Herein, our goal was to describe the best predictors that explain Achatina fulica biometrical parameters and well-being in a recently introduced population. From November 2001 to November 2002, monthly snail samples were collected in Lauro de Freitas City, Bahia, Brazil. Shell length and total weight were measured in the laboratory, and the potential curve and condition factor were calculated. Five environmental variables were considered: temperature range, mean temperature, humidity, precipitation and human density. Multiple regressions were used to generate models including multiple predictors, via a model selection approach, and then ranked with AIC criteria. Partial regressions were used to obtain the separate coefficients of determination of the climate and human density models. A total of 1,460 individuals were collected, with a shell length range of 4.8 to 102.5 mm (mean: 42.18 mm). The relationship between total length and total weight revealed that Achatina fulica presents negative allometric growth. Simple regression indicated that humidity has a significant influence on A. fulica total length and weight. Temperature range was the main variable that influenced the condition factor. Multiple regressions showed that climatic and human variables explain a small proportion of the variance in shell length and total weight, but may explain up to 55.7% of the condition factor variance. Consequently, we believe that the well-being and biometric parameters of A. fulica can be influenced by climatic and human density factors.

   4. Do climate variables and human density affect Achatina fulica (Bowditch) (Gastropoda: Pulmonata) shell length, total weight and condition factor?

    Directory of Open Access Journals (Sweden)

    FS. Albuquerque

    Full Text Available The length-weight relationship and condition factor have been broadly investigated in snails to obtain an index of the physical condition of populations and to evaluate habitat quality. Herein, our goal was to describe the best predictors that explain Achatina fulica biometrical parameters and well-being in a recently introduced population. From November 2001 to November 2002, monthly snail samples were collected in Lauro de Freitas City, Bahia, Brazil. Shell length and total weight were measured in the laboratory, and the potential curve and condition factor were calculated. Five environmental variables were considered: temperature range, mean temperature, humidity, precipitation and human density. Multiple regressions were used to generate models including multiple predictors, via a model selection approach, and then ranked with AIC criteria. Partial regressions were used to obtain the separate coefficients of determination of the climate and human density models. A total of 1,460 individuals were collected, with a shell length range of 4.8 to 102.5 mm (mean: 42.18 mm). The relationship between total length and total weight revealed that Achatina fulica presents negative allometric growth. Simple regression indicated that humidity has a significant influence on A. fulica total length and weight. Temperature range was the main variable that influenced the condition factor. Multiple regressions showed that climatic and human variables explain a small proportion of the variance in shell length and total weight, but may explain up to 55.7% of the condition factor variance. Consequently, we believe that the well-being and biometric parameters of A. fulica can be influenced by climatic and human density factors.

  5. String matching with variable length gaps

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Vildhøj, Hjalte Wedel

    2012-01-01

    primitive in computational biology applications. Let m and n be the lengths of P and T, respectively, and let k be the number of strings in P. We present a new algorithm achieving time O(nlogk+m+α) and space O(m+A), where A is the sum of the lower bounds of the lengths of the gaps in P and α is the total...... number of occurrences of the strings in P within T. Compared to the previous results this bound essentially achieves the best known time and space complexities simultaneously. Consequently, our algorithm obtains the best known bounds for almost all combinations of m, n, k, A, and α. Our algorithm...

  6. A 7MeV S-Band 2998MHz Variable Pulse Length Linear Accelerator System

    CERN Document Server

    Hernandez, Michael; Mishin, Andrey V; Saverskiy, Aleksandr J; Skowbo, Dave; Smith, Richard

    2005-01-01

    American Science and Engineering High Energy Systems Division (AS&E HESD) has designed and commissioned a variable pulse length 7 MeV electron accelerator system. The system is capable of delivering a 7 MeV electron beam with a pulse length of 10 ns FWHM and a peak current of 1 ampere. The system can also produce electron pulses with lengths of 20, 50, 100, 200, 400 ns and 3 µs FWHM with correspondingly lower peak currents. The accelerator system consists of a gridded electron gun, focusing coil, an electrostatic deflector system, Helmholtz coils, a standing wave side coupled S-band linac, a 2.6 MW peak power magnetron, an RF circulator, a fast toroid, vacuum system and a PLC/PC control system. The system has been operated at repetition rates up to 250 pps. The design, simulations and experimental results from the accelerator system are presented in this paper.

  7. Variable code gamma ray imaging system

    International Nuclear Information System (INIS)

    Macovski, A.; Rosenfeld, D.

    1979-01-01

    A gamma-ray source distribution in the body is imaged onto a detector using an array of apertures. The transmission of each aperture is modulated using a code such that the individual views of the source through each aperture can be decoded and separated. The codes are chosen to maximize the signal to noise ratio for each source distribution. These codes determine the photon collection efficiency of the aperture array. Planar arrays are used for volumetric reconstructions and circular arrays for cross-sectional reconstructions. 14 claims

  8. Protograph based LDPC codes with minimum distance linearly growing with block size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform those of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
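
    A protograph is turned into a full parity-check matrix by "lifting": each base-matrix edge is replaced by a Z x Z circulant permutation and each absent edge by a Z x Z zero block, so the lifted code inherits the protograph's degree structure while supporting quasi-cyclic decoder hardware. A minimal sketch of that expansion; the 2 x 4 base matrix and the random shifts are illustrative, not one of the paper's designs.

    import numpy as np

    def lift_protograph(base, Z, shifts=None, seed=0):
        """Lift a 0/1 protograph base matrix by factor Z: each 1 becomes a
        circularly shifted Z x Z identity (QC-LDPC style), each 0 a zero block."""
        rng = np.random.default_rng(seed)
        m, n = base.shape
        H = np.zeros((m * Z, n * Z), dtype=int)
        for i in range(m):
            for j in range(n):
                if base[i, j]:
                    s = rng.integers(Z) if shifts is None else shifts[i][j]
                    H[i * Z:(i + 1) * Z, j * Z:(j + 1) * Z] = \
                        np.roll(np.eye(Z, dtype=int), s, axis=1)
        return H

    B = np.array([[1, 1, 1, 0],
                  [0, 1, 1, 1]])
    H = lift_protograph(B, Z=4)
    # each lifted column keeps its protograph variable-node degree
    print(H.shape, "column weights:", H.sum(axis=0))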

  9. Analysis of the land surface heterogeneity and its impact on atmospheric variables and the aerodynamic and thermodynamic roughness lengths

    NARCIS (Netherlands)

    Ma, Y.M.; Menenti, M.; Feddes, R.A.; Wang, J.M.

    2008-01-01

    The land surface heterogeneity has a very significant impact on atmospheric variables (air temperature Ta, wind speed u, and humidity q), the aerodynamic roughness length z0m, the thermodynamic roughness length z0h, and the excess resistance to heat transfer kB^(-1). First, in this study the land

  10. Some Families of Asymmetric Quantum MDS Codes Constructed from Constacyclic Codes

    Science.gov (United States)

    Huang, Yuanyuan; Chen, Jianzhang; Feng, Chunhui; Chen, Riqing

    2018-02-01

    Quantum maximal-distance-separable (MDS) codes that satisfy the quantum Singleton bound with different lengths have been constructed by some researchers. In this paper, seven families of asymmetric quantum MDS codes are constructed by using constacyclic codes. We weaken the case of Hermitian-dual containing codes that can be applied to construct asymmetric quantum MDS codes with parameters [[n,k,dz/dx

  11. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal C3 self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16, 2017, and J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58, 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph G(X) associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size, in particular for maximal circular codes (20 trinucleotides). For circular codes of small size, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
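
    The graph construction is concrete enough to code directly: for each trinucleotide b1b2b3 in X, the graph G(X) gets the edges b1 -> b2b3 and b1b2 -> b3, and by the cited graph-theoretic result the code is circular exactly when this graph is acyclic. The sketch below checks both acyclicity and self-complementarity for a small toy code; the four-trinucleotide code is fabricated for illustration and is not the 20-trinucleotide code X.

    def revcomp(t):
        comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
        return "".join(comp[b] for b in reversed(t))

    def graph(code):
        """G(X): edges b1 -> b2b3 and b1b2 -> b3 for each trinucleotide."""
        edges = {}
        for t in code:
            edges.setdefault(t[0], set()).add(t[1:])
            edges.setdefault(t[:2], set()).add(t[2])
        return edges

    def has_cycle(edges):
        WHITE, GRAY, BLACK = 0, 1, 2            # DFS three-color cycle check
        color = {}
        def dfs(u):
            color[u] = GRAY
            for v in edges.get(u, ()):
                c = color.get(v, WHITE)
                if c == GRAY or (c == WHITE and dfs(v)):
                    return True
            color[u] = BLACK
            return False
        return any(color.get(u, WHITE) == WHITE and dfs(u) for u in list(edges))

    X = {"AAC", "GTT", "GAC", "GTC"}            # toy code
    print("self-complementary:", X == {revcomp(t) for t in X})   # True
    print("circular (acyclic graph):", not has_cycle(graph(X)))  # True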

  12. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L × L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  13. Influence of Coding Variability in APP-Aβ Metabolism Genes in Sporadic Alzheimer's Disease.

    Directory of Open Access Journals (Sweden)

    Celeste Sassi

    Full Text Available The cerebral deposition of Aβ42, a neurotoxic proteolytic derivate of amyloid precursor protein (APP), is a central event in Alzheimer's disease (AD) (amyloid hypothesis). Given the key role of APP-Aβ metabolism in AD pathogenesis, we selected 29 genes involved in APP processing, Aβ degradation and clearance. We then used exome and genome sequencing to investigate the single independent (single-variant association test) and cumulative (gene-based association test) effect of coding variants in these genes as potential susceptibility factors for AD, in a cohort composed of 332 sporadic and mainly late-onset AD cases and 676 elderly controls from North America and the UK. Our study shows that common coding variability in these genes does not play a major role in disease development. In the single-variant association analysis, the main hits, none of which were statistically significant after multiple testing correction (1.9e-4), were […] coding variants (0.009% […]). Our findings suggest that 1) common coding variability in APP-Aβ genes is not a critical factor for AD development and 2) Aβ degradation and clearance, rather than Aβ production, may play a key role in the etiology of sporadic AD.

  14. Best Hiding Capacity Scheme for Variable Length Messages Using Particle Swarm Optimization

    Science.gov (United States)

    Bajaj, Ruchika; Bedi, Punam; Pal, S. K.

    Steganography is an art of hiding information in such a way that prevents the detection of hidden messages. Besides security of the data, the quantity of data that can be hidden in a single cover medium is also very important. We present a secure data hiding scheme with high embedding capacity for messages of variable length based on Particle Swarm Optimization. This technique gives the best pixel positions in the cover image, which can be used to hide the secret data. In the proposed scheme, k bits of the secret message are substituted into the k least significant bits of an image pixel, where k varies from 1 to 4 depending on the message length. The proposed scheme is tested and the results compared with simple LSB substitution and uniform 4-bit LSB hiding (with PSO) for the test images Nature, Baboon, Lena and Kitty. The experimental study confirms that the proposed method achieves high data hiding capacity, maintains imperceptibility, and minimizes the distortion between the cover image and the obtained stego image.
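
    The embedding primitive here is plain k-bit LSB substitution; PSO only decides which pixels receive the bits. The sketch below shows the substitution itself, walking the pixels in order instead of using PSO-selected positions; the pixel values and the message are fabricated for the example.

    import numpy as np

    def embed_kbit_lsb(pixels, bits, k):
        """Replace the k least significant bits of successive pixels with the
        message bits (the paper picks pixel positions with PSO instead)."""
        px = pixels.copy()
        for j, i in enumerate(range(0, len(bits), k)):
            chunk = bits[i:i + k].ljust(k, "0")          # pad the last chunk
            px[j] = (int(px[j]) >> k << k) | int(chunk, 2)
        return px

    def extract_kbit_lsb(pixels, n_bits, k):
        n_px = -(-n_bits // k)                           # ceil(n_bits / k)
        stream = "".join(format(int(p) & ((1 << k) - 1), f"0{k}b")
                         for p in pixels[:n_px])
        return stream[:n_bits]

    cover = np.array([201, 114, 57, 98, 230, 17], dtype=np.uint8)
    secret = "10110100110"
    stego = embed_kbit_lsb(cover, secret, k=3)           # k varies 1..4 in the paper
    print(extract_kbit_lsb(stego, len(secret), k=3) == secret)   # True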

  15. Reliability and short-term intra-individual variability of telomere length measurement using monochrome multiplexing quantitative PCR.

    Directory of Open Access Journals (Sweden)

    Sangmi Kim

    Full Text Available Studies examining the association between telomere length and cancer risk have often relied on measurement of telomere length from a single blood draw using a real-time PCR technique. We examined the reliability of telomere length measurement using sequential samples collected over a 9-month period. Relative telomere length in peripheral blood was estimated using a single-tube monochrome multiplex quantitative PCR assay in blood DNA samples from 27 non-pregnant adult women (aged 35 to 74 years) collected in 7 visits over a 9-month period. A linear mixed model was used to estimate the components of variance for telomere length measurements attributed to variation among women and variation between time points within women. The mean telomere length measurement at any single visit was not significantly different from the average of the 7 visits. Plates had a significant systematic influence on telomere length measurements, although measurements between different plates were highly correlated. After controlling for plate effects, 64% of the remaining variance was estimated to be accounted for by variance due to subject. Variance explained by time of visit within a subject was minor, contributing 5% of the remaining variance. Our data demonstrate good short-term reliability of telomere length measurement using blood from a single draw. However, the existence of technical variability, particularly plate effects, reinforces the need for technical replicates and balancing of case and control samples across plates.
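
    The variance decomposition reported above comes from a random-intercept linear mixed model: total variance splits into a between-subject component and a residual (visit-to-visit) component, and the subject share is the intraclass correlation. A sketch using statsmodels' MixedLM with simulated data standing in for the real measurements; the simulated variances are arbitrary, so the printed share will not match the paper's 64%.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulate 27 subjects x 7 visits, between-subject variation dominating
    rng = np.random.default_rng(3)
    subjects = np.repeat(np.arange(27), 7)
    tl = 1.0 + rng.normal(0, 0.15, 27)[subjects] + rng.normal(0, 0.05, subjects.size)
    df = pd.DataFrame({"tl": tl, "subject": subjects})

    model = smf.mixedlm("tl ~ 1", df, groups=df["subject"]).fit()
    var_between = model.cov_re.iloc[0, 0]       # subject variance component
    var_within = model.scale                    # residual (visit-to-visit) variance
    print("share of variance due to subject:",
          var_between / (var_between + var_within))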

  16. Vocal tract length and formant frequency dispersion correlate with body size in rhesus macaques.

    Science.gov (United States)

    Fitch, W T

    1997-08-01

    Body weight, length, and vocal tract length were measured for 23 rhesus macaques (Macaca mulatta) of various sizes using radiographs and computer graphic techniques. Linear predictive coding analysis of tape-recorded threat vocalizations was used to determine vocal tract resonance frequencies ("formants") for the same animals. A new acoustic variable is proposed, "formant dispersion," which should theoretically depend upon vocal tract length. Formant dispersion is the averaged difference between successive formant frequencies, and was found to be closely tied to both vocal tract length and body size. Despite the common claim that voice fundamental frequency (F0) provides an acoustic indication of body size, repeated investigations have failed to support such a relationship in many vertebrate species, including humans. Formant dispersion, unlike voice pitch, is proposed to be a reliable predictor of body size in macaques, and probably many other species.
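
    Formant dispersion reduces to simple arithmetic: the mean spacing of successive formants, which telescopes to (F_n - F_1)/(n - 1), and for an idealized uniform tube of length L the spacing is about c/2L, so longer vocal tracts give smaller dispersions. A worked sketch with hypothetical formant values, not measurements from the study:

    def formant_dispersion(formants_hz):
        """Averaged spacing between successive formants: (F_n - F_1)/(n - 1)."""
        diffs = [b - a for a, b in zip(formants_hz, formants_hz[1:])]
        return sum(diffs) / len(diffs)

    # Hypothetical formants (Hz) for a shorter and a longer vocal tract
    print(formant_dispersion([1100, 3400, 5600, 7800]))  # ~2233 Hz: shorter tract
    print(formant_dispersion([900, 2700, 4500, 6300]))   # 1800 Hz: longer tract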

  17. In vitro cytotoxicity of Manville Code 100 glass fibers: Effect of fiber length on human alveolar macrophages

    Directory of Open Access Journals (Sweden)

    Jones William

    2006-03-01

    Full Text Available Abstract Background Synthetic vitreous fibers (SVFs) are inorganic noncrystalline materials widely used in residential and industrial settings for insulation, filtration, and reinforcement purposes. SVFs conventionally include three major categories: fibrous glass, rock/slag/stone (mineral wool), and ceramic fibers. Previous in vitro studies from our laboratory demonstrated length-dependent cytotoxic effects of glass fibers on rat alveolar macrophages, which were possibly associated with incomplete phagocytosis of fibers ≥ 17 μm in length. The purpose of this study was to examine the influence of fiber length on primary human alveolar macrophages, which are larger in diameter than rat macrophages, using length-classified Manville Code 100 glass fibers (8, 10, 16, and 20 μm). It was hypothesized that complete engulfment of fibers by human alveolar macrophages could decrease fiber cytotoxicity; i.e. shorter fibers that can be completely engulfed might not be as cytotoxic as longer fibers. Human alveolar macrophages, obtained by segmental bronchoalveolar lavage of healthy, non-smoking volunteers, were treated with three different concentrations (determined by fiber number) of the sized fibers in vitro. Cytotoxicity was assessed by monitoring cytosolic lactate dehydrogenase release and loss of function as indicated by a decrease in zymosan-stimulated chemiluminescence. Results Microscopic analysis indicated that human alveolar macrophages completely engulfed glass fibers of the 20 μm length. All fiber length fractions tested exhibited equal cytotoxicity on a per fiber basis, i.e. increasing lactate dehydrogenase and decreasing chemiluminescence in the same concentration-dependent fashion. Conclusion The data suggest that due to the larger diameter of human alveolar macrophages, compared to rat alveolar macrophages, complete phagocytosis of longer fibers can occur with the human cells. Neither incomplete phagocytosis nor length-dependent toxicity was observed.

  18. Pseudo-polyprotein translated from the full-length ORF1 of capillovirus is important for pathogenicity, but a truncated ORF1 protein without variable and CP regions is sufficient for replication.

    Science.gov (United States)

    Hirata, Hisae; Yamaji, Yasuyuki; Komatsu, Ken; Kagiwada, Satoshi; Oshima, Kenro; Okano, Yukari; Takahashi, Shuichiro; Ugaki, Masashi; Namba, Shigetou

    2010-09-01

    The first open-reading frame (ORF) of the genus Capillovirus encodes an apparently chimeric polyprotein containing conserved regions for replicase (Rep) and coat protein (CP), while other viruses in the family Flexiviridae have separate ORFs encoding these proteins. To investigate the role of the full-length ORF1 polyprotein of capillovirus, we generated truncation mutants of ORF1 of apple stem grooving virus by inserting a termination codon into the variable region located between the putative Rep- and CP-coding regions. These mutants were capable of systemic infection, although their pathogenicity was attenuated. In vitro translation of ORF1 produced both the full-length polyprotein and the smaller Rep protein. The results of in vivo reporter assays suggested that the mechanism of this early termination is a ribosomal -1 frame-shift occurring downstream from the conserved Rep domains. The mechanism of capillovirus gene expression and the very close evolutionary relationship between the genera Capillovirus and Trichovirus are discussed. Copyright (c) 2010. Published by Elsevier B.V.

  19. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than LDPCA (Low-Density Parity-Check Accumulate) codes.
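
    A toy sketch of the feedback-driven rate adaptation idea: the encoder reveals parity checks of X one at a time, and the decoder keeps requesting more until only one word close to the side information Y satisfies them all. The random parity checks and brute-force search are simplifying assumptions, not the paper's BCH construction:

        import itertools
        import numpy as np

        rng = np.random.default_rng(1)
        n = 12
        x = rng.integers(0, 2, n)              # source word at the encoder
        y = x.copy(); y[3] ^= 1                # side information: one bit differs from x
        H = rng.integers(0, 2, (n, n))         # pool of parity checks, revealed row by row

        def candidates(y, H_used, s_used, max_flips=2):
            """All words within `max_flips` bits of y matching the received syndrome."""
            hits = []
            for w in range(max_flips + 1):
                for pos in itertools.combinations(range(len(y)), w):
                    c = y.copy(); c[list(pos)] ^= 1
                    if np.array_equal(H_used @ c % 2, s_used):
                        hits.append(c)
            return hits

        for m in range(1, n + 1):              # feedback loop: one more parity bit per round
            cands = candidates(y, H[:m], H[:m] @ x % 2)
            if len(cands) == 1:                # unambiguous -> stop asking for parity
                print(f"decoded after {m} parity bits (rate {m/n:.2f}),",
                      "correct:", bool(np.array_equal(cands[0], x)))
                break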

  20. Coding completeness and quality of relative survival-related variables in the National Program of Cancer Registries Cancer Surveillance System, 1995-2008.

    Science.gov (United States)

    Wilson, Reda J; O'Neil, M E; Ntekop, E; Zhang, Kevin; Ren, Y

    2014-01-01

    Calculating accurate estimates of cancer survival is important for various analyses of cancer patient care and prognosis. Current US survival rates are estimated based on data from the National Cancer Institute's (NCI's) Surveillance, Epidemiology, and End Results (SEER) program, covering approximately 28 percent of the US population. The National Program of Cancer Registries (NPCR) covers about 96 percent of the US population. Using a population-based database with greater US population coverage to calculate survival rates at the national, state, and regional levels can further enhance the effective monitoring of cancer patient care and prognosis in the United States. The first step is to establish the coding completeness and coding quality of the NPCR data needed for calculating survival rates and conducting related validation analyses. Using data from the NPCR-Cancer Surveillance System (CSS) from 1995 through 2008, we assessed coding completeness and quality on 26 data elements that are needed to calculate cancer relative survival estimates and conduct related analyses. Data elements evaluated consisted of demographic, follow-up, prognostic, and cancer identification variables. Analyses were performed showing trends of these variables by diagnostic year, state of residence at diagnosis, and cancer site. Mean overall percent coding completeness by each NPCR central cancer registry, averaged across all data elements and diagnosis years, ranged from 92.3 percent to 100 percent. Results showing the mean percent coding completeness for the relative survival-related variables in NPCR data are presented. All data elements but one have a mean coding completeness greater than 90 percent, as does the mean completeness by data item group type. Statistically significant differences in coding completeness were found in the ICD revision number, cause of death, vital status, and date of last contact variables when comparing diagnosis years. The majority of data items had a coding

  1. A Study of Nonlinear Variable Viscosity in Finite-Length Tube with Peristalsis

    Directory of Open Access Journals (Sweden)

    Y. Abd Elmaboud

    2014-01-01

    Full Text Available Peristaltic motion of an incompressible Newtonian fluid with variable viscosity, induced by a periodic sinusoidal traveling wave propagating along the walls of a finite-length tube, has been investigated. A perturbation method of solution is sought. The viscosity parameter α (α ≪ 1) is chosen as the perturbation parameter, and the governing equations are developed up to first order in α. The analytical solution has been derived for the radial velocity at the tube wall, the axial pressure gradient across the length of the tube, and the wall shear stress under the assumptions of low Reynolds number and long wavelength. The impacts of physical parameters such as the viscosity and the parameter determining the shape of the constriction on the pressure distribution and on the wall shear stress, for integral and non-integral numbers of waves, are illustrated. The main conclusion is that the peaks of pressure fluctuate with time and attain different values for non-integral numbers of peristaltic waves. The problem considered is highly applicable in studies of biological and industrial flows.

  2. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  3. Performance Analysis of CRC Codes for Systematic and Nonsystematic Polar Codes with List Decoding

    Directory of Open Access Journals (Sweden)

    Takumi Murata

    2018-01-01

    Full Text Available Successive cancellation list (SCL) decoding of polar codes is an effective approach that can significantly outperform the original successive cancellation (SC) decoding, provided that proper cyclic redundancy-check (CRC) codes are employed at the stage of candidate selection. Previous studies on CRC-assisted polar codes mostly focus on improvement of the decoding algorithms as well as their implementation, and little attention has been paid to the CRC code structure itself. For CRC-concatenated polar codes with a CRC code as their outer code, the use of a longer CRC code leads to a reduction of the information rate, whereas the use of a shorter CRC code may reduce the error detection probability, thus degrading the frame error rate (FER) performance. Therefore, CRC codes of proper length should be employed in order to optimize the FER performance for a given signal-to-noise ratio (SNR) per information bit. In this paper, we investigate the effect of CRC codes on the FER performance of polar codes with list decoding, in terms of the CRC code length as well as its generator polynomials. Both the original nonsystematic and systematic polar codes are considered, and we also demonstrate that different behaviors of CRC codes should be observed depending on whether the inner polar code is systematic or not.
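
    A minimal sketch of the CRC-aided selection step that the length trade-off above concerns: among the list of candidates (ordered by likelihood), the decoder keeps the first one whose CRC checks. The bitwise CRC routine and the toy two-candidate list are illustrative assumptions, not the paper's simulation setup:

        def crc_remainder(bits, poly):
            """Bitwise long division; `poly` includes the leading 1 (0b1011 = x^3 + x + 1)."""
            deg = poly.bit_length() - 1
            reg = 0
            for b in bits + [0] * deg:          # append deg zero bits
                reg = (reg << 1) | b
                if reg >> deg:                  # leading bit set -> subtract the polynomial
                    reg ^= poly
            return reg                          # deg-bit remainder

        def crc_encode(info, poly):
            deg = poly.bit_length() - 1
            return info + [int(b) for b in format(crc_remainder(info, poly), f"0{deg}b")]

        poly = 0b1011
        sent = crc_encode([1, 0, 1, 1, 0], poly)
        # List-decoder output, most likely first: a wrong path, then the right one.
        candidate_list = [sent[:3] + [1 - sent[3]] + sent[4:], sent]
        chosen = next(c for c in candidate_list if crc_remainder(c, poly) == 0)
        assert chosen == sent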

  4. An audit of the nature and impact of clinical coding subjectivity variability and error in otolaryngology.

    Science.gov (United States)

    Nouraei, S A R; Hudovsky, A; Virk, J S; Chatrath, P; Sandhu, G S

    2013-12-01

    groupings changed from 16% during the first audit cycle to 9% in the current audit cycle, a statistically significant improvement. Clinical coding is complex and susceptible to subjectivity, variability and error. Coding variability can be improved, but not eliminated, through regular education supported by an audit programme. © 2013 John Wiley & Sons Ltd.

  5. Estimating the hemodynamic influence of variable main body-to-iliac limb length ratios in aortic endografts.

    Science.gov (United States)

    Georgakarakos, Efstratios; Xenakis, Antonios; Georgiadis, George S

    2018-02-01

    We conducted a computational study to assess the hemodynamic impact of varying main body-to-iliac limb length (L1/L2) ratios on certain hemodynamic parameters acting on an endograft (EG), in either the normal bifurcated (Bif) or the cross-limb (Cx) fashion. A customary bifurcated 3D model was computationally created and meshed using the commercially available ANSYS ICEM (Ansys Inc., Canonsburg, PA, USA) software. The total length of the EG was kept constant, while the L1/L2 ratio ranged from 0.3 to 1.5 in the Bif and Cx reconstructed EG models. The compliance of the graft was modeled using a fluid-structure interaction method. Important hemodynamic parameters such as the pressure drop along the EG, wall shear stress (WSS) and helicity were calculated. The greatest pressure decrease across the EG was calculated in the peak systolic phase. With increasing L1/L2, the pressure drop increased for the Cx configuration while decreasing for the Bif. The greatest helicity (4.1 m/s²) was seen in peak systole of the Cx model with a ratio of 1.5, whereas the greatest value for the Bif model (2 m/s²) was met in peak systole with the shortest L1/L2 ratio (0.3). Similarly, the maximum WSS value was highest (2.74 Pa) in peak systole for the 1.5 L1/L2 ratio of the Cx configuration, while the maximum WSS value equaled 2 Pa for all length ratios of the Bif modification (with the WSS found for L1/L2 = 0.3 being marginally higher). There was greater discrepancy in the WSS values across the L1/L2 ratios of the Cx configuration than of the Bif. Different L1/L2 ratios are shown to have an impact on the pressure distribution along the entire EG, while the length ratio predisposing to the highest helicity or WSS values is also determined by the iliac limb pattern of the EG. Since current custom-made EG solutions can reproduce variability in main body-to-iliac limb length ratios, further computational as well as clinical research is warranted to delineate and predict the hemodynamic and clinical effect of variable main body-to-iliac limb length ratios.

  6. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and -moderated reactors, at either high or low pressure, with boiling permitted, where the fuel elements are assumed to be flat plates. The flowrate in parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure drops or flowrates, constant or variable in time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors)

  7. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. The improved maximal designed distances of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes are determined to be much larger than the results given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  8. New MDS or near MDS self-dual codes over finite fields

    OpenAIRE

    Tong, Hongxi; Wang, Xiaoqing

    2016-01-01

    The study of MDS self-dual codes has attracted much attention in recent years. Many papers deal with determining the existence of $q$-ary MDS self-dual codes for various lengths. For some lengths, even lengths $< q$, $q$-ary MDS self-dual codes do not exist. We generalize MDS Euclidean self-dual codes to near MDS Euclidean self-dual codes and near MDS isodual codes, and we obtain many new near MDS isodual codes from extended negacyclic duadic codes as well as many new M...

  9. 1×4 Optical packet switching of variable length 640 Gbit/s data packets using in-band optical notch-filter labeling

    DEFF Research Database (Denmark)

    Medhin, Ashenafi Kiros; Kamchevska, Valerija; Galili, Michael

    2014-01-01

    We experimentally perform 1×4 optical packet switching of variable-length 640 Gbit/s OTDM data packets using in-band notch-filter labeling with only 2.7-dB penalty. Up to 8 notches are employed to demonstrate scalability of the labeling scheme to 1×256 switching operation.

  10. Chord length distribution for a compound capsule

    International Nuclear Information System (INIS)

    Pitřík, Pavel

    2017-01-01

    Chord length distribution is an important factor in the calculation of ionisation chamber responses. This article describes Monte Carlo calculations of the chord length distribution for a non-convex compound capsule. A Monte Carlo code was set up for the generation of random chords and the calculation of their lengths, based on the input number of generations and the cavity dimensions. The code was written in JavaScript and can be executed in the majority of HTML viewers. The plot of occurrence of chords of different lengths has 3 peaks. It was found that the compound capsule cavity cannot simply be replaced with a spherical cavity of a triangular design. Furthermore, the compound capsule cavity is directionally dependent, which must be taken into account in calculations involving non-isotropic fields of primary particles in the beam, unless equilibrium of the secondary charged particles is attained. (orig.)
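
    A minimal sketch of the Monte Carlo approach described above, simplified from the paper's compound capsule to a spherical cavity (the paper's own code is JavaScript; Python and the isotropic-chord sampler are assumptions here):

        import numpy as np

        rng = np.random.default_rng(0)

        def sphere_chords(radius, n):
            """Isotropic (mu-randomness) chords of a sphere: a chord entering at angle
            theta to the inward normal has length 2*R*cos(theta), and cosine-weighted
            incidence gives cos(theta) = sqrt(U) with U uniform on [0, 1]."""
            return 2.0 * radius * np.sqrt(rng.random(n))

        chords = sphere_chords(1.0, 1_000_000)
        print(chords.mean())   # -> ~1.333, matching the mean-chord theorem 4V/S = 4R/3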

  12. How mechanical context and feedback jointly determine the use of mechanical variables in length perception by dynamic touch

    NARCIS (Netherlands)

    Menger, Rudmer; Withagen, Rob

    2009-01-01

    Earlier studies have revealed that both mechanical context and feedback determine what mechanical invariant is used to perceive length by dynamic touch. In the present article, the authors examined how these two factors jointly constrain the informational variable that is relied upon. Participants

  13. Coded communications with nonideal interleaving

    Science.gov (United States)

    Laufer, Shaul

    1991-02-01

    Burst error channels - a type of block interference channel - feature increasing capacity but decreasing cutoff rate as the memory length increases. Despite the large capacity, there is degradation in the performance of practical coding schemes when the memory length is excessive. A short-coding error parameter (SCEP) is introduced, which expresses a bound on the average decoding-error probability for codes shorter than the block interference length. The performance of a coded slow frequency-hopping communication channel is analyzed for worst-case partial-band jamming and nonideal interleaving, by deriving expressions for the capacity and cutoff rate. The capacity and cutoff rate, respectively, are shown to approach and depart from those of a memoryless channel corresponding to the transmission of a single code letter per hop. For multiaccess communications over a slot-synchronized collision channel without feedback, the channel is considered as a block interference channel with memory length equal to the number of letters transmitted in each slot. The effects of asymmetrical background noise and a reduced collision error rate are studied as aspects of real communications. The performance of specific convolutional and Reed-Solomon codes is examined for slow frequency-hopping systems with nonideal interleaving. An upper bound is presented for the performance of a Viterbi decoder for a convolutional code with nonideal interleaving, and a soft-decision diversity combining technique is introduced.
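
    Interleaving spreads a burst of channel errors over many codewords so that each codeword sees only a few; a minimal rectangular block interleaver sketch (the dimensions and burst position are arbitrary illustrative choices):

        def interleave(symbols, rows, cols):
            """Write row by row, read column by column (classic block interleaver)."""
            assert len(symbols) == rows * cols
            return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

        def deinterleave(symbols, rows, cols):
            # Reading column by column is undone by interleaving with swapped dimensions.
            return interleave(symbols, cols, rows)

        data = list(range(12))                  # three 4-symbol codewords, back to back
        tx = interleave(data, rows=3, cols=4)
        rx = tx[:]
        for i in (4, 5, 6):                     # a burst wipes out 3 consecutive symbols
            rx[i] = None
        print(deinterleave(rx, rows=3, cols=4)) # the burst lands in 3 different codewords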

  14. VLSI Design of a Variable-Length FFT/IFFT Processor for OFDM-Based Communication Systems

    Directory of Open Access Journals (Sweden)

    Jen-Chih Kuo

    2003-12-01

    Full Text Available The technique of orthogonal frequency division multiplexing (OFDM) is famous for its robustness against frequency-selective fading channels. This technique has been widely used in many wired and wireless communication systems. In general, the fast Fourier transform (FFT) and inverse FFT (IFFT) operations are used as the modulation/demodulation kernel in OFDM systems, and the sizes of the FFT/IFFT operations vary in different applications of OFDM systems. In this paper, we design and implement a variable-length prototype FFT/IFFT processor to cover different specifications of OFDM applications. The cached-memory FFT architecture is our suggested VLSI system architecture for designing the prototype FFT/IFFT processor with low power consumption in mind. We also implement the twiddle-factor butterfly processing element (PE) based on the coordinate rotation digital computer (CORDIC) algorithm, which avoids the use of a conventional multiplication-and-accumulation unit and instead evaluates the trigonometric functions using only add-and-shift operations. Finally, we implement a variable-length prototype FFT/IFFT processor with TSMC 0.35 μm 1P4M CMOS technology. The simulation results show that the chip can perform (64-2048)-point FFT/IFFT operations up to an 80 MHz operating frequency, which meets the speed requirement of most OFDM standards such as WLAN, ADSL, VDSL (256∼2K), DAB, and 2K-mode DVB.
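
    A minimal software sketch of the add-and-shift CORDIC rotation used by the butterfly PE described above (floating point here for readability; a hardware PE would use fixed-point registers, where the halvings become shifts):

        import math

        def cordic_sin_cos(angle, iterations=24):
            """Rotate (1, 0) toward `angle` (radians, |angle| <= pi/2) by a fixed
            sequence of micro-rotations, using only additions and halvings."""
            atans = [math.atan(2.0 ** -i) for i in range(iterations)]
            gain = 1.0
            for i in range(iterations):
                gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # CORDIC scale factor K
            x, y, z = 1.0, 0.0, angle
            for i in range(iterations):
                d = 1.0 if z >= 0 else -1.0      # steer the residual angle toward zero
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * atans[i]
            return y * gain, x * gain            # (sin, cos)

        s, c = cordic_sin_cos(math.pi / 5)
        print(s - math.sin(math.pi / 5))         # agrees to roughly 1e-7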

  15. Some new ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2017-07-01

    Full Text Available Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].

  16. Using Variable-Length Aligned Fragment Pairs and an Improved Transition Function for Flexible Protein Structure Alignment.

    Science.gov (United States)

    Cao, Hu; Lu, Yonggang

    2017-01-01

    With the rapid growth in the number of known protein 3D structures, how to efficiently compare protein structures has become an essential and challenging problem in computational structural biology. At present, many protein structure alignment methods have been developed. Among them, flexible structure alignment methods are shown to be superior to rigid structure alignment methods in identifying structure similarities between proteins that have gone through conformational changes. It is also found that methods based on aligned fragment pairs (AFPs) have a special advantage over other approaches in balancing global and local structure similarities. Accordingly, we propose a new flexible protein structure alignment method based on variable-length AFPs. Compared with other methods, the proposed method possesses three main advantages. First, it is based on variable-length AFPs: the length of each AFP is separately determined to maximally represent a locally similar structure fragment, which reduces the number of AFPs. Second, it uses local coordinate systems, which simplify the computation at each step of the expansion of AFPs during AFP identification. Third, it decreases the number of twists by rewarding the situation where nonconsecutive AFPs share the same transformation in the alignment, which is realized by dynamic programming with an improved transition function. The experimental data show that, compared with FlexProt, FATCAT, and FlexSnap, the proposed method achieves comparable results while introducing fewer twists. Meanwhile, it generates results similar to those of the FATCAT method in much less running time, due to the reduced number of AFPs.

  17. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1995-01-01

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

  18. Length and GC content variability of introns among teleostean genomes in the light of the metabolic rate hypothesis.

    Science.gov (United States)

    Chaurasia, Ankita; Tarallo, Andrea; Bernà, Luisa; Yagi, Mitsuharu; Agnisola, Claudio; D'Onofrio, Giuseppe

    2014-01-01

    A comparative analysis of five teleostean genomes, namely zebrafish, medaka, three-spine stickleback, fugu and pufferfish, was performed with the aim to highlight the nature of the forces driving both length and base composition of introns (i.e., bpi and GCi). An inter-genome approach using orthologous intronic sequences was carried out, analyzing both variables independently in pairwise comparisons. An average length shortening of introns was observed at increasing average GCi values. The result was not affected by masking transposable and repetitive elements harbored in the intronic sequences. The routine metabolic rate (mass specific, temperature-corrected using the Boltzmann's factor) was measured for each species. A significant correlation held between average differences of metabolic rate, length and GC content, while the environmental temperature of the fish habitat was not correlated with bpi and GCi. Analyzing the concomitant effect of both variables, i.e., bpi and GCi, at increasing genomic GC content, a decrease of bpi and an increase of GCi was observed for the significant majority of the intronic sequences (from ∼ 40% to ∼ 90%, in each pairwise comparison). The opposite event, a concomitant increase of bpi and decrease of GCi, was counter-selected (from <1% to ∼ 10%, in each pairwise comparison). The results further support the hypothesis that the metabolic rate plays a key role in shaping genome architecture and evolution of vertebrate genomes.

  19. TMRBAR power balance code for tandem mirror reactors

    International Nuclear Information System (INIS)

    Blackkfield, D.T.; Campbell, R.; Fenstermacher, M.; Bulmer, R.; Perkins, L.; Peng, Y.K.M.; Reid, R.L.; Wu, K.F.

    1984-01-01

    A revised version of the tandem mirror multi-point code TMRBAR developed at LLNL has been used to examine various reactor designs using MARS-like 'c' coils. We solve 14 to 16 non-linear equations to obtain the densities, temperatures, plasma potential and magnetic field on axis at the cardinal points. Since ICRH, ECRH, and neutral beams may be used to stabilize the central cell, various combinations of rf and neutral beam powers may satisfy the physics. To select a desired set of physics parameters, we use nonlinear optimization techniques. With these routines, we minimize or maximize a physics variable subject to the physics constraints being satisfied. For example, for a given fusion power we may find the minimum length needed to have an ignited central cell, or the maximum fusion Q. Finally, we have coupled this physics model to the LLNL magnetics-MHD code. This code runs the EFFI magnetic field generator and uses TEBASCO to calculate 1-D MHD equilibria and stability.

  20. Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.

    Science.gov (United States)

    Menicucci, Nicolas C

    2014-03-28

    A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.

  1. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  2. FODA/IBEA satellite access scheme for MIXED traffic at variable bit and coding rates system description

    OpenAIRE

    Celandroni, Nedo; Ferro, Erina; Mihal, Vlado; Potortì, Francesco

    1992-01-01

    This report describes the FODA system working at variable coding and bit rates (FODA/IBEA-TDMA). FODA/IBEA is the natural evolution of the FODA-TDMA satellite access scheme working at a fixed rate of 2 Mbit/s with data 1/2-coded or uncoded. FODA-TDMA was used in the European SATINE-II experiment [8]. We remind the reader that the term FODA/IBEA system covers both the FODA/IBEA-TDMA (1) satellite access scheme and the hardware prototype realised by Marconi R.C. (U.K.). Both of them come fro...

  3. Investigating the Effect of Recruitment Variability on Length-Based Recruitment Indices for Antarctic Krill Using an Individual-Based Population Dynamics Model

    Science.gov (United States)

    Thanassekos, Stéphane; Cox, Martin J.; Reid, Keith

    2014-01-01

    Antarctic krill (Euphausia superba; herein krill) is monitored as part of an on-going fisheries observer program that collects length-frequency data. A krill feedback management programme is currently being developed, and as part of this development, the utility of data-derived indices describing population-level processes is being assessed. To date, however, little work has been carried out on the selection of optimum recruitment indices, and it has not been possible to assess the performance of length-based recruitment indices across a range of recruitment variability. Neither has there been an assessment of uncertainty in the relationship between an index and the actual level of recruitment. Thus, until now, it has not been possible to take recruitment index uncertainty into account in krill stock management or when investigating relationships between recruitment and environmental drivers. Using length-frequency samples from a simulated population – where recruitment is known – the performance of six potential length-based recruitment indices is assessed by exploring the index-to-recruitment relationship under increasing levels of recruitment variability (from ±10% to ±100% around a mean annual recruitment). The annual minimum of the proportion of individuals smaller than 40 mm (F40 min, %) was selected because it had the most robust index-to-recruitment relationship across differing levels of recruitment variability. The relationship was curvilinear and best described by a power law. Model uncertainty was described using the 95% prediction intervals, which were used to calculate coverage probabilities and assess model performance. Despite being the optimum recruitment index, the performance of F40 min degraded under high (>50%) recruitment variability. Due to the persistence of cohorts in the population over several years, the inclusion of F40 min values from preceding years in the relationship used to estimate recruitment in a given year improved its performance.

  4. Optimizing x-ray mirror thermal performance using variable length cooling for second generation FELs

    Science.gov (United States)

    Hardin, Corey L.; Srinivasan, Venkat N.; Amores, Lope; Kelez, Nicholas M.; Morton, Daniel S.; Stefan, Peter M.; Nicolas, Josep; Zhang, Lin; Cocco, Daniele

    2016-09-01

    The success of the LCLS has led to interest across a number of disciplines in the scientific community, including physics, chemistry, biology, and materials science. Fueled by this success, SLAC National Accelerator Laboratory is developing a new high repetition rate free electron laser, LCLS-II, a superconducting linear accelerator capable of a repetition rate up to 1 MHz. Undulators will be optimized for 200 to 1300 eV soft X-rays, and for 1000 to 5000 eV hard X-rays. To absorb spontaneous radiation and higher harmonic energies, and to deflect the x-ray beam to various end stations, the transport and diagnostics system includes grazing-incidence plane mirrors on both the soft and hard X-ray beamlines. To deliver the FEL beam with minimal power loss and wavefront distortion, we need mirrors with height errors below 1 nm rms in operational conditions, and we need to mitigate the thermal load effects due to the high repetition rate. The absorbed thermal profile is highly dependent on the beam divergence, which is a function of the photon energy. To address this complexity, we developed a mirror cradle with variable-length cooling and first-order curve correction. Mirror figure error is minimized using variable-length water cooling through a gallium-indium eutectic bath. Curve correction is achieved with an off-axis bender, which is described in detail. We present the design features, mechanical analysis and results from optical and mechanical tests of a prototype assembly, with particular regard to the figure sensitivity to bender corrections.

  5. Reconciliation of international administrative coding systems for comparison of colorectal surgery outcome.

    Science.gov (United States)

    Munasinghe, A; Chang, D; Mamidanna, R; Middleton, S; Joy, M; Penninckx, F; Darzi, A; Livingston, E; Faiz, O

    2014-07-01

    Significant variation in colorectal surgery outcomes exists between different countries. Better understanding of the sources of variable outcomes using administrative data requires alignment of differing clinical coding systems. We aimed to map similar diagnoses and procedures across administrative coding systems used in different countries. Administrative data were collected in a central database as part of the Global Comparators (GC) Project. In order to unify these data, a systematic translation of diagnostic and procedural codes was undertaken. Codes for colorectal diagnoses, resections, operative complications and reoperative interventions were mapped across the respective national healthcare administrative coding systems. Discharge data from January 2006 to June 2011 for patients who had undergone colorectal surgical resections were analysed to generate risk-adjusted models for mortality, length of stay, readmissions and reoperations. In all, 52 544 case records were collated from 31 institutions in five countries. Mapping of all the coding systems was achieved so that diagnosis and procedures from the participant countries could be compared. Using the aligned coding systems to develop risk-adjusted models, the 30-day mortality rate for colorectal surgery was 3.95% (95% CI 0.86-7.54), the 30-day readmission rate was 11.05% (5.67-17.61), the 28-day reoperation rate was 6.13% (3.68-9.66) and the mean length of stay was 14 (7.65-46.76) days. The linkage of international hospital administrative data that we developed enabled comparison of documented surgical outcomes between countries. This methodology may facilitate international benchmarking. Colorectal Disease © 2014 The Association of Coloproctology of Great Britain and Ireland.

  6. On the equivalence of cyclic and quasi-cyclic codes over finite fields

    Directory of Open Access Journals (Sweden)

    Kenza Guenda

    2017-07-01

    Full Text Available This paper studies the equivalence problem for cyclic codes of length $p^r$ and quasi-cyclic codes of length $p^rl$. In particular, we generalize the results of Huffman, Job, and Pless (J. Combin. Theory A, 62, 183--215, 1993), who considered the special case $p^2$. This is achieved by explicitly giving the permutations by which two cyclic codes of prime power length are equivalent. This allows us to obtain an algorithm which solves the equivalence problem for cyclic codes of length $p^r$ in polynomial time. Further, we characterize the set by which two quasi-cyclic codes of length $p^rl$ can be equivalent, and prove that the affine group is one of its subsets.

  7. Codes on the Klein quartic, ideals, and decoding

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    1987-01-01

    A sequence of codes with particular symmetries and with large rates compared to their minimal distances is constructed over the field GF(2^{3}). In the sequence there is, for instance, a code of length 21 and dimension 10 with minimal distance 9, and a code of length 21 and dimension 16 with minimal distance 3. The codes are constructed from algebraic geometry using the dictionary between coding theory and algebraic curves over finite fields established by Goppa. The curve used in the present work is the Klein quartic; this curve has the maximal number of rational points over GF(2^{3}) allowed by Serre's bound. The symmetries of the curve give the codes descriptions as left ideals in the group-algebra GF(2^{3})[G]. This description allows for easy decoding. For instance, in the case of the single error correcting code of length 21 and dimension 16 with minimal distance 3, decoding is obtained by multiplication with an idempotent in the group algebra.

  8. Length and GC content variability of introns among teleostean genomes in the light of the metabolic rate hypothesis.

    Directory of Open Access Journals (Sweden)

    Ankita Chaurasia

    Full Text Available A comparative analysis of five teleostean genomes, namely zebrafish, medaka, three-spine stickleback, fugu and pufferfish, was performed with the aim to highlight the nature of the forces driving both length and base composition of introns (i.e., bpi and GCi). An inter-genome approach using orthologous intronic sequences was carried out, analyzing both variables independently in pairwise comparisons. An average length shortening of introns was observed at increasing average GCi values. The result was not affected by masking transposable and repetitive elements harbored in the intronic sequences. The routine metabolic rate (mass specific, temperature-corrected using the Boltzmann's factor) was measured for each species. A significant correlation held between average differences of metabolic rate, length and GC content, while the environmental temperature of the fish habitat was not correlated with bpi and GCi. Analyzing the concomitant effect of both variables, i.e., bpi and GCi, at increasing genomic GC content, a decrease of bpi and an increase of GCi was observed for the significant majority of the intronic sequences (from ∼ 40% to ∼ 90%, in each pairwise comparison). The opposite event, a concomitant increase of bpi and decrease of GCi, was counter-selected (from <1% to ∼ 10%, in each pairwise comparison). The results further support the hypothesis that the metabolic rate plays a key role in shaping genome architecture and evolution of vertebrate genomes.

  9. User's manual for the TMAD code

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    1995-01-01

    This document serves as the User's Manual for the TMAD code system, which includes the TMAD code and the LIBMAKR code. The TMAD code was commissioned to make it easier to interpret moisture probe measurements in the Hanford Site waste tanks. In principle, the code is an interpolation routine that acts over a library of benchmark data based on two independent variables, typically anomaly size and moisture content. Two additional variables, anomaly type and detector type, also can be considered independent variables, but no interpolation is done over them. The dependent variable is detector response. The intent is to provide the code with measured detector responses from two or more detectors. The code then will interrogate (and interpolate upon) the benchmark data library and find the anomaly-type/anomaly-size/moisture-content combination that provides the closest match to the measured data
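
    A minimal sketch of the kind of two-variable interpolation described above, over a tiny hypothetical benchmark library (the grid values are invented; this is not the TMAD algorithm itself):

        import bisect

        def bilinear(x, y, xs, ys, table):
            """Bilinear interpolation of table[i][j] = response at (xs[i], ys[j])."""
            i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
            j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
            tx = (x - xs[i]) / (xs[i + 1] - xs[i])
            ty = (y - ys[j]) / (ys[j + 1] - ys[j])
            return ((1 - tx) * (1 - ty) * table[i][j] + tx * (1 - ty) * table[i + 1][j]
                    + (1 - tx) * ty * table[i][j + 1] + tx * ty * table[i + 1][j + 1])

        sizes = [1.0, 2.0, 4.0]            # anomaly size (arbitrary units)
        moistures = [0.0, 10.0, 20.0]      # moisture content (wt%)
        response = [[5.0, 9.0, 14.0],      # detector response at (sizes[i], moistures[j])
                    [6.0, 11.0, 17.0],
                    [8.0, 15.0, 23.0]]
        print(bilinear(3.0, 12.0, sizes, moistures, response))   # -> 14.4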

  10. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding with feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.

  11. Isolation and characterization of full-length cDNA clones coding for cholinesterase from fetal human tissues

    International Nuclear Information System (INIS)

    Prody, C.A.; Zevin-Sonkin, D.; Gnatt, A.; Goldberg, O.; Soreq, H.

    1987-01-01

    To study the primary structure and regulation of human cholinesterases, oligodeoxynucleotide probes were prepared according to a consensus peptide sequence present in the active site of both human serum pseudocholinesterase and Torpedo electric organ true acetylcholinesterase. Using these probes, the authors isolated several cDNA clones from λgt10 libraries of fetal brain and liver origins. These include 2.4-kilobase cDNA clones that code for a polypeptide containing a putative signal peptide and the N-terminal, active site, and C-terminal peptides of human BtChoEase, suggesting that they code either for BtChoEase itself or for a very similar but distinct fetal form of cholinesterase. In RNA blots of poly(A)+ RNA from the cholinesterase-producing fetal brain and liver, these cDNAs hybridized with a single 2.5-kilobase band. Blot hybridization to human genomic DNA revealed that these fetal BtChoEase cDNA clones hybridize with DNA fragments with a total length of 17.5 kilobases, and signal intensities indicated that these sequences are not present in many copies. Both the cDNA-encoded protein and its nucleotide sequence display striking homology to parallel sequences published for Torpedo AcChoEase. These findings demonstrate extensive homologies between the fetal BtChoEase encoded by these clones and other cholinesterases of various forms and species.

  12. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross correlation (CC) and practical code length to support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are getting more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase-induced intensity noise (PIIN). In this paper, we propose new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC, based on the Jordan block matrix and constructed in simple algebraic ways. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and can support long spans with high data rates.

  13. Quantum Codes From Cyclic Codes Over The Ring R_2

    International Nuclear Information System (INIS)

    Altinel, Alev; Güzeltepe, Murat

    2016-01-01

    Let R_2 denote the ring F_2 + μF_2 + υF_2 + μυF_2 + wF_2 + μwF_2 + υwF_2 + μυwF_2. In this study, we construct quantum codes from cyclic codes over the ring R_2, for arbitrary length n, with the restrictions μ² = 0, υ² = 0, w² = 0, μυ = υμ, μw = wμ, υw = wυ and μ(υw) = (μυ)w. Also, we give a necessary and sufficient condition for cyclic codes over R_2 to contain their duals. As a final point, we obtain the parameters of quantum error-correcting codes from cyclic codes over R_2, and we give an example of quantum error-correcting codes formed from cyclic codes over R_2. (paper)

  14. Weighted-Bit-Flipping-Based Sequential Scheduling Decoding Algorithms for LDPC Codes

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2013-01-01

    Full Text Available Low-density parity-check (LDPC) codes can be applied in many different scenarios, such as video broadcasting and satellite communications. LDPC codes are commonly decoded by an iterative algorithm called belief propagation (BP) over the corresponding Tanner graph. The original BP updates all the variable nodes simultaneously, followed by all the check nodes simultaneously as well. We propose a sequential scheduling algorithm based on the weighted bit-flipping (WBF) algorithm for the sake of improving the convergence speed. Notably, WBF is a simple, low-complexity algorithm. We combine it with BP to obtain the advantages of both algorithms: the flipping function used in WBF is borrowed to determine the scheduling priority. Simulation results show that the method provides a good tradeoff between FER performance and computational complexity for short-length LDPC codes.
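
    A compact sketch of the weighted bit-flipping baseline discussed above (one common WBF variant; the paper's sequential-scheduling combination with BP is not reproduced here):

        import numpy as np

        def wbf_decode(H, y_soft, max_iter=50):
            """Flip, at each step, the bit whose unsatisfied checks are most reliable."""
            z = (y_soft < 0).astype(int)           # hard decisions (BPSK: bit 1 -> -1)
            w = np.array([np.abs(y_soft)[H[m] == 1].min() for m in range(H.shape[0])])
            for _ in range(max_iter):
                s = H @ z % 2                      # syndrome of the current word
                if not s.any():
                    break                          # all parity checks satisfied
                e = ((2 * s - 1) * w) @ H          # per-bit flipping function
                z[np.argmax(e)] ^= 1               # flip the most suspicious bit
            return z

        H = np.array([[1, 1, 0, 1, 1, 0, 0],       # (7,4) Hamming parity checks
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])
        y = np.array([-0.9, -0.2, -1.1, -0.8, 1.0, -0.7, 0.9])  # bit 1 flipped, low reliability
        print(wbf_decode(H, y))                    # -> [1 0 1 1 0 1 0], the codeword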

  15. Analysis of visual coding variables on CRT generated displays

    International Nuclear Information System (INIS)

    Blackman, H.S.; Gilmore, W.E.

    1985-01-01

    Cathode ray tube generated safety parameter display systems in nuclear power plant control rooms have been found to be more effective when color coding is employed. Research indicates strong support for graphic coding techniques, particularly in redundant coding schemes. In addition, findings on pictographs, as applied in coding schemes, indicate the need for careful application and for further research toward the development of a standardized set of symbols.

  16. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
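
    A minimal sketch of the idea: a sparse binary source block is treated as an error pattern, and only its syndrome under the Hamming (7,4) parity-check matrix is stored. The decoder returns the minimum-weight pattern with that syndrome, which is exact whenever each 7-bit block contains at most one 1 (an illustrative simplification of the scheme):

        import numpy as np

        H = np.array([[1, 0, 1, 0, 1, 0, 1],   # Hamming (7,4) checks: column n is
                      [0, 1, 1, 0, 0, 1, 1],   # the binary representation of n + 1
                      [0, 0, 0, 1, 1, 1, 1]])

        def compress(block):
            return H @ block % 2               # 7 source bits -> 3 syndrome bits

        def decompress(syndrome):
            block = np.zeros(7, dtype=int)
            pos = syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2]
            if pos:                            # syndrome encodes the position of the 1
                block[pos - 1] = 1
            return block

        source = np.array([0, 0, 0, 0, 1, 0, 0])    # sparse block, weight <= 1
        assert (decompress(compress(source)) == source).all()

    Here 7 source bits compress to 3; as the abstract notes, longer codes let the rate approach the entropy of a sparse binary memoryless source.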

  17. Highly variable aerodynamic roughness length (z0) for a hummocky debris-covered glacier

    Science.gov (United States)

    Miles, Evan S.; Steiner, Jakob F.; Brun, Fanny

    2017-08-01

    The aerodynamic roughness length (z0) is an essential parameter in surface energy balance studies, but few literature values exist for debris-covered glaciers. We use microtopographic and aerodynamic methods to assess the spatial variability of z0 for Lirung Glacier, Nepal. We apply structure from motion to produce digital elevation models for three nested domains: five 1 m² plots, a 21,300 m² surface depression, and the lower 550,000 m² of the debris-mantled tongue. Wind and temperature sensor towers were installed in the vicinity of the plots within the surface depression in October 2014. We calculate z0 according to a variety of transect-based microtopographic parameterizations for each plot, then develop a grid version of the algorithms by aggregating data from all transects. This grid approach is applied to the surface depression digital elevation model to characterize the spatial variability of z0. The algorithms reproduce the same variability among transects and plots, but z0 estimates vary by an order of magnitude between algorithms. Across the study depression, results from different algorithms are strongly correlated. Using Monin-Obukhov similarity theory, we derive z0 values from the meteorological data. Using different stability criteria, we derive median values of z0 between 0.03 m and 0.05 m, but with considerable uncertainty due to the glacier's complex topography. Considering estimates from these algorithms, results suggest that z0 varies across Lirung Glacier between ∼0.005 m (gravels) and ∼0.5 m (boulders). Future efforts should assess the importance of such variable z0 values in a distributed energy balance model.
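
    For context, the aerodynamic estimate comes from the logarithmic wind profile of Monin-Obukhov theory, u(z) = (u*/κ)·ln(z/z0) under neutral stability; wind speeds at two heights then pin down z0 directly (the tower readings below are invented for illustration, and real profiles need the stability corrections discussed above):

        import math

        def z0_from_profile(u1, z1, u2, z2):
            """Neutral log law u(z) = (u*/kappa) * ln(z / z0); eliminating the
            factor u*/kappa from the two equations leaves z0."""
            return math.exp((u2 * math.log(z1) - u1 * math.log(z2)) / (u2 - u1))

        # Hypothetical data: 2.0 m/s at 1 m and 3.0 m/s at 3 m above the debris.
        print(z0_from_profile(2.0, 1.0, 3.0, 3.0))   # -> ~0.11 m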

  18. Achievable Performance of Zero-Delay Variable-Rate Coding in Rate-Constrained Networked Control Systems with Channel Delay

    DEFF Research Database (Denmark)

    Barforooshan, Mohsen; Østergaard, Jan; Stavrou, Fotios

    2017-01-01

    This paper presents an upper bound on the minimum data rate required to achieve a prescribed closed-loop performance level in networked control systems (NCSs). The considered feedback loop includes a linear time-invariant (LTI) plant with a single measurement output and a single control input. Moreover, in this NCS, a causal but otherwise unconstrained feedback system carries out zero-delay variable-rate coding and control. Between the encoder and decoder, data is exchanged over a rate-limited noiseless digital channel with a known constant time delay. Here we propose a linear source-coding scheme ...

  19. New quantum codes derived from a family of antiprimitive BCH codes

    Science.gov (United States)

    Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin

    The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q^2-ary BCH codes with length n = q^{2m} + 1 (also called antiprimitive BCH codes in the literature), where q ≥ 4 is a power of 2 and m ≥ 2. By a detailed analysis of some useful properties of q^2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q^2-ary primitive BCH codes. Consequently, via the Hermitian construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.

  20. FEMAXI-III, a computer code for fuel rod performance analysis

    International Nuclear Information System (INIS)

    Ito, K.; Iwano, Y.; Ichikawa, M.; Okubo, T.

    1983-01-01

    This paper presents the method of fuel rod thermal-mechanical performance analysis used in the FEMAXI-III code. The code incorporates models describing thermal-mechanical processes such as pellet-cladding thermal expansion, pellet irradiation swelling, densification, relocation and fission gas release, as they affect the pellet-cladding gap thermal conductance. The code performs the thermal behavior analysis of a full-length fuel rod within the framework of one-dimensional multi-zone modeling. The mechanical effects, including ridge deformation, are rigorously analyzed by applying the axisymmetric finite element method. The finite element geometrical model is confined to a half-pellet-height region, with the assumption that pellet-pellet interaction is symmetrical. The 8-node quadratic isoparametric ring elements are adopted to obtain accurate finite element solutions. The Newton-Raphson iteration with an implicit algorithm is applied to analyze non-linear material behaviors accurately and stably. The pellet-cladding interaction mechanism is treated exactly using the nodal continuity conditions. The code is applicable to the thermal-mechanical analysis of water reactor fuel rods experiencing variable power histories. (orig.)

  1. Buccal telomere length and its associations with cortisol, heart rate variability, heart rate, and blood pressure responses to an acute social evaluative stressor in college students.

    Science.gov (United States)

    Woody, Alex; Hamilton, Katrina; Livitz, Irina E; Figueroa, Wilson S; Zoccola, Peggy M

    2017-05-01

    Understanding the relationship between stress and telomere length (a marker of cellular aging) is of great interest for reducing aging-related disease and death. One important aspect of acute stress exposure that may underlie detrimental effects on health is physiological reactivity to the stressor. This study tested the relationship between buccal telomere length and physiological reactivity (salivary cortisol reactivity and total output, heart rate (HR) variability, blood pressure, and HR) to an acute psychosocial stressor in a sample of 77 (53% male) healthy young adults. Consistent with predictions, greater reductions in HR variability (HRV) in response to a stressor and greater cortisol output during the study session were associated with shorter relative buccal telomere length (i.e. greater cellular aging). However, the relationship between cortisol output and buccal telomere length became non-significant when adjusting for medication use. Contrary to past findings and study hypotheses, associations between cortisol, blood pressure, and HR reactivity and relative buccal telomere length were not significant. Overall, these findings may indicate there are limited and mixed associations between stress reactivity and telomere length across physiological systems.

  2. VARIABILITY OF LENGTH OF STEM OF DETERMINATE AND INDETERMINATE CULTIVARS OF COMMON VETCH (VICIA SATIVA L. SSP. SATIVA) AND ITS IMPACT ON SELECTED CROPPING FEATURES

    Directory of Open Access Journals (Sweden)

    Jadwiga ANDRZEJEWSKA

    2006-12-01

    In the years 2001 and 2002, a study was conducted in six experiments in order to examine the determinants of stem length variability and its impact on cropping features of determinate and indeterminate cultivars of common vetch. Rainfall in June and July, as well as during the whole growing season, was positively correlated with stem length but negatively correlated with seed yield, to a larger extent in the group of indeterminate cultivars than in the determinate one. Duration of the blooming stage, stem length, and seed yield showed the largest variability in both groups. An increase in the stem length of plants of indeterminate cultivars led to delayed maturation, less even maturation, and a decrease in the thousand seed weight and seed yield. An increase in the stem length of plants of determinate cultivars delayed reaching the phase of technical maturation and decreased the evenness of plant maturation. Determinate growth of common vetch did not lead to a reduction of lodging.

  3. Multiplexed coding in the human basal ganglia

    Science.gov (United States)

    Andres, D. S.; Cerquetti, D.; Merello, M.

    2016-04-01

    A classic controversy in neuroscience is whether information carried by spike trains is encoded by a time-averaged measure (e.g. a rate code) or by complex time patterns (i.e. a time code). Here we apply a tool to quantitatively analyze the neural code. We make use of an algorithm based on the calculation of the temporal structure function, which makes it possible to distinguish which scales of a signal are dominated by a complex temporal organization and which by a randomly generated process. In terms of the neural code, this kind of analysis makes it possible to detect the temporal scales at which a time-pattern coding scheme or, alternatively, a rate code is present. Additionally, by finding the temporal scale at which the correlation between interspike intervals fades, the length of the basic information unit of the code can be established, and hence the word length of the code can be found. We apply this algorithm to neuronal recordings obtained from the Globus Pallidus pars interna of a human patient with Parkinson's disease, and show that a time-pattern coding scheme and a rate coding scheme co-exist at different temporal scales, offering a new example of multiplexed neuronal coding.
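
    As a rough illustration of the method described above, the following is a minimal Python sketch of a second-order temporal structure function, S(tau) = mean over t of |x(t+tau) - x(t)|^2, applied to an interspike-interval sequence; the exact estimator and normalization used by the authors may differ.

        import numpy as np

        def structure_function(x, taus, order=2):
            # S(tau) = mean(|x(t + tau) - x(t)|^order), averaged over all valid t
            x = np.asarray(x, dtype=float)
            return np.array([np.mean(np.abs(x[t:] - x[:-t]) ** order) for t in taus])

        # usage with a spike train: isi = np.diff(spike_times)
        isi = np.abs(np.random.default_rng(0).normal(1.0, 0.2, 1000))
        print(structure_function(isi, taus=range(1, 6)))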

  4. Variability in word reading performance of dyslexic readers: effects of letter length, phoneme length and digraph presence

    NARCIS (Netherlands)

    Marinus, E.; de Jong, P.F.

    2010-01-01

    The marked word-length effect in dyslexic children suggests the use of a letter-by-letter reading strategy. Such a strategy should make it more difficult to infer the sound of digraphs. Our main aim was to disentangle length and digraph-presence effects in word and pseudoword reading. In addition,

  5. Variable RF capacitor based on a-Si:H (P-doped) multi-length cantilevers

    International Nuclear Information System (INIS)

    Fu, Y Q; Milne, S B; Luo, J K; Flewitt, A J; Wang, L; Miao, J M; Milne, W I

    2006-01-01

    A variable RF capacitor with a-Si:H (phosphorus-doped) cantilevers as the top electrode was designed and fabricated. Because the top multi-cantilever electrodes have different lengths, increasing the applied voltage pulled down the cantilever beams sequentially, thus realizing a gradual increase of the capacitance with the applied voltage. A high-k material, HfO2, was used as an insulating layer to increase the tuning range of the capacitance. The measured capacitance of the fabricated capacitor was much lower, and the pull-in voltage much higher, than predicted by theoretical analysis, because of incomplete contact of the two electrodes, the existence of differential film stresses and a charge injection effect. Increasing the voltage sweep rate could significantly shift the pull-in voltage to higher values due to the charge injection mechanisms.

  6. Truncation Depth Rule-of-Thumb for Convolutional Codes

    Science.gov (United States)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
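
    As a quick numerical illustration of the rule (a sketch; the constant 2.5 and the rate dependence come directly from the abstract above):

        def truncation_depth(m, r):
            # rule of thumb from the abstract: depth = 2.5 * m / (1 - r);
            # at rate r = 1/2 this reduces to the classic "five times the memory" rule
            return 2.5 * m / (1.0 - r)

        print(truncation_depth(6, 0.5))   # 30.0, i.e. 5 * m
        print(truncation_depth(6, 0.75))  # 60.0: high-rate codes need deeper truncation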

  7. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    A new code structure for spectral amplitude coding optical code division multiple access systems based on double-weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is a variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. Both theoretical analysis and simulation show that the EDW code provides much better performance than existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes.

  8. Torsion of the bar of the round transverse section from the variable on length and the transverse section porosity

    Directory of Open Access Journals (Sweden)

    Shlyakhov S.M.

    2017-06-01

    The present article is devoted to the task of finding the level of the secondary shear (tangential) stresses arising in cross sections of a bar whose porosity varies along its length. Solving this task makes it possible to account for secondary shear stresses when determining the load-bearing capacity of a porous bar. The distribution of porosity over the cross section is set rationally, proceeding from previously solved tasks on the selection of porosity for torsion of a bar of round cross section; along the bar length it follows a linear law. The objective of the research is to determine the level of the secondary shear stresses and to evaluate their magnitude.

  9. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.

  10. Optimized Min-Sum Decoding Algorithm for Low Density Parity Check Codes

    OpenAIRE

    Mohammad Rakibul Islam; Dewan Siam Shafiullah; Muhammad Mostafa Amir Faisal; Imran Rahman

    2011-01-01

    Low Density Parity Check (LDPC) code approaches Shannon–limit performance for binary field and long code lengths. However, performance of binary LDPC code is degraded when the code word length is small. An optimized min-sum algorithm for LDPC code is proposed in this paper. In this algorithm unlike other decoding methods, an optimization factor has been introduced in both check node and bit node of the Min-sum algorithm. The optimization factor is obtained before decoding program, and the sam...

  11. A modified carrier-to-code leveling method for retrieving ionospheric observables and detecting short-term temporal variability of receiver differential code biases

    Science.gov (United States)

    Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Xiao; Li, Min

    2018-03-01

    Sensing the ionosphere with the global positioning system involves two sequential tasks, namely the ionospheric observable retrieval and the ionospheric parameter estimation. A prominent source of error has long been identified as short-term variability in receiver differential code bias (rDCB). We modify the carrier-to-code leveling (CCL), a method commonly used to accomplish the first task, through assuming rDCB to be unlinked in time. Aside from the ionospheric observables, which are affected by, among others, the rDCB at one reference epoch, the Modified CCL (MCCL) can also provide the rDCB offsets with respect to the reference epoch as by-products. Two consequences arise. First, MCCL is capable of excluding the effects of time-varying rDCB from the ionospheric observables, which, in turn, improves the quality of ionospheric parameters of interest. Second, MCCL has significant potential as a means to detect between-epoch fluctuations experienced by rDCB of a single receiver.

  12. Some new quasi-twisted ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2015-09-01

    Let an [n, k, d]_q code be a linear code of length n, dimension k and minimum Hamming distance d over GF(q). One of the basic and most important problems in coding theory is to construct codes with the best possible minimum distances. In this paper seven quasi-twisted ternary linear codes are constructed. These codes are new and improve the best known lower bounds on the minimum distance given in [6].
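
    For small parameters, minimum-distance claims of this kind can be checked by exhaustive search. Below is a minimal sketch over GF(3); the generator matrix G is an illustrative stand-in, not one of the seven codes from the paper.

        import itertools
        import numpy as np

        def min_distance(G, q=3):
            # brute-force the minimum Hamming weight over all nonzero codewords mG
            k, n = G.shape
            best = n
            for m in itertools.product(range(q), repeat=k):
                if any(m):
                    best = min(best, np.count_nonzero(np.mod(np.dot(m, G), q)))
            return best

        G = np.array([[1, 0, 1, 2, 1],
                      [0, 1, 2, 1, 1]])   # toy [5, 2] ternary code
        print(min_distance(G))            # feasible only for small k (q**k codewords)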

  13. On decoding of multi-level MPSK modulation codes

    Science.gov (United States)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically and is suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  14. Deterministic Quantum Secure Direct Communication with Dense Coding and Continuous Variable Operations

    International Nuclear Information System (INIS)

    Han Lianfang; Chen Yueming; Yuan Hao

    2009-01-01

    We propose a deterministic quantum secure direct communication protocol using dense coding. Two check photon sequences are used to check the security of the channels between the message sender and the receiver. Continuous variable operations, instead of the usual discrete unitary operations, are performed on the travel photons so that the security of the present protocol can be enhanced. Therefore some specific attacks, such as the denial-of-service attack, the intercept-measure-resend attack and the invisible photon attack, can be prevented in an ideal quantum channel. In addition, the scheme remains secure in a noisy channel. Furthermore, this protocol has the advantage of high capacity and can be realized experimentally. (general)

  15. Paraxial design of an optical element with variable focal length and fixed position of principal planes.

    Science.gov (United States)

    Mikš, Antonín; Novák, Pavel

    2018-05-10

    In this article, we analyze the problem of the paraxial design of an active optical element with variable focal length, which maintains the positions of its principal planes fixed during the change of its optical power. Such optical elements are important in the process of design of complex optical systems (e.g., zoom systems), where the fixed position of principal planes during the change of optical power is essential for the design process. The proposed solution is based on the generalized membrane tunable-focus fluidic lens with several membrane surfaces.

  16. Single integrated device for optical CDMA code processing in dual-code environment.

    Science.gov (United States)

    Huang, Yue-Kai; Glesk, Ivan; Greiner, Christoph M; Iazkov, Dmitri; Mossberg, Thomas W; Wang, Ting; Prucnal, Paul R

    2007-06-11

    We report on the design, fabrication and performance of a matching integrated optical CDMA encoder-decoder pair based on holographic Bragg reflector technology. Simultaneous encoding/decoding operation of two multiple wavelength-hopping time-spreading codes was successfully demonstrated and shown to support two error-free OCDMA links at OC-24. A double-pass scheme was employed in the devices to enable the use of longer code length.

  17. Effects of Unpredictable Variable Prenatal Stress (UVPS) on Bdnf DNA Methylation and Telomere Length in the Adult Rat Brain

    Science.gov (United States)

    Blaze, Jennifer; Asok, A.; Moyer, E. L.; Roth, T. L.; Ronca, A. E.

    2015-01-01

    In utero exposure to stress can shape neurobiological and behavioral outcomes in offspring, producing vulnerability to psychopathology later in life. Animal models of prenatal stress likewise have demonstrated long-term alterations in brain function and behavioral deficits in offspring. For example, using a rodent model of unpredictable variable prenatal stress (UVPS), in which dams are exposed to unpredictable, variable stress across pregnancy, we have found increased body weight and anxiety-like behavior in adult male, but not female, offspring. DNA methylation (addition of methyl groups to cytosines, which normally represses gene transcription) and changes in telomere length (TTAGGG repeats on the ends of chromosomes) are two molecular modifications that result from stress and could be responsible for the long-term effects of UVPS. Here, we measured methylation of brain-derived neurotrophic factor (bdnf), a gene important in development and plasticity, and telomere length in the brains of adult offspring from the UVPS model. Results indicate that prenatally stressed adult males have greater methylation in the medial prefrontal cortex (mPFC) compared to non-stressed controls, while females have greater methylation in the ventral hippocampus compared to controls. Further, prenatally stressed males had shorter telomeres than controls in the mPFC. These findings demonstrate the ability of UVPS to produce epigenetic alterations and changes in telomere length across behaviorally relevant brain regions, which may have linkages to the phenotypic outcomes.

  18. Short-term memory coding in children with intellectual disabilities.

    Science.gov (United States)

    Henry, Lucy

    2008-05-01

    To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for MA and CA to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and word length effects). Neither the intellectual disabilities nor MA groups showed evidence for memory coding strategies. However, children in these groups with MAs above 6 years showed significant visual similarity and word length effects, broadly consistent with an intermediate stage of dual visual and verbal coding. These results suggest that developmental progressions in memory coding strategies are independent of intellectual disabilities status and consistent with MA.

  19. Gray Code for Cayley Permutations

    Directory of Open Access Journals (Sweden)

    J.-L. Baril

    2003-10-01

    A length-n Cayley permutation p of a totally ordered set S is a length-n sequence of elements from S, subject to the condition that if an element x appears in p then all elements y < x also appear in p. In this paper, we give a Gray code list for the set of length-n Cayley permutations. Two successive permutations in this list differ in at most two positions.
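
    As a minimal sketch of the objects being listed, the following brute-force enumeration generates all length-n Cayley permutations over S = {1, ..., n}; it does not produce the Gray code ordering constructed in the paper.

        from itertools import product

        def cayley_permutations(n):
            # sequences over {1..n} in which every value below the maximum also appears
            for p in product(range(1, n + 1), repeat=n):
                if set(p) == set(range(1, max(p) + 1)):
                    yield p

        print(list(cayley_permutations(2)))  # [(1, 1), (1, 2), (2, 1)]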

  20. Low Complexity List Decoding for Polar Codes with Multiple CRC Codes

    Directory of Open Access Journals (Sweden)

    Jong-Hwan Kim

    2017-04-01

    Polar codes are the first family of error-correcting codes that provably achieve the capacity of symmetric binary-input discrete memoryless channels with low complexity. Since the development of polar codes, there have been many studies to improve their finite-length performance. As a result, polar codes are now adopted as a channel code for the control channel of 5G new radio of the 3rd Generation Partnership Project. However, decoder implementation is one of the big practical problems, and low-complexity decoding has been studied. This paper addresses low-complexity successive cancellation list decoding for polar codes utilizing multiple cyclic redundancy check (CRC) codes. While some research uses multiple CRC codes to reduce memory and time complexity, we consider the operational complexity of decoding and reduce it by optimizing CRC positions in combination with a modified decoding operation. As a result, the proposed scheme obtains not only a complexity reduction from early stopping of decoding, but also an additional reduction from the reduced number of decoding paths.

  1. Cellular and circuit mechanisms maintain low spike co-variability and enhance population coding in somatosensory cortex

    Directory of Open Access Journals (Sweden)

    Cheng eLy

    2012-03-01

    The responses of cortical neurons are highly variable across repeated presentations of a stimulus. Understanding this variability is critical for theories of both sensory and motor processing, since response variance affects the accuracy of neural codes. Despite this influence, the cellular and circuit mechanisms that shape the trial-to-trial variability of population responses remain poorly understood. We used a combination of experimental and computational techniques to uncover the mechanisms underlying the response variability of populations of pyramidal (E) cells in layer 2/3 of rat whisker barrel cortex. Spike trains recorded from pairs of E-cells during either spontaneous activity or whisker-deflection responses show similarly low levels of spiking co-variability, despite large differences in network activation between the two states. We developed network models that show how spike threshold nonlinearities dilute E-cell spiking co-variability during spontaneous activity and low-velocity whisker deflections. In contrast, during high-velocity whisker deflections, cancellation mechanisms mediated by feedforward inhibition maintain low E-cell pairwise co-variability. Thus, the combination of these two mechanisms ensures low E-cell population variability over a wide range of whisker deflection velocities. Finally, we show how this active decorrelation of population variability leads to a drastic increase in the population information about whisker velocity. The canonical cellular and circuit components of our study suggest that low network variability over a broad range of neural states may generalize across the nervous system.

  2. On the Effects of Heterogeneous Packet Lengths on Network Coding

    DEFF Research Database (Denmark)

    Compta, Pol Torres; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2014-01-01

    Random linear network coding (RLNC) has been shown to provide increased throughput, security and robustness for the transmission of data through the network. Most of the analysis and the demonstrators have focused on the study of data packets with the same size (number of bytes). This constitutes...

  3. Quantum quasi-cyclic low-density parity-check error-correcting codes

    International Nuclear Information System (INIS)

    Yuan, Li; Gui-Hua, Zeng; Lee, Moon Ho

    2009-01-01

    In this paper, we propose the approach of employing circulant permutation matrices to construct quantum quasi-cyclic (QC) low-density parity-check (LDPC) codes. Using the proposed approach one may construct new quantum codes with various lengths and rates whose Tanner graphs are free of cycles of length 4. In addition, these constructed codes have the advantages of simple implementation and low-complexity encoding. Finally, the decoding approach for the proposed quantum QC LDPC codes is investigated. (general)
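
    A minimal sketch of the basic building block named above, the circulant permutation matrix; the extra algebraic constraints that make the resulting quasi-cyclic LDPC code a valid quantum (dual-containing) code are not shown here.

        import numpy as np

        def circulant_permutation(m, shift):
            # m x m identity matrix with its columns cyclically shifted by `shift`
            return np.roll(np.eye(m, dtype=int), shift, axis=1)

        # a prototype parity-check matrix tiled from circulant permutation blocks
        H = np.block([[circulant_permutation(5, 1), circulant_permutation(5, 3)],
                      [circulant_permutation(5, 2), circulant_permutation(5, 4)]])
        print(H.shape)          # (10, 10); every row and column has weight 2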

  4. Roughness Length Variability over Heterogeneous Surfaces

    Science.gov (United States)

    2010-03-01

    (2004), the influence of variable roughness reaches its maximum at the height of the local z0 and vanishes at the so-called blending height (Wieringa ... the distribution of visibility restrictors such as low clouds, fog, haze, dust, and pollutants. An improved understanding of ABL structure ... R. D., B. H. Lynn, A. Boone, W.-K. Tao, and J. Simpson, 2001: The influence of soil moisture, coastline curvature, and land-breeze circulations on...

  5. Improvement of genome assembly completeness and identification of novel full-length protein-coding genes by RNA-seq in the giant panda genome.

    Science.gov (United States)

    Chen, Meili; Hu, Yibo; Liu, Jingxing; Wu, Qi; Zhang, Chenglin; Yu, Jun; Xiao, Jingfa; Wei, Fuwen; Wu, Jiayan

    2015-12-11

    High-quality and complete gene models are the basis of whole genome analyses. The giant panda (Ailuropoda melanoleuca) genome was the first genome sequenced on the basis of solely short reads, but the genome annotation had lacked the support of transcriptomic evidence. In this study, we applied RNA-seq to globally improve the genome assembly completeness and to detect novel expressed transcripts in 12 tissues from giant pandas, by using a transcriptome reconstruction strategy that combined reference-based and de novo methods. Several aspects of genome assembly completeness in the transcribed regions were effectively improved by the de novo assembled transcripts, including genome scaffolding, the detection of small-size assembly errors, the extension of scaffold/contig boundaries, and gap closure. Through expression and homology validation, we detected three groups of novel full-length protein-coding genes. A total of 12.62% of the novel protein-coding genes were validated by proteomic data. GO annotation analysis showed that some of the novel protein-coding genes were involved in pigmentation, anatomical structure formation and reproduction, which might be related to the development and evolution of the black-white pelage, pseudo-thumb and delayed embryonic implantation of giant pandas. The updated genome annotation will help further giant panda studies from both structural and functional perspectives.

  6. Variable weight Khazani-Syed code using hybrid fixed-dynamic technique for optical code division multiple access system

    Science.gov (United States)

    Anas, Siti Barirah Ahmad; Seyedzadeh, Saleh; Mokhtar, Makhfudzah; Sahbudin, Ratna Kalos Zakiah

    2016-10-01

    Future Internet consists of a wide spectrum of applications with different bit rates and quality of service (QoS) requirements. Prioritizing the services is essential to ensure that the delivery of information is at its best. Existing technologies have demonstrated how service differentiation techniques can be implemented in optical networks using data link and network layer operations. However, a physical layer approach can further improve system performance at a prescribed received signal quality by applying control at the bit level. This paper proposes a coding algorithm to support optical domain service differentiation using spectral amplitude coding techniques within an optical code division multiple access (OCDMA) scenario. A particular user or service has a varying weight applied to obtain the desired signal quality. The properties of the new code are compared with other OCDMA codes proposed for service differentiation. In addition, a mathematical model is developed for performance evaluation of the proposed code using two different detection techniques, namely direct decoding and complementary subtraction.

  7. Neutron chain length distributions in subcritical systems

    International Nuclear Information System (INIS)

    Nolen, S.D.; Spriggs, G.

    1999-01-01

    In this paper, the authors present the results of the chain-length distribution as a function of k in subcritical systems. These results were obtained from a point Monte Carlo code and a three-dimensional Monte Carlo code, MC++. Based on these results, they then attempt to explain why several of the common neutron noise techniques, such as the Rossi-α and Feynman's variance-to-mean techniques, are difficult to perform in highly subcritical systems using low-efficiency detectors
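
    For context, a minimal sketch of the Feynman variance-to-mean statistic mentioned above: the excess variance Y = var/mean - 1 of gated counts, which vanishes for a purely Poisson (uncorrelated) source. Gate-width dependence and dead-time corrections are ignored in this sketch.

        import numpy as np

        def feynman_y(gated_counts):
            # Y = variance/mean - 1: excess of the count variance over the Poisson expectation
            c = np.asarray(gated_counts, dtype=float)
            return c.var() / c.mean() - 1.0

        rng = np.random.default_rng(1)
        print(feynman_y(rng.poisson(4.0, 100000)))  # ~0 for an uncorrelated (Poisson) source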

  8. Numerical Simulations of Finite-Length Effects in Diocotron Modes

    Science.gov (United States)

    Mason, Grant W.; Spencer, Ross L.

    2000-10-01

    Over a decade ago, Driscoll and Fine [C. F. Driscoll and K. S. Fine, Phys. Fluids B 2(6), 1359 (June 1990)] reported experimental observations of an exponential instability in the self-shielded m=1 diocotron mode for an electron plasma confined in a Malmberg-Penning trap. More recently, Finn et al. [John M. Finn, Diego del-Castillo-Negrete and Daniel C. Barnes, Phys. Plasmas 6(10), 3744 (October 1999)] have given a theoretical explanation of the instability as a finite-length end effect patterned after an analogy to theory for shallow water fluid vortices. However, in a test case selected for comparison, the growth rate in the experiment exceeds the theoretical value by a factor of two. We present results from a two-dimensional, finite-length drift-kinetic code and a fully three-dimensional particle-in-cell code written to explore details of finite-length effects in diocotron modes.

  9. High performance reconciliation for continuous-variable quantum key distribution with LDPC code

    Science.gov (United States)

    Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua

    2015-03-01

    Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the resulting string shared through the quantum channel between two users. However, the efficiency and the speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low-density parity-check (LDPC) codes. The complexity of the proposed algorithm is reduced considerably. By using a graphics processing unit (GPU) device, our method may reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.

  10. Kneser-Hecke-operators in coding theory

    OpenAIRE

    Nebe, Gabriele

    2005-01-01

    The Kneser-Hecke-operator is a linear operator defined on the complex vector space spanned by the equivalence classes of a family of self-dual codes of fixed length. It maps a linear self-dual code $C$ over a finite field to the formal sum of the equivalence classes of those self-dual codes that intersect $C$ in a codimension-1 subspace. The eigenspaces of this self-adjoint linear operator may be described in terms of a coding-theory analogue of the Siegel $\Phi$-operator.

  11. ETFOD: a point model physics code with arbitrary input

    International Nuclear Information System (INIS)

    Rothe, K.E.; Attenberger, S.E.

    1980-06-01

    ETFOD is a zero-dimensional code which solves a set of physics equations by minimization. The technique used differs from the usual approach in that the input is arbitrary. The user is supplied with a set of variables from which he specifies which variables are input (unchanging). The remaining variables become the output. Presently the code is being used for ETF reactor design studies. The code was written in a manner that allows easy modification of equations, variables, and physics calculations. The solution technique is presented along with hints for using the code.

  12. Application of Displacement Height and Surface Roughness Length to Determination Boundary Layer Development Length over Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Xiangju Cheng

    2014-12-01

    One of the most uncertain parameters in stepped spillway design is the length (measured from the crest) of boundary layer development. The normal velocity profiles responding to the steps as bed roughness are investigated in the developing non-aerated flow region. A detailed analysis of the logarithmic vertical velocity profiles on stepped spillways is conducted through experimental data to verify the computational code, and through numerical experiments to expand the data available. To determine the development length, the hydraulic roughness and displacement thickness, along with the shear velocity, are needed. This includes determining the displacement height d and the surface roughness length z0, and the relationship of d and z0 to the step geometry. The results show that the hydraulic roughness height ks is the primary factor on which d and z0 depend. Across different step heights, step widths, discharges and intake Froude numbers, the relations d/ks = 0.22–0.27, z0/ks = 0.06–0.1 and d/z0 = 2.2–4 give a good estimate. Using the computational code and numerical experiments, air inception will occur over stepped spillway flow as long as the Bauer-defined boundary layer thickness is between 0.72 and 0.79.

  13. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater than unity efficiency codes, implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)

  14. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    DEFF Research Database (Denmark)

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ^3 n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.

  15. A highly efficient SDRAM controller supporting variable-length burst access and batch process for discrete reads

    Science.gov (United States)

    Li, Nan; Wang, Junzheng

    2016-03-01

    A highly efficient Synchronous Dynamic Random Access Memory (SDRAM) controller supporting variable-length burst access and batch process for discrete reads is proposed in this paper. Based on the Principle of Locality, command First In First Out (FIFO) and address range detector are designed within this controller to accelerate its responses to discrete read requests, which dramatically improves the average Effective Bus Utilization Ratio (EBUR) of SDRAM. Our controller is finally verified by driving the Micron 256-Mb SDRAM MT48LC16M16A2. Successful simulation and verification results show that our controller exhibits much higher EBUR than do most existing designs in case of discrete reads.

  16. Influence of Code Size Variation on the Performance of 2D Hybrid ZCC/MD in OCDMA System

    Directory of Open Access Journals (Sweden)

    Matem Rima.

    2018-01-01

    Several two-dimensional OCDMA codes have been developed in order to overcome many problems in optical networks: enhancing cardinality, suppressing multiple access interference (MAI) and mitigating phase induced intensity noise (PIIN). This paper proposes a new 2D hybrid ZCC/MD code combining 1D ZCC spectral encoding, with code length M, and 1D MD spatial spreading, with code length N. The spatial code length N offers good cardinality, so according to the numerical results it is the main factor enhancing the performance of the system, compared to the spectral code length M.

  17. Performance Analysis of an Optical CDMA MAC Protocol With Variable-Size Sliding Window

    Science.gov (United States)

    Mohamed, Mohamed Aly A.; Shalaby, Hossam M. H.; Abdel-Moety El-Badawy, El-Sayed

    2006-10-01

    A media access control protocol for optical code-division multiple-access packet networks with variable length data traffic is proposed. This protocol exhibits a sliding window with variable size. A model for interference-level fluctuation and an accurate analysis for channel usage are presented. Both multiple-access interference (MAI) and photodetector's shot noise are considered. Both chip-level and correlation receivers are adopted. The system performance is evaluated using a traditional average system throughput and average delay. Finally, in order to enhance the overall performance, error control codes (ECCs) are applied. The results indicate that the performance can be enhanced to reach its peak using the ECC with an optimum number of correctable errors. Furthermore, chip-level receivers are shown to give much higher performance than that of correlation receivers. Also, it has been shown that MAI is the main source of signal degradation.

  18. Polynomial weights and code constructions

    DEFF Research Database (Denmark)

    Massey, J; Costello, D; Justesen, Jørn

    1973-01-01

    For any nonzero element c of a general finite field GF(q), it is shown that the polynomials (x - c)^i, i = 0, 1, 2, ..., have the "weight-retaining" property that any linear combination of these polynomials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum-degree polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes, ... of long constraint-length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.
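
    A small numerical check of the weight-retaining property over GF(2), where (x - 1)^i = (x + 1)^i; this brute-forces all combinations of the first few powers and is only an illustration, not part of the constructions above.

        import itertools
        import numpy as np

        def x_plus_1_pow(i):
            # coefficient vector of (x + 1)^i over GF(2)
            p = np.array([1], dtype=int)
            for _ in range(i):
                p = np.convolve(p, [1, 1]) % 2
            return p

        powers = [x_plus_1_pow(i) for i in range(6)]
        for coeffs in itertools.product([0, 1], repeat=len(powers)):
            if not any(coeffs):
                continue
            acc = np.zeros(6, dtype=int)
            for c, p in zip(coeffs, powers):
                acc[:len(p)] = (acc[:len(p)] + c * p) % 2
            i_min = min(i for i, c in enumerate(coeffs) if c)
            assert acc.sum() >= powers[i_min].sum()  # weight-retaining property
        print("property holds for all combinations checked")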

  19. Variability and transmission by Aphis glycines of North American and Asian Soybean mosaic virus isolates.

    Science.gov (United States)

    Domier, L L; Latorre, I J; Steinlage, T A; McCoppin, N; Hartman, G L

    2003-10-01

    The variability of North American and Asian strains and isolates of Soybean mosaic virus was investigated. First, polymerase chain reaction (PCR) products representing the coat protein (CP)-coding regions of 38 SMVs were analyzed for restriction fragment length polymorphisms (RFLP). Second, the nucleotide and predicted amino acid sequence variability of the P1-coding region of 18 SMVs and the helper component/protease (HC/Pro) and CP-coding regions of 25 SMVs were assessed. The CP nucleotide and predicted amino acid sequences were the most similar and predicted phylogenetic relationships similar to those obtained from RFLP analysis. Neither RFLP nor sequence analyses of the CP-coding regions grouped the SMVs by geographical origin. The P1 and HC/Pro sequences were more variable and separated the North American and Asian SMV isolates into two groups similar to previously reported differences in pathogenic diversity of the two sets of SMV isolates. The P1 region was the most informative of the three regions analyzed. To assess the biological relevance of the sequence differences in the HC/Pro and CP coding regions, the transmissibility of 14 SMV isolates by Aphis glycines was tested. All field isolates of SMV were transmitted efficiently by A. glycines, but the laboratory isolates analyzed were transmitted poorly. The amino acid sequences from most, but not all, of the poorly transmitted isolates contained mutations in the aphid transmission-associated DAG and/or KLSC amino acid sequence motifs of CP and HC/Pro, respectively.

  20. Partial Encryption of Entropy-Coded Video Compression Using Coupled Chaotic Maps

    Directory of Open Access Journals (Sweden)

    Fadi Almasalha

    2014-10-01

    Due to pervasive communication infrastructures, a plethora of enabling technologies is being developed over mobile and wired networks. Among these, video streaming services over IP are the most challenging in terms of quality, real-time requirements and security. In this paper, we propose a novel scheme to efficiently secure variable-length-coded (VLC) multimedia bit streams, such as H.264. It is based on codeword error diffusion and variable-size segment shuffling. The codeword diffusion and the shuffling mechanisms are based on random operations from a secure and computationally efficient chaos-based pseudo-random number generator. The proposed scheme is transparent to the end users and can be deployed at any node in the network. It provides different levels of security, with the encrypted data volume fluctuating between 5.5% and 17%. It works on the compressed bit stream without requiring any decoding. It provides excellent encryption speeds on different platforms, including mobile devices. It is 200% faster and 150% more power-efficient when compared with AES software-based full encryption schemes. Regarding security, the scheme is robust to well-known attacks in the literature, such as brute-force and known/chosen plaintext attacks.

  1. Squares of Random Linear Codes

    DEFF Research Database (Denmark)

    Cascudo Pueyo, Ignacio; Cramer, Ronald; Mirandola, Diego

    2015-01-01

    Given a linear code $C$, one can define the $d$-th power of $C$ as the span of all componentwise products of $d$ elements of $C$. A power of $C$ may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code ``typically'' fill the whole space? We give a positive answer, for codes of dimension $k$ and length roughly $\frac{1}{2}k^2$ or smaller. Moreover, the convergence speed is exponential if the difference $k(k+1)/2-n$ is at least linear in $k$. The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise...
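
    A quick empirical illustration over GF(2) (a sketch): the componentwise products of pairs of generator rows span the square of the code, so its dimension is the GF(2) rank of those products.

        import numpy as np

        def gf2_rank(vectors):
            # Gaussian elimination over GF(2) using integer bit masks
            basis = {}
            for v in vectors:
                while v:
                    h = v.bit_length() - 1
                    if h not in basis:
                        basis[h] = v
                        break
                    v ^= basis[h]
            return len(basis)

        rng = np.random.default_rng(0)
        k, n = 6, 20                      # n < k(k+1)/2 = 21, so the square can fill the space
        G = rng.integers(0, 2, size=(k, n))
        prods = [(G[i] * G[j]) % 2 for i in range(k) for j in range(i, k)]
        ints = [int("".join(map(str, p)), 2) for p in prods]
        print(gf2_rank(ints), "of", n)    # rank n means the square is the whole space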

  2. Generation of Length Distribution, Length Diagram, Fibrogram, and Statistical Characteristics by Weight of Cotton Blends

    Directory of Open Access Journals (Sweden)

    B. Azzouz

    2007-01-01

    A textile fibre mixture, as a multicomponent blend of variable fibres, raises the question of the proper method to predict the characteristics of the final blend. The length diagram and the fibrogram of cotton are generated. Then the length distribution, the length diagram, and the fibrogram of a blend of different categories of cotton are determined. The length distributions by weight of five different categories of cotton (Egyptian, USA (Pima), Brazilian, USA (Upland), and Uzbekistani) are measured by AFIS. From these distributions, the length distribution, the length diagram, and the fibrogram by weight of four binary blends are expressed. The length parameters of these cotton blends are calculated and their variations are plotted against the mass fraction x of one component in the blend. These calculated parameters are compared to those of real blends. Finally, the selection of the optimal blends using the linear programming method, based on the hypothesis that the cotton blend parameters vary linearly as a function of the component ratios, is proved insufficient.
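
    A minimal sketch of the underlying blending assumption, namely that the by-weight length distribution of a binary blend is the convex combination of the component distributions (the linearity hypothesis that the paper ultimately finds insufficient for selecting optimal blends); the histograms below are illustrative.

        import numpy as np

        def blend_distribution(f1, f2, x):
            # by-weight length distribution of a binary blend, mass fraction x of component 1
            f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
            return x * f1 + (1.0 - x) * f2

        f_egyptian = np.array([0.1, 0.2, 0.4, 0.3])   # toy histograms over shared length bins
        f_upland = np.array([0.3, 0.4, 0.2, 0.1])
        print(blend_distribution(f_egyptian, f_upland, 0.25))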

  3. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. Which coder is selected to code any given image region is decided through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  4. New nonbinary quantum codes with larger distance constructed from BCH codes over 𝔽_{q^2}

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Fu, Qiang; Ma, Yuena; Guo, Luobin

    2017-03-01

    This paper concentrates on the construction of new nonbinary quantum error-correcting codes (QECCs) from three classes of narrow-sense imprimitive BCH codes over the finite field 𝔽_{q^2} (q ≥ 3 an odd prime power). By a careful analysis of the properties of cyclotomic cosets in the defining set T of these BCH codes, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing BCH codes is determined to be much larger than the result given by Aly et al. [S. A. Aly, A. Klappenecker and P. K. Sarvepalli, IEEE Trans. Inf. Theory 53, 1183 (2007)] for each different code length. Thus families of new nonbinary QECCs are constructed, and the newly obtained QECCs have larger distance than those in the previous literature.

  5. Generalized optical code construction for enhanced and Modified Double Weight like codes without mapping for SAC-OCDMA systems

    Science.gov (United States)

    Kumawat, Soma; Ravi Kumar, M.

    2016-07-01

    The Double Weight (DW) code family is one of the coding schemes proposed for Spectral Amplitude Coding-Optical Code Division Multiple Access (SAC-OCDMA) systems. The Modified Double Weight (MDW) code for even weights and the Enhanced Double Weight (EDW) code for odd weights are two algorithms extending the use of the DW code for SAC-OCDMA systems. The above-mentioned codes use a mapping technique to provide codes for higher numbers of users. A new generalized algorithm to construct EDW- and MDW-like codes without mapping, for any weight greater than 2, is proposed. A single code construction algorithm gives the same length increment, Bit Error Rate (BER) calculation and other properties for all weights greater than 2. The algorithm first constructs a generalized basic matrix which is repeated in a different way to produce the codes for all users (different from mapping). The generalized code is analysed for BER using balanced detection and direct detection techniques.

  6. Methods and computer codes for probabilistic sensitivity and uncertainty analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1985-01-01

    This paper describes the methods and applications experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of most important input variables of a code that has many (tens, hundreds) input variables with uncertainties, and do this without relying on judgment or exhaustive sensitivity studies. Purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other, e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has first been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables

  7. Synthesizer for decoding a coded short wave length irradiation

    International Nuclear Information System (INIS)

    1976-01-01

    The system uses a point irradiation source, typically an X-ray emitter, which illuminates a three-dimensional object consisting of a set of parallel planes, each of which acts as a source of coded information. The secondary source images are superimposed on a common flat screen. The decoding system comprises an input light-screen detector, a picture screen amplifier, a beam deflector, an output picture screen, an optical focussing unit including three lenses, a masking unit, an output light-screen detector and a video signal reproduction unit in cathode ray tube form, or similar, to create a three-dimensional image of the object. (G.C.)

  8. Performance Analysis of New Binary User Codes for DS-CDMA Communication

    Science.gov (United States)

    Usha, Kamle; Jaya Sankar, Kottareddygari

    2016-03-01

    This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over an additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using Gray and inverse Gray codes. In this paper, an n-bit Gray code appended with its n-bit inverse Gray code to construct 2n-length binary user codes is discussed. Like Walsh codes, these binary user codes are available in sizes of powers of two; additionally, code sets of length 6 and its even multiples are also available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and Gold codes are considered for comparison in this paper, as these are popularly used for synchronous and asynchronous multi-user communications, respectively. In the current work the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and Gold codes. Performance of the proposed binary user codes for both synchronous and asynchronous direct sequence CDMA communication over an AWGN channel is also discussed in this paper. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
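
    A sketch of one plausible reading of the construction (the exact bit ordering and the authors' definition of the inverse Gray code may differ): for index i, the 2n-bit user code is the n-bit Gray encoding of i followed by the n-bit inverse-Gray mapping of i.

        def bits(v, n):
            # n-bit big-endian bit list of v
            return [(v >> (n - 1 - b)) & 1 for b in range(n)]

        def gray(i):
            return i ^ (i >> 1)

        def inverse_gray(g):
            # invert the Gray map by a prefix-XOR of the bits
            m = g >> 1
            while m:
                g ^= m
                m >>= 1
            return g

        def user_code(i, n):
            # 2n-length binary user code: Gray(i) followed by inverse-Gray(i)
            return bits(gray(i), n) + bits(inverse_gray(i), n)

        for i in range(8):
            print(user_code(i, 3))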

  9. Development of new two-dimensional spectral/spatial code based on dynamic cyclic shift code for OCDMA system

    Science.gov (United States)

    Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria

    2017-07-01

    In this paper, a new family of two-dimensional spectral/spatial codes, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for the spectral and spatial coding. The proposed system can fully eliminate the multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise and thermal noise are used to analyze the code performance. In comparison with existing two-dimensional (2D) codes, such as the 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW) and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that our proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of our 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.

  10. Structured LDPC Codes over Integer Residue Rings

    Directory of Open Access Journals (Sweden)

    Mo Elisa

    2008-01-01

    This paper presents a new class of low-density parity-check (LDPC) codes over ℤ_{2^a} represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and, more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive-white-Gaussian-noise channel, can outperform their random counterparts of similar length and rate.

  11. Evaluation of large girth LDPC codes for PMD compensation by turbo equalization.

    Science.gov (United States)

    Minkov, Lyubomir L; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Kueppers, Franko

    2008-08-18

    Large-girth quasi-cyclic LDPC codes have been experimentally evaluated for use in PMD compensation by turbo equalization for a 10 Gb/s NRZ optical transmission system, and observing one sample per bit. Net effective coding gain improvement for girth-10, rate 0.906 code of length 11936 over maximum a posteriori probability (MAP) detector for differential group delay of 125 ps is 6.25 dB at BER of 10(-6). Girth-10 LDPC code of rate 0.8 outperforms the girth-10 code of rate 0.906 by 2.75 dB, and provides the net effective coding gain improvement of 9 dB at the same BER. It is experimentally determined that girth-10 LDPC codes of length around 15000 approach channel capacity limit within 1.25 dB.

  12. High-radix transforms for Reed-Solomon codes over Fermat primes

    Science.gov (United States)

    Liu, K. Y.; Reed, I. S.; Truong, T. K.

    1977-01-01

    A method is proposed to streamline the transform decoding algorithm for Reed-Solomon (RS) codes of length 2^(2^n). It is shown that a high-radix fast Fourier transform (FFT) type algorithm with generator equal to 3 over GF(F_n), where F_n is a Fermat prime, can be used to decode RS codes of this length. For a 256-symbol RS code, a radix-4 and a radix-16 FFT over GF(F_3) require, respectively, 30 and 70% fewer modulo-F_n multiplications than the usual radix-2 FFT.
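
    For illustration, here is a radix-2 number-theoretic transform over GF(257) = GF(F_3) with generator 3, as in the abstract; the paper's contribution is the higher-radix (4 and 16) variants that need fewer modular multiplications, which this sketch does not implement.

        MOD = 257                      # Fermat prime F_3 = 2^(2^3) + 1
        GEN = 3                        # 3 is a primitive root modulo every Fermat prime > 3

        def ntt(a, invert=False):
            # radix-2 Cooley-Tukey transform over GF(257); len(a) must be a power of 2 dividing 256
            a = list(a)
            n = len(a)
            root = pow(GEN, 256 // n, MOD)          # primitive n-th root of unity
            if invert:
                root = pow(root, MOD - 2, MOD)
            j = 0                                   # bit-reversal permutation
            for i in range(1, n):
                bit = n >> 1
                while j & bit:
                    j ^= bit
                    bit >>= 1
                j |= bit
                if i < j:
                    a[i], a[j] = a[j], a[i]
            length = 2
            while length <= n:
                wlen = pow(root, n // length, MOD)
                for start in range(0, n, length):
                    w = 1
                    for k in range(start, start + length // 2):
                        u = a[k]
                        v = a[k + length // 2] * w % MOD
                        a[k] = (u + v) % MOD
                        a[k + length // 2] = (u - v) % MOD
                        w = w * wlen % MOD
                length <<= 1
            if invert:
                n_inv = pow(n, MOD - 2, MOD)
                a = [x * n_inv % MOD for x in a]
            return a

        x = [3, 1, 4, 1, 5, 9, 2, 6]
        assert ntt(ntt(x), invert=True) == x        # round trip recovers the input
        print(ntt(x))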

  13. Coding chaotic billiards. Pt. 3

    International Nuclear Information System (INIS)

    Ullmo, D.; Giannoni, M.J.

    1993-01-01

    A non-tiling compact billiard defined on the pseudosphere is studied 'à la Morse coding'. As for most bounded systems, the coding is not exact. However, two sets of approximate grammar rules can be obtained, one specifying forbidden codes, and the other allowed ones. In between, some sequences remain in the 'unknown' zone, but their relative amount can be reduced to zero if one lets the length of the approximate grammar rules go to infinity. The relationship between these approximate grammar rules and the 'pruning front' introduced by Cvitanovic et al. is discussed. (authors). 13 refs., 10 figs., 1 tab

  14. Balanced Reed-Solomon codes for all parameters

    KAUST Repository

    Halbawi, Wael; Liu, Zihan; Hassibi, Babak

    2016-01-01

    We construct balanced and sparsest generator matrices for cyclic Reed-Solomon codes with any length n and dimension k. By sparsest, we mean that each row has the least possible number of nonzeros, while balanced means that the number of nonzeros in any two columns differs by at most one. Codes allowing such encoding schemes are useful in distributed settings where computational load-balancing is critical. The problem was first studied by Dau et al. who showed, using probabilistic arguments, that there always exists an MDS code over a sufficiently large field such that its generator matrix is both sparsest and balanced. Motivated by the need for an explicit construction with efficient decoding, the authors of the current paper showed that the generator matrix of a cyclic Reed-Solomon code of length n and dimension k can always be transformed to one that is both sparsest and balanced, when n and k are such that k/n (n-k+1) is an integer. In this paper, we lift this condition and construct balanced and sparsest generator matrices for cyclic Reed-Solomon codes for any set of parameters.

  16. Recent advances in coding theory for near error-free communications

    Science.gov (United States)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  17. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders...... are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  18. Statistical identification of effective input variables

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1982-09-01

    A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in the order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. Its efficiency has been demonstrated by several examples, and it has been applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications.
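
    The first stage of such a screening, ranking inputs by their correlation with the output over a modest number of code runs, can be illustrated in a few lines of Python (a toy stand-in for SCREEN's stagewise correlation and regression analysis, with made-up data):

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.uniform(size=(40, 6))            # 40 code runs, 6 input variables
    y = 3.0 * X[:, 0] + 0.5 * X[:, 3] ** 2 \
        + 0.1 * rng.standard_normal(40)      # output; inputs 0 and 3 matter

    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    print(np.argsort(corr)[::-1])            # sensitivity-importance ranking;
                                             # inputs 0 and 3 should head the list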

  19. Tandem Mirror Reactor Systems Code (Version I)

    International Nuclear Information System (INIS)

    Reid, R.L.; Finn, P.A.; Gohar, M.Y.

    1985-09-01

    A computer code was developed to model a Tandem Mirror Reactor. This is the first Tandem Mirror Reactor model to couple, in detail, the highly linked physics, magnetics, and neutronic analysis into a single code. This report describes the code architecture, provides a summary description of the modules comprising the code, and includes an example execution of the Tandem Mirror Reactor Systems Code. Results from this code for two sensitivity studies are also included. These studies are: (1) to determine the impact of center cell plasma radius, length, and ion temperature on reactor cost and performance at constant fusion power; and (2) to determine the impact of reactor power level on cost.

  20. CANAL code

    International Nuclear Information System (INIS)

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils.

  1. Structured LDPC Codes over Integer Residue Rings

    Directory of Open Access Journals (Sweden)

    Marc A. Armand

    2008-07-01

    Full Text Available This paper presents a new class of low-density parity-check (LDPC) codes over ℤ_{2^a} represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive-white-Gaussian-noise channel, can outperform their random counterparts of similar length and rate.

  2. Efficient Coding of Information: Huffman Coding -RE ...

    Indian Academy of Sciences (India)

    to a stream of equally-likely symbols so as to recover the original stream in the event of errors. The for- ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x written as ...
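
    The snippets above are truncated, but the construction they point to is the classic one: repeatedly merge the two least probable subtrees, prefixing 0/1 to the codewords on each side. A minimal textbook Huffman coder in Python (ours, not from the article):

    import heapq

    def huffman_code(probs):
        """Prefix-free Huffman code from {symbol: probability}."""
        heap = [[p, i, {s: ""}] for i, (s, p) in enumerate(probs.items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            p0, _, c0 = heapq.heappop(heap)    # two least probable subtrees
            p1, _, c1 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c0.items()}
            merged.update({s: "1" + w for s, w in c1.items()})
            heapq.heappush(heap, [p0 + p1, count, merged])
            count += 1
        return heap[0][2]

    print(huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
    # -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'}: 1.75 bits/symbol, optimal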

  3. Length dependent properties of SNS microbridges

    International Nuclear Information System (INIS)

    Sauvageau, J.E.; Jain, R.K.; Li, K.; Lukens, J.E.; Ono, R.H.

    1985-01-01

    Using an in-situ, self-aligned deposition scheme, arrays of variable length SNS junctions in the range of 0.05 μm to 1 μm have been fabricated. Arrays of SNS microbridges of lead-copper and niobium-copper fabricated using this technique have been used to study the length dependence, at constant temperature, of the critical current I_c and bridge resistance R_d. For bridges with lengths L greater than the normal-metal coherence length ξ_n(T), the dependence of I_c on L is consistent with an exponential dependence on the reduced length l = L/ξ_n(T). For shorter bridges, deviations from this behavior are seen. It was also found that the bridge resistance R_d does not vary linearly with the geometric bridge length but appears to approach a finite value as L → 0.

  4. Cycle length maximization in PWRs using empirical core models

    International Nuclear Information System (INIS)

    Okafor, K.C.; Aldemir, T.

    1987-01-01

    The problem of maximizing cycle length in nuclear reactors through optimal fuel and poison management has been addressed by many investigators. An often-used neutronic modeling technique is to find correlations between the state and control variables to describe the response of the core to changes in the control variables. In this study, a set of linear correlations, generated by two-dimensional diffusion-depletion calculations, is used to find the enrichment distribution that maximizes cycle length for the initial core of a pressurized water reactor (PWR). These correlations (a) incorporate the effect of composition changes in all the control zones on a given fuel assembly and (b) are valid for a given range of control variables. The advantage of using such correlations is that the cycle length maximization problem can be reduced to a linear programming problem
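
    As a schematic of the reduction, the sketch below poses a tiny cycle-length maximization as a linear program with scipy; the objective and constraint coefficients are invented placeholders, standing in for the correlations that the diffusion-depletion calculations would supply:

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([-120.0, -90.0, -60.0])   # negated: linprog minimizes;
                                           # cycle-length gain per unit enrichment
    A = np.array([[1.0, 1.0, 1.0],         # e.g. average-enrichment limit
                  [2.0, 1.0, 0.5]])        # e.g. a peaking-factor surrogate
    b = np.array([9.0, 7.5])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(1.8, 4.2)] * 3)
    print(res.x, -res.fun)                 # zone enrichments, cycle-length objective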

  5. Rn3D: A finite element code for simulating gas flow and radon transport in variably saturated, nonisothermal porous media

    International Nuclear Information System (INIS)

    Holford, D.J.

    1994-01-01

    This document is a user's manual for the Rn3D finite element code. Rn3D was developed to simulate gas flow and radon transport in variably saturated, nonisothermal porous media. The Rn3D model is applicable to a wide range of problems involving radon transport in soil because it can simulate either steady-state or transient flow and transport in one-, two- or three-dimensions (including radially symmetric two-dimensional problems). The porous materials may be heterogeneous and anisotropic. This manual describes all pertinent mathematics related to the governing, boundary, and constitutive equations of the model, as well as the development of the finite element equations used in the code. Instructions are given for constructing Rn3D input files and executing the code, as well as a description of all output files generated by the code. Five verification problems are given that test various aspects of code operation, complete with example input files, FORTRAN programs for the respective analytical solutions, and plots of model results. An example simulation is presented to illustrate the type of problem Rn3D is designed to solve. Finally, instructions are given on how to convert Rn3D to simulate systems other than radon, air, and water

  6. On the total number of genes and their length distribution in complete microbial genomes

    DEFF Research Database (Denmark)

    Skovgaard, M; Jensen, L J; Brunak, S

    2001-01-01

    In sequenced microbial genomes, some of the annotated genes are actually not protein-coding genes, but rather open reading frames that occur by chance. Therefore, the number of annotated genes is higher than the actual number of genes for most of these microbes. Comparison of the length distribution of the annotated genes with the length distribution of those matching a known protein reveals that too many short genes are annotated in many genomes. Here we estimate the true number of protein-coding genes for sequenced genomes. Although it is often claimed that Escherichia coli has about 4300 genes, we show that it probably has only approximately 3800 genes, and that a similar discrepancy exists for almost all published genomes.

  7. Utility of telomere length measurements for age determination of humpback whales

    Directory of Open Access Journals (Sweden)

    Morten Tange Olsen

    2014-12-01

    Full Text Available This study examines the applicability of telomere length measurements by quantitative PCR as a tool for minimally invasive age determination of free-ranging cetaceans. We analysed telomere length in skin samples from 28 North Atlantic humpback whales (Megaptera novaeangliae), ranging from 0 to 26 years of age. The results suggested a significant correlation between telomere length and age in humpback whales. However, telomere length was highly variable among individuals of similar age, suggesting that telomere length measured by quantitative PCR is an imprecise determinant of age in humpback whales. The observed variation in individual telomere length was found to be a function of both experimental and biological variability, with the latter perhaps reflecting patterns of inheritance, resource allocation trade-offs, and stochasticity of the marine environment.

  8. Low Complexity Tail-Biting Trellises for Some Extremal Self-Dual Codes

    OpenAIRE

    Olocco , Grégory; Otmani , Ayoub

    2002-01-01

    International audience; We obtain low complexity tail-biting trellises for some extremal self-dual codes for various lengths and fields such as the [12,6,6] ternary Golay code and a [24,12,8] Hermitian self-dual code over GF(4). These codes are obtained from a particular family of cyclic Tanner graphs called necklace factor graphs.

  9. Performance of FSO-OFDM based on BCH code

    Directory of Open Access Journals (Sweden)

    Jiao Xiao-lu

    2016-01-01

    Full Text Available As contrasted with the traditional OOK (on-off keying) system, the FSO-OFDM system can resist atmospheric scattering and improve the spectrum utilization rate effectively. Due to the instability of the atmospheric channel, the system will be affected by various factors, resulting in a high BER. The BCH code has a good error correcting ability, particularly at short and medium code lengths, and its performance is close to the theoretical value. It can not only detect burst errors but also correct random errors. Therefore, the BCH code is applied to the system to reduce the system BER. Finally, a semi-physical simulation has been conducted with MATLAB. The simulation results show that when the BER is 10^-2, the performance of OFDM is superior by 4 dB compared with OOK. In different weather conditions (extension rain, advection fog, dust days), when the BER is 10^-5, the performance of BCH (255,191) channel coding is superior by 4~5 dB compared with the uncoded system. All in all, OFDM technology and the BCH code can reduce the system BER.

  10. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  11. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  12. Ultrafast all-optical code-division multiple-access networks

    Science.gov (United States)

    Kwong, Wing C.; Prucnal, Paul R.; Liu, Yanming

    1992-12-01

    In optical code-division multiple access (CDMA), the architecture of optical encoders/decoders is another important factor that needs to be considered, besides the correlation properties of those already extensively studied optical codes. The architecture of optical encoders/decoders affects, for example, the amount of power loss and length of optical delays that are associated with code sequence generation and correlation, which, in turn, affect the power budget, size, and cost of an optical CDMA system. Various CDMA coding architectures are studied in the paper. In contrast to the encoders/decoders used in prime networks (i.e., prime encoders/decoders), which generate, select, and correlate code sequences by a parallel combination of fiber-optic delay-lines, and in 2^n networks (i.e., 2^n encoders/decoders), which generate and correlate code sequences by a serial combination of 2 × 2 passive couplers and fiber delays with sequence selection performed in a parallel fashion, the modified 2^n encoders/decoders generate, select, and correlate code sequences by a serial combination of directional couplers and delays. The power and delay-length requirements of the modified 2^n encoders/decoders are compared to those of the prime and 2^n encoders/decoders. A 100 Mbit/s optical CDMA experiment in free space demonstrating the feasibility of the all-serial coding architecture using a serial combination of 50/50 beam splitters and retroreflectors at 10 Tchip/s (i.e., 100,000 chip/bit) with 100 fs laser pulses is reported.

  13. System verification and validation report for the TMAD code

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    1995-01-01

    This document serves as the Verification and Validation Report for the TMAD code system, which includes the TMAD code and the LIBMAKR code. The TMAD code was commissioned to facilitate the interpretation of moisture probe measurements in the Hanford Site waste tanks. In principle, the code is an interpolation routine that acts over a library of benchmark data based on two independent variables, typically anomaly size and moisture content. Two additional variables, anomaly type and detector type, can also be considered independent variables, but no interpolation is done over them. The dependent variable is detector response. The intent is to provide the code with measured detector responses from two or more detectors. The code will then interrogate (and interpolate upon) the benchmark data library and find the anomaly-type/anomaly-size/moisture-content combination that provides the closest match to the measured data. The primary purpose of this document is to provide the results of the system testing and the conclusions based thereon. The results of the testing process are documented in the body of the report. Appendix A gives the test plan, including test procedures, used in conducting the tests. Appendix B lists the input data required to conduct the tests, and Appendices C and D list the numerical results of the tests.
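
    The core lookup idea, matching measured detector responses against a benchmark library indexed by anomaly size and moisture content, reduces to a least-squares search. A toy sketch (hypothetical numbers, nearest-neighbour only; TMAD's actual interpolation between library entries is not reproduced):

    import numpy as np

    # (anomaly_size, moisture_content) -> benchmark responses of two detectors
    library = {
        (2.0, 0.05): np.array([10.1, 3.2]),
        (2.0, 0.10): np.array([12.4, 4.1]),
        (4.0, 0.05): np.array([ 9.0, 5.5]),
        (4.0, 0.10): np.array([11.2, 6.3]),
    }

    def closest_match(measured):
        """Benchmark entry whose responses best fit the measurements."""
        return min(library, key=lambda k: np.sum((library[k] - measured) ** 2))

    print(closest_match(np.array([12.0, 4.0])))   # -> (2.0, 0.1)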

  14. Paracantor: A two group, two region reactor code

    Energy Technology Data Exchange (ETDEWEB)

    Stone, Stuart

    1956-07-01

    Paracantor I is a two-energy-group, two-region, time-independent reactor code which obtains a closed solution for a critical reactor assembly. The code deals with cylindrical reactors of finite length and with a radial reflector of finite thickness. It is programmed for the I.B.M. Magnetic Drum Data-Processing Machine, Type 650. The limited memory space available does not permit a flux solution to be included in the basic Paracantor code. A supplementary code, Paracantor II, has been programmed which computes fluxes, including adjoint fluxes, from the output of Paracantor I.

  15. Binary codes with impulse autocorrelation functions for dynamic experiments

    International Nuclear Information System (INIS)

    Corran, E.R.; Cummins, J.D.

    1962-09-01

    A series of binary codes exist which have autocorrelation functions approximating to an impulse function. Signals whose behaviour in time can be expressed by such codes have spectra which are 'whiter' over a limited bandwidth and for a finite time than signals from a white noise generator. These codes are used to determine system dynamic responses using the correlation technique. Programmes have been written to compute codes of arbitrary length and to compute 'cyclic' autocorrelation and cross-correlation functions. Complete listings of these programmes are given, and a code of 1019 bits is presented. (author)
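
    The impulse-like property is easy to verify numerically. A minimal sketch (ours) computing the cyclic autocorrelation of a length-7 maximal-length sequence, which equals 7 at zero lag and -1 at every other lag:

    import numpy as np

    def cyclic_autocorrelation(code):
        """Cyclic autocorrelation of a +/-1 sequence via the FFT."""
        spectrum = np.fft.fft(code)
        return np.real(np.fft.ifft(spectrum * np.conj(spectrum)))

    code = np.array([1, 1, 1, -1, 1, -1, -1])      # 7-bit m-sequence, 0/1 -> +/-1
    print(np.round(cyclic_autocorrelation(code)))  # [ 7. -1. -1. -1. -1. -1. -1.]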

  16. Weight Distribution for Non-binary Cluster LDPC Code Ensemble

    Science.gov (United States)

    Nozaki, Takayuki; Maehara, Masaki; Kasai, Kenta; Sakaniwa, Kohichi

    In this paper, we derive the average weight distributions for the irregular non-binary cluster low-density parity-check (LDPC) code ensembles. Moreover, we give the exponential growth rate of the average weight distribution in the limit of large code length. We show that there exist (2, d_c)-regular non-binary cluster LDPC code ensembles whose normalized typical minimum distances are strictly positive.

  17. TERRA Expression Levels Do Not Correlate With Telomere Length and Radiation Sensitivity in Human Cancer Cell Lines

    Directory of Open Access Journals (Sweden)

    Alexandra eSmirnova

    2013-05-01

    Full Text Available Mammalian telomeres are transcribed into long non-coding telomeric RNA molecules (TERRA) that seem to play a role in the maintenance of telomere stability. In human cells, CpG island promoters drive TERRA transcription and are regulated by methylation. It was suggested that the amount of TERRA may be related to telomere length. To test this hypothesis we measured telomere length and TERRA levels in single clones isolated from five human cell lines: HeLa (cervical carcinoma), BRC-230 (breast cancer), AKG and GK2 (gastric cancers) and GM847 (SV40-immortalized skin fibroblasts). We observed great clonal heterogeneity both in TRF (Terminal Restriction Fragment) length and in TERRA levels. However, these two parameters did not correlate with each other. Moreover, cell survival to γ-rays did not show a significant variation among the clones, suggesting that, in this cellular system, the intra-population variability in telomere length and TERRA levels does not influence sensitivity to ionizing radiation. This conclusion was supported by the observation that in a cell line in which telomeres were greatly elongated by the ectopic expression of telomerase, TERRA expression levels and radiation sensitivity were similar to the parental HeLa cell line.

  18. Analysis of Iterated Hard Decision Decoding of Product Codes with Reed-Solomon Component Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2007-01-01

    Products of Reed-Solomon codes are important in applications because they offer a combination of large blocks, low decoding complexity, and good performance. A recent result on random graphs can be used to show that with high probability a large number of errors can be corrected by iterating minimum distance decoding. We present an analysis related to density evolution which gives the exact asymptotic value of the decoding threshold and also provides a closed form approximation to the distribution of errors in each step of the decoding of finite length codes....

  19. An Amplitude Spectral Capon Estimator with a Variable Filter Length

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Smaragdis, Paris; Christensen, Mads Græsbøll

    2012-01-01

    The filter bank methods have been a popular non-parametric way of computing the complex amplitude spectrum. So far, the length of the filters in these filter banks has been set to some constant value independently of the data. In this paper, we take the first step towards considering the filter...
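
    For reference, the fixed-filter-length Capon estimate that serves as the baseline here can be computed directly from the sample covariance of overlapping snapshots. A numpy sketch of the classic power-spectral form (our illustration; the paper's amplitude estimator and its data-dependent filter length are not reproduced):

    import numpy as np

    def capon_spectrum(x, m, n_grid=512):
        """Classic Capon (MVDR) spectrum with fixed filter length m."""
        snaps = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        R = snaps.conj().T @ snaps / len(snaps)        # sample covariance
        Rinv = np.linalg.inv(R + 1e-6 * np.eye(m))     # light diagonal loading
        w = 2 * np.pi * np.arange(n_grid) / n_grid
        A = np.exp(1j * np.outer(np.arange(m), w))     # steering vectors
        return 1.0 / np.real(np.einsum("if,ij,jf->f", A.conj(), Rinv, A))

    rng = np.random.default_rng(0)
    t = np.arange(256)
    x = np.exp(2j * np.pi * 0.2 * t) + 0.1 * (rng.standard_normal(256)
                                              + 1j * rng.standard_normal(256))
    print(np.argmax(capon_spectrum(x, m=20)) / 512)    # peak near 0.2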

  20. COMPARATIVE ANALYSIS OF THE METHODS FOR EVALUATING THE EFFECTIVE LENGTH OF COLUMNS

    OpenAIRE

    Paschal Chimeremeze Chiadighikaobi

    2017-01-01

    This article looks into the effective length of columns using different methods. The codes in use in this article are those from the AISC (American Institute of Steel Construction) and from AS 4100 (the Australian steel code). A conclusion was drawn after investigating a frame using three different methods. Solved Exercise 6 (LeMessurier Method) was investigated using the same frame but different dimensions. Further analysis and investigation will be done using Java codes to analyze the frames.

  1. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    Science.gov (United States)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length and a corresponding decoding scheme is proposed. The RA-MLC scheme combines multilevel coding and modulation technology with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span equal to 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22-rate adaptation without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.

  2. Orthopedics coding and funding.

    Science.gov (United States)

    Baron, S; Duclos, C; Thoreux, P

    2014-02-01

    The French tarification à l'activité (T2A) prospective payment system is a financial system in which a health-care institution's resources are based on performed activity. Activity is described via the PMSI medical information system (programme de médicalisation du système d'information). The PMSI classifies hospital cases by clinical and economic categories known as diagnosis-related groups (DRG), each with an associated price tag. Coding a hospital case involves giving as realistic a description as possible so as to categorize it in the right DRG and thus ensure appropriate payment. For this, it is essential to understand what determines the pricing of inpatient stay: namely, the code for the surgical procedure, the patient's principal diagnosis (reason for admission), codes for comorbidities (everything that adds to management burden), and the management of the length of inpatient stay. The PMSI is used to analyze the institution's activity and dynamism: change on previous year, relation to target, and comparison with competing institutions based on indicators such as the mean length of stay performance indicator (MLS PI). The T2A system improves overall care efficiency. Quality of care, however, is not presently taken account of in the payment made to the institution, as there are no indicators for this; work needs to be done on this topic. Copyright © 2014. Published by Elsevier Masson SAS.

  3. Linking CATHENA with other computer codes through a remote process

    Energy Technology Data Exchange (ETDEWEB)

    Vasic, A.; Hanna, B.N.; Waddington, G.M. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Sabourin, G. [Atomic Energy of Canada Limited, Montreal, Quebec (Canada); Girard, R. [Hydro-Quebec, Montreal, Quebec (Canada)

    2005-07-01

    starts, ends, controls, receives boundary conditions from CATHENA, calls ELOCA-IST subroutines for computation and sends feedback to CATHENA through PVM calls. The benefit of this dynamic link is that CATHENA's GENeralized Heat Transfer Package (GENHTP) is replaced with a specialized detailed model for CANDU fuel elements. The stand-alone plant conTROL Gentilly-2 (TROLG2) program, developed jointly by AECL and Hydro-Quebec, simulates the control system of the Gentilly-2 generating station operated by Hydro Quebec. The dynamic link with a CATHENA plant idealization couples the thermalhydraulic reactor behavior to reactor control system behavior of the Gentilly-2 generating station plant during transient conditions. CATHENA can perform simulations of CANDU channels by dynamically linking with one or more ELOCA driver programs. Each link to an independent instance of the ELOCA driver program is associated with one fuel element having up to 20 axial nodes (current ELOCA-IST limit) and one circumferential segment. Figure 1 in the full paper shows graphically the data transfers involved in the connection between the CATHENA and ELOCA driver through the PVM interface. Variables transferred from CATHENA to ELOCA-IST at each time step are: number of axial segments; number of circumferential segments (currently one only); coolant pressure; coolant temperature; sheath-to-coolant heat transfer coefficient; thermal radiation heat flux; and, power fraction. Variables that are returned for each axial segment from ELOCA-IST are: fuel sheath temperature; fuel element outer diameter; and, fuel length. CATHENA linked with ELOCA through PVM allows independent development of separate codes and achieves direct coupling during execution ensuring convergence between the codes. This coupling also eliminates the preparation and conversion of data transfer necessary between the codes by an analyst. This coupling process saves analyst time while reducing the possibility of inadvertent errors

  4. Linking CATHENA with other computer codes through a remote process

    International Nuclear Information System (INIS)

    Vasic, A.; Hanna, B.N.; Waddington, G.M.; Sabourin, G.; Girard, R.

    2005-01-01

    , controls, receives boundary conditions from CATHENA, calls ELOCA-IST subroutines for computation and sends feedback to CATHENA through PVM calls. The benefit of this dynamic link is that CATHENA's GENeralized Heat Transfer Package (GENHTP) is replaced with a specialized detailed model for CANDU fuel elements. The stand-alone plant conTROL Gentilly-2 (TROLG2) program, developed jointly by AECL and Hydro-Quebec, simulates the control system of the Gentilly-2 generating station operated by Hydro Quebec. The dynamic link with a CATHENA plant idealization couples the thermalhydraulic reactor behavior to reactor control system behavior of the Gentilly-2 generating station plant during transient conditions. CATHENA can perform simulations of CANDU channels by dynamically linking with one or more ELOCA driver programs. Each link to an independent instance of the ELOCA driver program is associated with one fuel element having up to 20 axial nodes (current ELOCA-IST limit) and one circumferential segment. Figure 1 in the full paper shows graphically the data transfers involved in the connection between the CATHENA and ELOCA driver through the PVM interface. Variables transferred from CATHENA to ELOCA-IST at each time step are: number of axial segments; number of circumferential segments (currently one only); coolant pressure; coolant temperature; sheath-to-coolant heat transfer coefficient; thermal radiation heat flux; and, power fraction. Variables that are returned for each axial segment from ELOCA-IST are: fuel sheath temperature; fuel element outer diameter; and, fuel length. CATHENA linked with ELOCA through PVM allows independent development of separate codes and achieves direct coupling during execution ensuring convergence between the codes. This coupling also eliminates the preparation and conversion of data transfer necessary between the codes by an analyst. This coupling process saves analyst time while reducing the possibility of inadvertent errors and additionally

  5. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.
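
    The counting argument is simple enough to evaluate directly: distinct correctable error patterns can be no more numerous than distinct syndromes. A sketch (our illustration of the idea, not the paper's exact bound):

    from math import comb

    def syndrome_counting_bound(n_bits, r_bits):
        """Largest t with sum_{i<=t} C(n_bits, i) <= 2**r_bits, i.e. the
        most errors for which every pattern in an n_bits window could
        still map to a distinct r_bits syndrome."""
        total, t = 1, 0                    # the weight-0 pattern
        while total + comb(n_bits, t + 1) <= 2 ** r_bits:
            t += 1
            total += comb(n_bits, t)
        return t

    # Rate-1/2 code, window of 12 trellis sections: 24 coded bits,
    # 12 syndrome bits -> at most 3 errors correctable in the window.
    print(syndrome_counting_bound(24, 12))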

  6. Construction of Capacity Achieving Lattice Gaussian Codes

    KAUST Repository

    Alghamdi, Wael

    2016-04-01

    We propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3].

  7. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code. ANIMAL's physical model also appears. Formulated are temporal and spatial finite-difference equations in a manner that facilitates implementation of the algorithm. Outlined are the functions of the algorithm's FORTRAN subroutines and variables

  8. Bit-Wise Arithmetic Coding For Compression Of Data

    Science.gov (United States)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is data-compression scheme intended especially for use with uniformly quantized data from source with Gaussian, Laplacian, or similar probability distribution function. Code words of fixed length, and bits treated as being independent. Scheme serves as means of progressive transmission or of overcoming buffer-overflow or rate constraint limitations sometimes arising when data compression used.

  9. Bit-wise arithmetic coding for data compression

    Science.gov (United States)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
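
    The key premise, treating the codeword bits of a quantized source as independent, can be checked empirically. The sketch below (arbitrary step size and offset, our illustration) quantizes Gaussian samples to 8-bit codewords and sums the per-bit-position binary entropies, the ideal rate of a coder that compresses each bit position independently:

    import numpy as np

    rng = np.random.default_rng(0)
    samples = rng.normal(0.0, 1.0, 100_000)
    q = np.clip(np.round(samples / 0.25).astype(int) + 128, 0, 255)

    bits = (q[:, None] >> np.arange(8)) & 1     # 8 codeword bits per sample
    p = np.clip(bits.mean(axis=0), 1e-12, 1 - 1e-12)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    print(h.sum())   # bits/sample if each bit position is coded independently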

  10. Common-Message Broadcast Channels with Feedback in the Nonasymptotic Regime: Full Feedback

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe

    2018-01-01

    We investigate the maximum coding rate achievable on a two-user broadcast channel for the case where a common message is transmitted with feedback using either fixed-blocklength codes or variable-length codes. For the fixed-blocklength-code setup, we establish nonasymptotic converse and achievability bounds. An asymptotic analysis of these bounds reveals that feedback improves the second-order term compared to the no-feedback case. In particular, for a certain class of anti-symmetric broadcast channels, we show that the dispersion is halved. For the variable-length-code setup, we demonstrate...

  11. Short-Term Memory Coding in Children With Intellectual Disabilities

    OpenAIRE

    Henry, L.; Conners, F.

    2008-01-01

    To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for MA and CA to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and word length effects). Neither the intellectual disabilities nor MA groups showed evidence for memory coding strategies. However, children in these gr...

  12. COMPARATIVE ANALYSIS OF THE METHODS FOR EVALUATING THE EFFECTIVE LENGTH OF COLUMNS

    Directory of Open Access Journals (Sweden)

    Paschal Chimeremeze Chiadighikaobi

    2017-08-01

    Full Text Available This article looks into the effective length of columns using different methods. The codes in use in this article are those from the AISC (American Institute of Steel Construction) and from AS 4100 (the Australian steel code). A conclusion was drawn after investigating a frame using three different methods. Solved Exercise 6 (LeMessurier Method) was investigated using the same frame but different dimensions. Further analysis and investigation will be done using Java codes to analyze the frames.

  13. [Myopia: frequency of lattice degeneration and axial length].

    Science.gov (United States)

    Martín Sánchez, M D; Roldán Pallarés, M

    2001-05-01

    To evaluate the relationship between lattice retinal degeneration and axial length of the eye in different grades of myopia. A sample of 200 eyes from 124 myopic patients was collected by chance. The average age was 34.8 years (20-50 years) and the myopia was between 0.5 and 20 diopters (D). The eyes were grouped according to the degree of refraction defect; the mean axial length of each group (A-scan) and the frequency of lattice retinal degeneration were determined, and the relationship between these variables was studied. The possible influence of age on our results was also considered. For the statistical analysis, the SAS 6.07 program was used, with analysis of variance for quantitative variables and the chi-square test for qualitative variables at 5% significance. A multivariable linear regression model was also adjusted. The highest frequency of lattice retinal degeneration occurred in those myopia patients having more than 15 D, and also in the group of myopia patients between 3 and 6 D, but this did not show statistical significance when compared with the other myopic groups. If the axial length is assessed, a greater frequency of lattice retinal degeneration is also found when the axial length is 25-27 mm and 29-30 mm, which correspond, respectively, to myopias between 3-10 D and more than 15 D. When the multivariable linear regression model was adjusted, the axial length showed the existence of lattice retinal degeneration (beta 0.41 mm; p=0.08) adjusted by the number of diopters (beta 0.38 mm). A greater frequency of lattice retinal degeneration was found for myopias with axial eye length between 29-30 mm (more than 15 D), and 25-27 mm (between 3-10 D).

  14. Validation of favor code linear elastic fracture solutions for finite-length flaw geometries

    International Nuclear Information System (INIS)

    Dickson, T.L.; Keeney, J.A.; Bryson, J.W.

    1995-01-01

    One of the current tasks within the US Nuclear Regulatory Commission (NRC)-funded Heavy Section Steel Technology Program (HSST) at Oak Ridge National Laboratory (ORNL) is the continuing development of the FAVOR (Fracture Analysis of Vessels: Oak Ridge) computer code. FAVOR performs structural integrity analyses of embrittled nuclear reactor pressure vessels (RPVs) with stainless steel cladding, to evaluate compliance with the applicable regulatory criteria. Since the initial release of FAVOR, the HSST program has continued to enhance the capabilities of the FAVOR code. ABAQUS, a nuclear quality assurance certified (NQA-1) general multidimensional finite element code with fracture mechanics capabilities, was used to generate a database of stress-intensity-factor influence coefficients (SIFICs) for a range of axially and circumferentially oriented semielliptical inner-surface flaw geometries applicable to RPVs with an internal radius (Ri) to wall thickness (w) ratio of 10. This database of SIFICs has been incorporated into a development version of FAVOR, providing it with the capability to perform deterministic and probabilistic fracture analyses of RPVs subjected to transients, such as pressurized thermal shock (PTS), for various flaw geometries. This paper discusses the SIFIC database, comparisons with other investigators, and some of the benchmark verification problem specifications and solutions.

  15. Validation of the Danish 7-day pre-coded food diary among adults: energy intake v. energy expenditure and recording length

    DEFF Research Database (Denmark)

    Biltoft-Jensen, Anja Pia; Matthiessen, Jeppe; Rasmussen, Lone Banke

    2009-01-01

    Under-reporting of energy intake (EI) is a well-known problem when measuring dietary intake in free-living populations. The present study aimed at quantifying misreporting by comparing EI estimated from the Danish pre-coded food diary against energy expenditure (EE) measured with a validated...... position-and-motion instrument (ActiReg®). Further, the influence of recording length on EI:BMR, percentage consumers, the number of meal occasions and recorded food items per meal was examined. A total of 138 Danish volunteers aged 20–59 years wore the ActiReg® and recorded their food intake for 7...... for EI and EE were − 6·29 and 3·09 MJ/d. Of the participants, 73 % were classified as acceptable reporters, 26 % as under-reporters and 1 % as over-reporters. EI:BMR was significantly lower on 1–3 consecutive recording days compared with 4–7 recording days (P food...

  16. Computer code FIT

    International Nuclear Information System (INIS)

    Rohmann, D.; Koehler, T.

    1987-02-01

    This is a description of the computer code FIT, written in FORTRAN-77 for a PDP 11/34. FIT is an interactive program to deduce position, width and intensity of lines of X-ray spectra (max. length of 4K channels). The lines (max. 30 lines per fit) may have Gauss- or Voigt-profile, as well as exponential tails. Spectrum and fit can be displayed on a Tektronix terminal. (orig.)
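
    In modern terms, the core of such a program is a nonlinear least-squares fit of a line profile to a channel spectrum. A minimal scipy sketch of the Gaussian case (synthetic data; FIT's Voigt profiles, exponential tails, and interactive display are not reproduced):

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(x, pos, width, height, bg):
        """Gaussian line on a flat background."""
        return height * np.exp(-0.5 * ((x - pos) / width) ** 2) + bg

    channels = np.arange(200, dtype=float)
    rng = np.random.default_rng(1)
    spectrum = rng.poisson(gauss(channels, 105.0, 4.0, 500.0, 20.0)).astype(float)

    popt, _ = curve_fit(gauss, channels, spectrum, p0=[100.0, 5.0, 400.0, 10.0])
    print(popt)      # fitted position, width, intensity, background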

  17. Jointly Decoded Raptor Codes: Analysis and Design for the BIAWGN Channel

    Directory of Open Access Journals (Sweden)

    Venkiah Auguste

    2009-01-01

    Full Text Available We are interested in the analysis and optimization of Raptor codes under a joint decoding framework, that is, when the precode and the fountain code exchange soft information iteratively. We develop an analytical asymptotic convergence analysis of the joint decoder, derive an optimization method for the design of efficient output degree distributions, and show that the new optimized distributions outperform the existing ones, both at long and moderate lengths. We also show that jointly decoded Raptor codes are robust to channel variation: they perform reasonably well over a wide range of channel capacities. This robustness property was already known for the erasure channel but not for the Gaussian channel. Finally, we discuss some finite length code design issues. Contrary to what is commonly believed, we show by simulations that using a relatively low rate for the precode, we can greatly improve the error floor performance of the Raptor code.

  18. ETR/ITER systems code

    Energy Technology Data Exchange (ETDEWEB)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L. (ed.)

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  19. ETR/ITER systems code

    International Nuclear Information System (INIS)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs

  20. ComboCoding: Combined intra-/inter-flow network coding for TCP over disruptive MANETs

    Directory of Open Access Journals (Sweden)

    Chien-Chia Chen

    2011-07-01

    Full Text Available TCP over wireless networks is challenging due to random losses and ACK interference. Although network coding schemes have been proposed to improve TCP robustness against extreme random losses, a critical problem still remains of DATA–ACK interference. To address this issue, we use inter-flow coding between DATA and ACK to reduce the number of transmissions among nodes. In addition, we also utilize a “pipeline” random linear coding scheme with adaptive redundancy to overcome high packet loss over unreliable links. The resulting coding scheme, ComboCoding, combines intra-flow and inter-flow coding to provide robust TCP transmission in disruptive wireless networks. The main contributions of our scheme are twofold: the efficient combination of random linear coding and XOR coding on bi-directional streams (DATA and ACK), and the novel redundancy control scheme that adapts to time-varying and space-varying link loss. The adaptive ComboCoding was tested on a variable hop string topology with unstable links and on a multipath MANET with dynamic topology. Simulation results show that TCP with ComboCoding delivers higher throughput than with other coding options in high loss and mobile scenarios, while introducing minimal overhead in normal operation.
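
    The inter-flow half of the scheme is ordinary XOR network coding: a relay holding a DATA packet of one direction and an ACK of the other broadcasts their XOR, and each endpoint cancels the packet it already knows. A minimal sketch (packet contents hypothetical; a real implementation would carry lengths and IDs in a header):

    def xor_packets(a: bytes, b: bytes) -> bytes:
        """XOR two packets, zero-padding the shorter one."""
        n = max(len(a), len(b))
        return bytes(x ^ y for x, y in
                     zip(a.ljust(n, b"\0"), b.ljust(n, b"\0")))

    data = b"DATA seq=42 payload=..."
    ack = b"ACK seq=41"
    coded = xor_packets(data, ack)            # one transmission instead of two

    assert xor_packets(coded, data)[:len(ack)] == ack    # DATA owner gets ACK
    assert xor_packets(coded, ack)[:len(data)] == data   # ACK owner gets DATA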

  1. HIV1 V3 loop hypermutability is enhanced by the guanine usage bias in the part of env gene coding for it.

    Science.gov (United States)

    Khrustalev, Vladislav Victorovich

    2009-01-01

    Guanine is the most mutable nucleotide in HIV genes because of frequently occurring G to A transitions, which are caused by cytosine deamination in viral DNA minus strands catalyzed by APOBEC enzymes. Distribution of guanine between three codon positions should influence the probability for G to A mutation to be nonsynonymous (to occur in first or second codon position). We discovered that nucleotide sequences of env genes coding for third variable regions (V3 loops) of gp120 from HIV1 and HIV2 have different kinds of guanine usage biases. In the HIV1 reference strain and 100 additionally analyzed HIV1 strains the guanine usage bias in V3 loop coding regions (2G>1G>3G) should lead to elevated nonsynonymous G to A transitions occurrence rates. In the HIV2 reference strain and 100 other HIV2 strains guanine usage bias in V3 loop coding regions (3G>2G>1G) should protect V3 loops from hypermutability. According to the HIV1 and HIV2 V3 alignment, insertion of the sequence enriched with 2G (21 codons in length) occurred during the evolution of HIV1 predecessor, while insertion of the different sequence enriched with 3G (19 codons in length) occurred during the evolution of HIV2 predecessor. The higher is the level of 3G in the V3 coding region, the lower should be the immune escaping mutation occurrence rates. This hypothesis was tested in this study by comparing the guanine usage in V3 loop coding regions from HIV1 fast and slow progressors. All calculations have been performed by our algorithms "VVK In length", "VVK Dinucleotides" and "VVK Consensus" (www.barkovsky.hotmail.ru).
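
    Measuring the 1G/2G/3G bias of a coding region is a one-pass count over codons. A small Python sketch (the sequence shown is a made-up fragment, not an HIV V3 region):

    def guanine_by_codon_position(cds):
        """Percent guanine at codon positions 1, 2 and 3 of a CDS."""
        counts = [0, 0, 0]
        n_codons = len(cds) // 3
        for i in range(0, 3 * n_codons, 3):
            for pos in range(3):
                counts[pos] += cds[i + pos] == "G"
        return [100.0 * c / n_codons for c in counts]

    print(guanine_by_codon_position("TGGAGGGCAGGAGTAGCACCC"))
    # -> approx [57.1, 42.9, 28.6]: this toy fragment is 1G > 2G > 3G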

  2. Evaluation of three coding schemes designed for improved data communication

    Science.gov (United States)

    Snelsire, R. W.

    1974-01-01

    Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which is a function of both the amount of data rejected and the error rate. The Viterbi maximum likelihood decoding algorithm as a decoding procedure is reviewed. This evaluation is obtained by simulating the system on a digital computer. Short constraint length rate 1/2 quick-look codes are studied, and their performance is compared to general nonsystematic codes.

  3. Performance of RC columns with partial length corrosion

    International Nuclear Information System (INIS)

    Wang Xiaohui; Liang Fayun

    2008-01-01

    Experimental and analytical studies on the load capacity of reinforced concrete (RC) columns with partial length corrosion are presented, where only a fraction of the column length was corroded. Twelve simply supported columns were eccentrically loaded. The primary variables were partial length corrosion in tensile or compressive zone and the corrosion level within this length. The failure of the corroded column occurs in the partial length, mainly developed from or located nearby or merged with the longitudinal corrosion cracks. For RC column with large eccentricity, load capacity of the column is mainly influenced by the partial length corrosion in tensile zone; while for RC column with small eccentricity, load capacity of the column greatly decreases due to the partial length corrosion in compressive zone. The destruction of the longitudinally mechanical integrality of the column in the partial length leads to this great reduction of the load capacity of the RC column

  4. Optimal interference code based on machine learning

    Science.gov (United States)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, taking the m-sequence as an example. Building on coding theory, we introduce jamming methods. We simulate the interference effect with a probability model in MATLAB. Based on the length of decoding time the adversary spends, we find the optimal formula and optimal coefficients by machine learning, and thus obtain a new optimal interference code. First, in the recognition phase, this study judges the effect of interference by simulating the decoding time of the laser seeker. Then, we use laser active deception jamming to simulate the interference process in the tracking phase in the next section. In this study we choose the method of laser active deception jamming. In order to improve the performance of the interference, this paper simulates the model with MATLAB software. We find the least number of pulse intervals that must be received, from which we can deduce the precise interval number of the laser pointer for m-sequence encoding. In order to find the shortest space, we choose the greatest common divisor method. Then, combining this with the coding regularity found before, we restore the pulse intervals of the pseudo-random code that has already been received. Finally, we can control the time period of laser interference, obtain the optimal interference code, and also increase the probability of interference.
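
    The m-sequences analysed above come from a linear-feedback shift register defined by a primitive polynomial. A minimal generator (our sketch) for degree 4, x^4 + x + 1, whose output has period 2^4 - 1 = 15:

    def m_sequence(length=15):
        """m-sequence of the primitive polynomial x^4 + x + 1,
        i.e. the recurrence a[n+4] = a[n+1] XOR a[n]."""
        a = [1, 0, 0, 0]                 # any nonzero seed
        while len(a) < length:
            a.append(a[-3] ^ a[-4])
        return a[:length]

    print(m_sequence())
    # [1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1]: 8 ones, 7 zeros (balanced)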

  5. Code portability and data management considerations in the SAS3D LMFBR accident-analysis code

    International Nuclear Information System (INIS)

    Dunn, F.E.

    1981-01-01

    The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source code processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long running code that only runs well if sufficient main memory is available

  6. Fast comparison of IS radar code sequences for lag profile inversion

    Directory of Open Access Journals (Sweden)

    M. S. Lehtinen

    2008-08-01

    Full Text Available A fast method for theoretically comparing the a posteriori variances produced by different phase code sequences in incoherent scatter radar (ISR) experiments is introduced. Alternating codes of types 1 and 2 are known to be optimal for selected range resolutions, but the code sets are inconveniently long for many purposes like ground clutter estimation and in cases where coherent echoes from lower ionospheric layers are to be analyzed in addition to standard F-layer spectra.

    The method is used in practice for searching binary code quads that have estimation accuracy almost equal to that of much longer alternating code sets. Though the code sequences can consist of as few as four different transmission envelopes, the lag profile estimation variances are near to the theoretical minimum. Thus the short code sequence is equally good as a full cycle of alternating codes with the same pulse length and bit length. The short code groups cannot be directly decoded, but the decoding is done in connection with more computationally expensive lag profile inversion in data analysis.

    The actual code searches as well as the analysis and real data results from the found short code searches are explained in other papers sent to the same issue of this journal. We also discuss interesting subtle differences found between the different alternating codes by this method. We assume that thermal noise dominates the incoherent scatter signal.

  7. Performance of Product Codes and Related Structures with Iterated Decoding

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2011-01-01

    Several modifications of product codes have been suggested as standards for optical networks. We show that the performance exhibits a threshold that can be estimated from a result about random graphs. For moderate input bit error probabilities, the output error rates for codes of finite length can...

  8. Locally decodable codes and private information retrieval schemes

    CERN Document Server

    Yekhanin, Sergey

    2010-01-01

    Locally decodable codes (LDCs) are codes that simultaneously provide efficient random access retrieval and high noise resilience by allowing reliable reconstruction of an arbitrary bit of a message by looking at only a small number of randomly chosen codeword bits. Local decodability comes with a certain loss in terms of efficiency - specifically, locally decodable codes require longer codeword lengths than their classical counterparts. Private information retrieval (PIR) schemes are cryptographic protocols designed to safeguard the privacy of database users. They allow clients to retrieve rec
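
    The canonical toy example of an LDC is the Hadamard code: position a of the codeword stores the parity <x, a>, and any message bit is recoverable from just two randomly placed queries, so a few corrupted positions are tolerated at the cost of exponential codeword length, the trade-off described above. A sketch (ours, not from the book):

    import random

    def parity(v):
        return bin(v).count("1") & 1

    def hadamard_encode(x, k):
        """Codeword position a holds the parity <x, a>, for a = 0..2^k-1."""
        return [parity(x & a) for a in range(1 << k)]

    def local_decode_bit(c, k, i, trials=25):
        """Bit i of x via 2 queries per trial: c[a] XOR c[a ^ e_i] = x_i
        whenever both queried positions are uncorrupted; majority vote."""
        votes = 0
        for _ in range(trials):
            a = random.randrange(1 << k)
            votes += c[a] ^ c[a ^ (1 << i)]
        return int(votes > trials // 2)

    x, k = 0b1011, 4
    c = hadamard_encode(x, k)
    for j in random.sample(range(1 << k), 2):            # corrupt two positions
        c[j] ^= 1
    print([local_decode_bit(c, k, i) for i in range(k)])  # [1, 1, 0, 1] w.h.p.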

  9. Rate-Compatible Protograph LDPC Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.

  10. A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.

    Science.gov (United States)

    Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco

    2005-02-01

    Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n_g ≈ 100-150 genes). In the present investigation, a large random European population sample (average n_g ≈ 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005), much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.

  11. Development of the Heated Length Correction Factor

    International Nuclear Information System (INIS)

    Park, Ho-Young; Kim, Kang-Hoon; Nahm, Kee-Yil; Jung, Yil-Sup; Park, Eung-Jun

    2008-01-01

    The Critical Heat Flux (CHF) on a nuclear fuel rod is defined as a function of the flow channel geometry and the flow conditions. According to the choice of explanatory variables, there are three hypotheses to explain CHF on a uniformly heated vertical rod (the inlet condition hypothesis, the exit condition hypothesis, and the local condition hypothesis). For the inlet condition hypothesis, CHF is characterized as a function of system pressure, rod diameter, rod length, mass flow, and inlet subcooling. For the exit condition hypothesis, exit quality substitutes for inlet subcooling. Generally, the heated length effect on CHF in the exit condition hypothesis is smaller than that of the other variables. Heated length is usually excluded in the local condition hypothesis, which describes CHF with only local fluid conditions. Most commercial plants currently use empirical CHF correlations based on the local condition hypothesis. An empirical CHF correlation is developed by fitting the selected sensitive local variables to CHF test data using multiple non-linear regression. Because this kind of method cannot explain the physical meaning, it is difficult to reflect the proper effect of complex geometry. So the recent CHF correlation development strategy of nuclear fuel vendors is to first build a basic CHF correlation consisting of basic flow variables (local fluid conditions), and then to compensate with additional geometrical correction factors. Because the functional forms of the correction factors are determined separately from independent test data representing the corresponding geometry, they can be applied directly to other CHF correlations with only minor coefficient modification.

  12. Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.

    Science.gov (United States)

    Palkowski, Marek; Bielecki, Wlodzimierz

    2018-01-15

    RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithms is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques are within the iteration space slicing framework - the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem at generating parallel tiled code is defining a proper tile size and tile dimension which impact parallelism degree and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (parameters are variables defining tile size). With this purpose, we first generate two nonparametric tiled codes with different fixed tile sizes but with the same code structure and then derive a general affine model, which describes all integer factors available in expressions of those codes. Using this model and known integer factors present in the mentioned expressions (they define the left-hand side of the model), we find unknown integers in this model for each integer factor available in the same fixed tiled code position and replace in this code expressions, including integer factors, with those including parameters. Then we use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover in a given search space the best tile size and tile dimension maximizing target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in
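
    For reference, the recurrence that the tiled code parallelizes is short enough to state directly: Nussinov's algorithm maximizes the number of base pairs over all splits of a subsequence. Below is a minimal, untiled Python sketch of that dynamic program (the sequence and min_loop value are illustrative); the polyhedral techniques described above reorganize exactly these triply nested affine loops into tiles.

```python
def nussinov(seq, min_loop=1):
    """Nussinov maximum base-pairing DP (untiled reference version).

    N[i][j] = max pairs in seq[i..j]; bases i and j may pair only if
    j - i > min_loop. The tiled code discussed above reorders exactly
    these loop iterations for locality and parallelism.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = max(N[i + 1][j], N[i][j - 1])       # i or j unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, N[i + 1][j - 1] + 1)  # i pairs with j
            for k in range(i + 1, j):                  # bifurcation
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAAUCC"))  # expect 3: three pairs closing an AAA loop
```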

  13. Constructing snake-in-the-box codes and families of such codes covering the hypercube

    NARCIS (Netherlands)

    Haryanto, L.

    2007-01-01

    A snake-in-the-box code (or snake) is a list of binary words of length n such that each word differs from its successor in the list in precisely one bit position. Moreover, any two words in the list differ in at least two positions, unless they are neighbours in the list. The list is considered to
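
    The two defining properties in this abstract translate directly into a check: consecutive words at Hamming distance exactly 1, and non-neighbours at distance at least 2. A minimal sketch (words encoded as integers; this checks an open snake, not a coil that also closes the loop between the last and first word):

```python
from itertools import combinations

def hamming(a, b):
    # Hamming distance between two binary words stored as integers.
    return bin(a ^ b).count("1")

def is_snake(words):
    """Check the snake-in-the-box conditions described above."""
    # Each word differs from its successor in precisely one bit position.
    for u, v in zip(words, words[1:]):
        if hamming(u, v) != 1:
            return False
    # Any two non-neighbouring words differ in at least two positions.
    for (i, u), (j, v) in combinations(enumerate(words), 2):
        if j - i > 1 and hamming(u, v) < 2:
            return False
    return True

# A small snake in the 3-cube, words of length n = 3 given as integers.
print(is_snake([0b000, 0b001, 0b011, 0b111]))  # True
```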

  14. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present

  15. Sperm length evolution in the fungus-growing ants

    DEFF Research Database (Denmark)

    Baer, B.; Dijkstra, M. B.; Mueller, U. G.

    2009-01-01

    ...growing ants, representing 9 of the 12 recognized genera, and mapped these onto the ant phylogeny. We show that average sperm length across species is highly variable and decreases with mature colony size in basal genera with singly mated queens, suggesting that sperm production or storage constraints affect the evolution of sperm length. Sperm length does not decrease further in multiply mating leaf-cutting ants, despite substantial further increases in colony size. In a combined analysis, sexual dimorphism explained 63.1% of the variance in sperm length between species. As colony size was not a significant predictor in this analysis, we conclude that sperm production trade-offs in males have been the major selective force affecting sperm length across the fungus-growing ants, rather than storage constraints in females. The relationship between sperm length and sexual dimorphism remained robust...

  16. High performance mixed optical CDMA system using ZCC code and multiband OFDM

    Directory of Open Access Journals (Sweden)

    Nawawi N. M.

    2017-01-01

    In this paper, we propose a high-performance network design based on a mixed optical Code Division Multiple Access (CDMA) system using a Zero Cross Correlation (ZCC) code and multiband Orthogonal Frequency Division Multiplexing (OFDM), called catenated OFDM. In addition, we investigate the relevant design parameters: effective power, number of users, number of bands, code length, and code weight. We then analyze the system performance theoretically and comprehensively while considering up to five OFDM bands. The feasibility of the proposed system architecture is verified via numerical analysis. The results demonstrate that the developed modulation solution can significantly increase the total number of users, improving it by up to 80% for five catenated bands compared to a traditional optical CDMA system, with a code length of 80, transmitted at 622 Mbps. It is also demonstrated that the BER performance strongly depends on the code weight, especially with fewer users: as the code weight increases, the BER performance improves.

  17. High performance mixed optical CDMA system using ZCC code and multiband OFDM

    Science.gov (United States)

    Nawawi, N. M.; Anuar, M. S.; Junita, M. N.; Rashidi, C. B. M.

    2017-11-01

    In this paper, we propose a high-performance network design based on a mixed optical Code Division Multiple Access (CDMA) system using a Zero Cross Correlation (ZCC) code and multiband Orthogonal Frequency Division Multiplexing (OFDM), called catenated OFDM. In addition, we investigate the relevant design parameters: effective power, number of users, number of bands, code length, and code weight. We then analyze the system performance theoretically and comprehensively while considering up to five OFDM bands. The feasibility of the proposed system architecture is verified via numerical analysis. The results demonstrate that the developed modulation solution can significantly increase the total number of users, improving it by up to 80% for five catenated bands compared to a traditional optical CDMA system, with a code length of 80, transmitted at 622 Mbps. It is also demonstrated that the BER performance strongly depends on the code weight, especially with fewer users: as the code weight increases, the BER performance improves.

  18. The influence of finite-length flaw effects on PTS analyses

    International Nuclear Information System (INIS)

    Keeney-Walker, J.; Dickson, T.L.

    1993-01-01

    Current licensing issues within the nuclear industry dictate a need to investigate the effects of cladding on the extension of small finite-length cracks near the inside surface of a vessel. Because flaws having depths of the order of the combined clad and heat affected zone thickness dominate the frequency distribution of flaws, their initiation probabilities can govern calculated vessel failure probabilities. Current pressurized-thermal-shock (PTS) analysis computer programs recognize the influence of the inner-surface cladding layer in the heat transfer and stress analysis models, but assume the cladding fracture toughness is the same as that for the base material. The programs do not recognize the influence cladding may have in inhibiting crack initiation and propagation of shallow finite-length surface flaws. Limited experimental data and analyses indicate the cladding can inhibit the propagation of certain shallow flaws. This paper describes an analytical study which was carried out to determine (1) the minimum flaw depth for crack initiation under PTS loading for semicircular surface flaws in a clad reactor pressure vessel and (2) the impact, in terms of the conditional probability of vessel failure, of using a semicircular surface flaw as the initial flaw and assuming that the flaw cannot propagate in the cladding. The analytical results indicate that for initiation a much deeper critical crack depth is required for the finite-length flaw than for the infinite-length flaw, except for the least severe transient. The minimum flaw depths required for crack initiation from the finite-length flaw analyses were incorporated into a modified version of the OCA-P code. The modified code was applied to the analysis of selected PTS transients, and the results produced a substantial decrease in the conditional probability of failure. This initial study indicates a significant effect on probabilistic fracture analyses by incorporating finite-length flaw results

  19. Balanced and sparse Tamo-Barg codes

    KAUST Repository

    Halbawi, Wael; Duursma, Iwan; Dau, Hoang; Hassibi, Babak

    2017-01-01

    We construct balanced and sparse generator matrices for Tamo and Barg's Locally Recoverable Codes (LRCs). More specifically, for a cyclic Tamo-Barg code of length n, dimension k and locality r, we show how to deterministically construct a generator matrix where the number of nonzeros in any two columns differs by at most one, and where the weight of every row is d + r - 1, where d is the minimum distance of the code. Since LRCs are designed mainly for distributed storage systems, the results presented in this work provide a computationally balanced and efficient encoding scheme for these codes. The balanced property ensures that the computational effort exerted by any storage node is essentially the same, whilst the sparse property ensures that this effort is minimal. The work presented in this paper extends a similar result previously established for Reed-Solomon (RS) codes, where it is now known that any cyclic RS code possesses a generator matrix that is balanced as described, but is sparsest, meaning that each row has d nonzeros.
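
    The "balanced and sparse" property is easy to see mechanically: the rows of the generator matrix are cyclic shifts of one low-weight codeword, and spacing the shifts evenly spreads the nonzeros across columns. The sketch below only illustrates that bookkeeping, with a hypothetical weight-7 run standing in for a codeword of weight d + r - 1; it does not reproduce the paper's deterministic construction or its rank and distance guarantees.

```python
import numpy as np

n, k, w = 15, 5, 7
# Hypothetical weight-7 pattern standing in for a weight d + r - 1 codeword;
# rows of G are its cyclic shifts, spaced evenly around the length-n circle.
c = np.zeros(n, dtype=np.uint8)
c[:w] = 1
G = np.array([np.roll(c, s) for s in range(0, n, n // k)])  # shifts 0,3,6,9,12

print(G.sum(axis=1))            # every row has the same weight, 7 (sparse)
cols = G.sum(axis=0)
print(cols.max() - cols.min())  # 1: any two column weights differ by at most one
```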

  20. Balanced and sparse Tamo-Barg codes

    KAUST Repository

    Halbawi, Wael

    2017-08-29

    We construct balanced and sparse generator matrices for Tamo and Barg's Locally Recoverable Codes (LRCs). More specifically, for a cyclic Tamo-Barg code of length n, dimension k and locality r, we show how to deterministically construct a generator matrix where the number of nonzeros in any two columns differs by at most one, and where the weight of every row is d + r - 1, where d is the minimum distance of the code. Since LRCs are designed mainly for distributed storage systems, the results presented in this work provide a computationally balanced and efficient encoding scheme for these codes. The balanced property ensures that the computational effort exerted by any storage node is essentially the same, whilst the sparse property ensures that this effort is minimal. The work presented in this paper extends a similar result previously established for Reed-Solomon (RS) codes, where it is now known that any cyclic RS code possesses a generator matrix that is balanced as described, but is sparsest, meaning that each row has d nonzeros.

  1. Automatic creation of LabVIEW network shared variables

    International Nuclear Information System (INIS)

    Kluge, T.; Schroeder, H.

    2012-01-01

    We are in the process of preparing the LabVIEW controlled system components of our Solid State Direct Drive experiments for integration into a Supervisory Control And Data Acquisition (SCADA) or distributed control system. The predetermined route to this is the generation of LabVIEW network shared variables that can easily be exported by LabVIEW to the SCADA system using OLE for Process Control (OPC) or other means. Many repetitive tasks are associated with the creation of the shared variables and the required code. We are introducing an efficient and inexpensive procedure that automatically creates shared variable libraries and sets default values for the shared variables. Furthermore, LabVIEW controls are created that are used for managing the connection to the shared variable inside the LabVIEW code operating on the shared variables. The procedure takes as input an XML spreadsheet defining the required variables. The procedure utilizes XSLT and LabVIEW scripting. In a later stage of the project the code generation can be expanded to also create code and configuration files that will become necessary in order to access the shared variables from the SCADA system of choice. (authors)
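
    As a rough stand-in for the XSLT step, the sketch below reads a hypothetical XML variable definition and emits a shared-variable library stub. The element and attribute names are invented for illustration and do not match the actual .lvlib schema or the authors' spreadsheet format; only the shape of the transformation is shown.

```python
import xml.etree.ElementTree as ET

# Hypothetical input format; all tag and attribute names are assumptions.
SPEC = """<variables>
  <var name="HeaterCurrent" type="Double"  default="0.0"/>
  <var name="InterlockOK"   type="Boolean" default="false"/>
</variables>"""

def build_library(spec_xml, libname="SSDD"):
    """Stand-in for the XSLT step: emit one <Variable> entry per declared
    variable, carrying its type and default value."""
    lib = ET.Element("Library", name=libname)
    for var in ET.fromstring(spec_xml):
        v = ET.SubElement(lib, "Variable",
                          name=var.get("name"), type=var.get("type"))
        ET.SubElement(v, "Default").text = var.get("default")
    return ET.tostring(lib, encoding="unicode")

print(build_library(SPEC))
```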

  2. Ultrasound strain imaging using Barker code

    Science.gov (United States)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft tissue elasticity in order to help clinicians detect lesions or cancers in tissues. In this paper, the Barker code is applied to strain imaging to improve its quality. The Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched filter output is -22 dB, which is unacceptable for ultrasound strain imaging, because a high sidelobe level causes high decorrelation noise. Instead of using the conventional matched filter, we use the Wiener filter to decode the Barker-coded echo signal and suppress the range sidelobes. We also compare the performance of the Barker code and the conventional short pulse by simulation. The simulation results demonstrate that the performance of the Wiener filter is much better than that of the matched filter, and the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse in low-eSNR or great-depth conditions due to the increased eSNR.
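
    The -22 dB figure quoted above is just the autocorrelation peak-to-sidelobe ratio of the length-13 Barker code, which a few lines of NumPy can reproduce (matched filtering a Barker-coded pulse amounts to computing this autocorrelation):

```python
import numpy as np

# Barker code of length 13, in +/-1 form.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Matched-filter output for the code itself = full autocorrelation.
acf = np.correlate(barker13, barker13, mode="full")
peak = acf.max()                                          # 13 (mainlobe)
sidelobe = np.abs(np.delete(acf, np.argmax(acf))).max()   # 1 (worst sidelobe)

print(20 * np.log10(sidelobe / peak))  # about -22.3 dB, the figure above
```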

  3. Decoding of interleaved Reed-Solomon codes using improved power decoding

    DEFF Research Database (Denmark)

    Puchinger, Sven; Rosenkilde né Nielsen, Johan

    2017-01-01

    We propose a new partial decoding algorithm for m-interleaved Reed-Solomon (IRS) codes that can decode, with high probability, a random error of relative weight 1 - R^(m/(m+1)) at all code rates R, in time polynomial in the code length n. For m > 2, this is an asymptotic improvement over the previous state-of-the-art for all rates, and the first improvement for R > 1/3 in the last 20 years. The method combines collaborative decoding of IRS codes with power decoding up to the Johnson radius.
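
    As a quick worked example of the radius claimed above, 1 - R^(m/(m+1)) can be tabulated against the classical unique-decoding radius (1 - R)/2 for a few rates and interleaving degrees:

```python
# Decoding radius 1 - R**(m/(m+1)) from the abstract, versus the classical
# unique-decoding radius (1 - R)/2, for a few rates R and degrees m.
for m in (2, 3, 5):
    for R in (0.25, 0.50, 0.75):
        print(f"m={m} R={R}: {1 - R**(m/(m+1)):.3f} vs {(1 - R)/2:.3f}")
```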

  4. Input/output manual of light water reactor fuel performance code FEMAXI-7 and its related codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa [Japan Atomic Energy Agency, Nuclear Safety Research Center, Tokai, Ibaraki (Japan); Saitou, Hiroaki [ITOCHU Techno-Solutions Corp., Tokyo (Japan)

    2012-07-15

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which has been fully disclosed in the code model description published recently as JAEA-Data/Code 2010-035. The present manual, which is the counterpart of this description, gives detailed explanations of operation method of FEMAXI-7 code and its related codes, methods of Input/Output, methods of source code modification, features of subroutine modules, and internal variables in a specific manner in order to facilitate users to perform a fuel analysis with FEMAXI-7. This report includes some descriptions which are modified from the original contents of JAEA-Data/Code 2010-035. A CD-ROM is attached as an appendix. (author)

  5. Input/output manual of light water reactor fuel performance code FEMAXI-7 and its related codes

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa; Saitou, Hiroaki

    2012-07-01

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which has been fully disclosed in the code model description published recently as JAEA-Data/Code 2010-035. The present manual, which is the counterpart of this description, gives detailed explanations of operation method of FEMAXI-7 code and its related codes, methods of Input/Output, methods of source code modification, features of subroutine modules, and internal variables in a specific manner in order to facilitate users to perform a fuel analysis with FEMAXI-7. This report includes some descriptions which are modified from the original contents of JAEA-Data/Code 2010-035. A CD-ROM is attached as an appendix. (author)

  6. DNA fingerprinting of Mycobacterium leprae strains using variable number tandem repeat (VNTR) - fragment length analysis (FLA).

    Science.gov (United States)

    Jensen, Ronald W; Rivest, Jason; Li, Wei; Vissa, Varalakshmi

    2011-07-15

    The study of the transmission of leprosy is particularly difficult since the causative agent, Mycobacterium leprae, cannot be cultured in the laboratory. The only sources of the bacteria are leprosy patients, and experimentally infected armadillos and nude mice. Thus, many of the methods used in modern epidemiology are not available for the study of leprosy. Despite an extensive global drug treatment program for leprosy implemented by the WHO, leprosy remains endemic in many countries with approximately 250,000 new cases each year. The entire M. leprae genome has been mapped and many loci have been identified that have repeated segments of 2 or more base pairs (called micro- and minisatellites). Clinical strains of M. leprae may vary in the number of tandem repeated segments (short tandem repeats, STR) at many of these loci. Variable number tandem repeat (VNTR) analysis has been used to distinguish different strains of the leprosy bacilli. Some of the loci appear to be more stable than others, showing less variation in repeat numbers, while others seem to change more rapidly, sometimes in the same patient. While the variability of certain VNTRs has brought up questions regarding their suitability for strain typing, the emerging data suggest that analyzing multiple loci, which are diverse in their stability, can be used as a valuable epidemiological tool. Multiple locus VNTR analysis (MLVA) has been used to study leprosy evolution and transmission in several countries including China, Malawi, the Philippines, and Brazil. MLVA involves multiple steps. First, bacterial DNA is extracted along with host tissue DNA from clinical biopsies or slit skin smears (SSS). The desired loci are then amplified from the extracted DNA via polymerase chain reaction (PCR). Fluorescently-labeled primers for 4-5 different loci are used per reaction, with 18 loci being amplified in a total of four reactions. The PCR products may be subjected to agarose gel electrophoresis to verify the

  7. Axial Length/Corneal Radius of Curvature Ratio and Refractive ...

    African Journals Online (AJOL)

    2017-12-05

    ... variously described as determined by the ocular biometric variables. There have been many studies on the relationship between refractive error and ocular axial length (AL), anterior chamber depth, corneal radius of curvature (CR), keratometric readings, as well as other ocular biometric variables such as ...

  8. Photoluminescence Enhancement of Poly(3-methylthiophene) Nanowires upon Length-Variable DNA Hybridization

    Directory of Open Access Journals (Sweden)

    Jingyuan Huang

    2018-01-01

    The use of low-dimensional inorganic or organic nanomaterials has advantages for DNA and protein recognition due to their sensitivity, accuracy, and physical size matching. In this research, poly(3-methylthiophene) (P3MT) nanowires (NWs) are electrochemically prepared with a dopant and then functionalized with a probe DNA (pDNA) sequence through electrostatic interaction. pDNA sequences of various lengths (10-, 20-, and 30-mer) are conjugated to the P3MT NWs and then hybridized with their complementary target DNA (tDNA) sequences. The nanoscale photoluminescence (PL) properties of the P3MT NWs are studied throughout the whole process in the solid state. In addition, the correlation between the PL enhancement and the double-helix DNA of various lengths is demonstrated.

  9. Clustering of Beijing genotype Mycobacterium tuberculosis isolates from the Mekong delta in Vietnam on the basis of variable number of tandem repeat versus restriction fragment length polymorphism typing.

    NARCIS (Netherlands)

    Huyen, M.N.; Kremer, K.; Lan, N.T.; Buu, T.N.; Cobelens, F.G.; Tiemersma, E.W.; Haas, P. de; Soolingen, D. van

    2013-01-01

    BACKGROUND: In comparison to restriction fragment length polymorphism (RFLP) typing, variable number of tandem repeat (VNTR) typing is easier to perform, faster and yields results in a simple, numerical format. Therefore, this technique has gained recognition as the new international gold standard

  10. Input/output manual of light water reactor fuel analysis code FEMAXI-7 and its related codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa [Japan Atomic Energy Agency, Nuclear Safety Research Center, Tokai, Ibaraki (Japan); Saitou, Hiroaki [ITOCHU Techno-Solutions Corporation, Tokyo (Japan)

    2013-10-15

    A light water reactor fuel analysis code FEMAXI-7 has been developed, as an extended version from the former version FEMAXI-6, for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which are fully disclosed in the code model description published in the form of another JAEA-Data/Code report. The present manual, which is the very counterpart of this description document, gives detailed explanations of files and operation method of FEMAXI-7 code and its related codes, methods of input/output, sample Input/Output, methods of source code modification, subroutine structure, and internal variables in a specific manner in order to facilitate users to perform fuel analysis by FEMAXI-7. (author)

  11. Input/output manual of light water reactor fuel analysis code FEMAXI-7 and its related codes

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa; Saitou, Hiroaki

    2013-10-01

    A light water reactor fuel analysis code FEMAXI-7 has been developed, as an extended version from the former version FEMAXI-6, for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which are fully disclosed in the code model description published in the form of another JAEA-Data/Code report. The present manual, which is the very counterpart of this description document, gives detailed explanations of files and operation method of FEMAXI-7 code and its related codes, methods of input/output, sample Input/Output, methods of source code modification, subroutine structure, and internal variables in a specific manner in order to facilitate users to perform fuel analysis by FEMAXI-7. (author)

  12. Variable-Period Undulators For Synchrotron Radiation

    Science.gov (United States)

    Shenoy, Gopal; Lewellen, John; Shu, Deming; Vinokurov, Nikolai

    2005-02-22

    A new and improved undulator design is provided that enables a variable period length for the production of synchrotron radiation from both medium-energy and high-energy storage rings. The variable period length is achieved using a staggered array of pole pieces made up of high permeability material, permanent magnet material, or an electromagnetic structure. The pole pieces are separated by a variable width space. The sum of the variable width space and the pole width would therefore define the period of the undulator. Features and advantages of the invention include broad photon energy tunability, constant power operation and constant brilliance operation.

  13. Variable-Period Undulators for Synchrotron Radiation

    Energy Technology Data Exchange (ETDEWEB)

    Shenoy, Gopal; Lewellen, John; Shu, Deming; Vinokurov, Nikolai

    2005-02-22

    A new and improved undulator design is provided that enables a variable period length for the production of synchrotron radiation from both medium-energy and high energy storage rings. The variable period length is achieved using a staggered array of pole pieces made up of high permeability material, permanent magnet material, or an electromagnetic structure. The pole pieces are separated by a variable width space. The sum of the variable width space and the pole width would therefore define the period of the undulator. Features and advantages of the invention include broad photon energy tunability, constant power operation and constant brilliance operation.

  14. Information-Dispersion-Entropy-Based Blind Recognition of Binary BCH Codes in Soft Decision Situations

    Directory of Open Access Journals (Sweden)

    Yimeng Zhang

    2013-05-01

    A method for blind recognition of the coding parameters of binary Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed in this paper. We consider an intelligent communication receiver that can blindly recognize the coding parameters of the received data stream. The only prior knowledge is that the stream is encoded using a binary BCH code, while the coding parameters are unknown. The problem arises in the context of non-cooperative communications or adaptive coding and modulation (ACM) for cognitive radio networks. The recognition process includes two major procedures: code length estimation and generator polynomial reconstruction. A hard-decision method has been proposed in a previous publication. In this paper we propose a recognition approach for soft-decision situations with binary phase-shift keying (BPSK) modulation and additive white Gaussian noise (AWGN) channels. The code length is estimated by maximizing the root information dispersion entropy function, and we then search for the code roots to reconstruct the primitive and generator polynomials. By utilizing the soft output of the channel, the recognition performance is improved; simulations show the efficiency of the proposed algorithm.
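
    For intuition about the code-length estimation step, a common hard-decision baseline (not the entropy method proposed above) reshapes a noiseless bit stream into rows of each candidate length n: at the true length every row is a codeword of a k-dimensional code, so the GF(2) rank of the matrix drops below n. A sketch, assuming block synchronization, no channel errors, and a toy (7,4) Hamming code for the demo:

```python
import numpy as np

def gf2_rank(M):
    """Rank over GF(2) by Gaussian elimination."""
    M = (M % 2).astype(np.uint8)
    r, (rows, cols) = 0, M.shape
    for c in range(cols):
        piv = np.nonzero(M[r:, c])[0]
        if piv.size == 0:
            continue
        p = r + piv[0]
        M[[r, p]] = M[[p, r]]            # bring pivot row into place
        mask = M[:, c] == 1
        mask[r] = False
        M[mask] ^= M[r]                  # clear column c elsewhere
        r += 1
        if r == rows:
            break
    return r

def estimate_length(bits, max_n=64):
    """Smallest candidate length whose reshaped matrix is rank-deficient."""
    bits = np.asarray(bits, dtype=np.uint8)
    for n in range(2, max_n + 1):
        rows = len(bits) // n
        if rows <= n:                    # need more rows than columns
            break
        M = bits[: rows * n].reshape(rows, n)
        if gf2_rank(M) < n:              # rows confined to a k < n dim code
            return n
    return None

# Demo: stream of (7,4) Hamming codewords, block-aligned and noiseless.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
msgs = np.random.default_rng(7).integers(0, 2, size=(200, 4), dtype=np.uint8)
stream = (msgs @ G % 2).ravel()
print(estimate_length(stream))  # 7
```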

  15. Securing optical code-division multiple-access networks with a postswitching coding scheme of signature reconfiguration

    Science.gov (United States)

    Huang, Jen-Fa; Meng, Sheng-Hui; Lin, Ying-Chen

    2014-11-01

    The optical code-division multiple-access (OCDMA) technique is considered a good candidate for providing optical-layer security. An enhanced OCDMA network security mechanism is presented in which pseudonoise (PN) maximal-length sequence (M-sequence) signature codes are switched to protect against eavesdropping. Signature codes unique to individual OCDMA-network users are reconfigured according to the state of the controlling electrical shift registers. Examples of signature reconfiguration following state switching of the controlling shift register are numerically illustrated for both the network user and the eavesdropper. Dynamically changing the PN state of the shift register to reconfigure the user signature sequence is shown to hinder eavesdroppers' efforts to decode correct data sequences. The proposed scheme increases the probability of eavesdroppers committing errors in decoding and thereby substantially enhances the confidentiality of an OCDMA network.
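
    The signature material here is an M-sequence, i.e. the output of a maximal-length linear feedback shift register. A minimal sketch with the primitive polynomial x^4 + x + 1 (period 2^4 - 1 = 15, recurrence a[n] = a[n-3] XOR a[n-4]) shows the property the scheme exploits: changing the register state only cyclically shifts the sequence, so switching states reassigns signatures without degrading their correlation structure.

```python
def msequence(seed=(1, 0, 0, 0)):
    """One period (15 bits) of the M-sequence from x**4 + x + 1.

    Any nonzero 4-bit seed (register state) yields a cyclic shift of the
    same sequence -- which is why switching the register state
    reconfigures a user signature without changing its autocorrelation.
    """
    a = list(seed)
    assert len(a) == 4 and any(a)
    while len(a) < 2**4 - 1:
        a.append(a[-3] ^ a[-4])   # recurrence from x^4 + x + 1
    return a

s1 = msequence((1, 0, 0, 0))
s2 = msequence((0, 0, 0, 1))   # a different register state
print(sum(s1))                                         # 8 ones vs 7 zeros
print(any(s2 == s1[i:] + s1[:i] for i in range(15)))   # True: a cyclic shift
```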

  16. A restructuring proposal based on MELCOR for severe accident analysis code development

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sun Hee; Song, Y. M.; Kim, D. H. [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-03-01

    In order to develop a template based on the existing MELCOR code, the data saving and transfer methods currently used in MELCOR are addressed first. A naming convention for the constructed module is then suggested, and an automatic program has been developed to convert old variables into new derived-type variables. Finally, a restructured module for the SPR package has been developed for application to MELCOR. The current MELCOR code reserves fixed-size storage for four different data types and manages variable-sized data within the storage limit by stacking the data of the packages, using pointers to identify the variables between packages. This technique makes it difficult to grasp the meaning of the variables and wastes memory. New features of FORTRAN90, however, make it possible to allocate storage dynamically and to use user-defined data types, which led to the restructured module for the SPR package. The developed module allows efficient memory handling and an easier understanding of the code. The template has been validated by comparing the results of the modified code with those from the existing code, and it is confirmed that the results are the same. The template suggested for the SPR package points the way to extending the template to the entire code. It is expected that the template will accelerate code domestication thanks to a direct understanding of each variable and easy implementation of modified or newly developed models. 3 refs., 15 figs., 16 tabs. (Author)

  17. Quantum mean-field decoding algorithm for error-correcting codes

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Saika, Yohei; Okada, Masato

    2009-01-01

    We numerically examine a quantum version of the TAP (Thouless-Anderson-Palmer) mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check its usefulness in retrieving the original bit sequence (message) of finite length. The decoding dynamics is derived explicitly, and we evaluate the average-case performance through the bit-error rate (BER).

  18. On the construction of capacity-achieving lattice Gaussian codes

    KAUST Repository

    Alghamdi, Wael Mohammed Abdullah

    2016-08-15

    In this paper, we propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3]. © 2016 IEEE.

  19. On the construction of capacity-achieving lattice Gaussian codes

    KAUST Repository

    Alghamdi, Wael; Abediseid, Walid; Alouini, Mohamed-Slim

    2016-01-01

    In this paper, we propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3]. © 2016 IEEE.

  20. A restructuring of TF package for MIDAS computer code

    International Nuclear Information System (INIS)

    Park, S. H.; Song, Y. M.; Kim, D. H.

    2002-01-01

    The TF package, which defines interpolation and extrapolation conditions through user-defined tables, has been restructured in the MIDAS computer code. To do this, the data transfer methods of the current MELCOR code were modified and adopted into the TF package. The data structure of the current MELCOR code, written in FORTRAN77, makes it difficult to grasp the meaning of the variables and wastes memory. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to efficient memory handling and an easier understanding of the code. The restructuring of the TF package addressed in this paper covers module development and subroutine modification, and treats MELGEN, which generates the restart file, as well as MELCOR, which performs the calculation. Validation has been done by comparing the results of the modified code with those from the existing code, and it is confirmed that the results are the same. This suggests that a similar approach could be extended to the entire code package. It is expected that the code restructuring will accelerate code domestication thanks to a direct understanding of each variable and easy implementation of modified or newly developed models.

  1. Estimation of genetic variability and selection response for clutch length in dwarf brown-egg layers carrying or not the naked neck gene

    Directory of Open Access Journals (Sweden)

    Tixier-Boichard Michèle

    2003-03-01

    In order to investigate the possibility of using the dwarf gene for egg production, two dwarf brown-egg laying lines were selected for 16 generations on average clutch length; one line (L1) was normally feathered and the other (L2) was homozygous for the naked neck gene NA. A control line from the same base population, dwarf and segregating for the NA gene, was maintained during the selection experiment under random mating. The average clutch length was normalized using a Box-Cox transformation. Genetic variability and selection response were estimated either with the mixed-model methodology, or with the classical methods of calculating the genetic gain as the deviation from the control line and the realized heritability as the ratio of the selection response to the cumulative selection differentials. The heritability of average clutch length was estimated to be 0.42 ± 0.02 with a multiple-trait animal model, whereas the estimates of the realized heritability were lower, being 0.28 and 0.22 in lines L1 and L2, respectively. REML estimates of heritability were found to decline with generations of selection, suggesting a departure from the infinitesimal model, either because a limited number of genes was involved or because their frequencies were changed. The yearly genetic gains in average clutch length, after normalization, were estimated to be 0.37 ± 0.02 and 0.33 ± 0.04 with the classical methods, and 0.46 ± 0.02 and 0.43 ± 0.01 with the animal model methodology, for lines L1 and L2 respectively, which represented about 30% of the genetic standard deviation on the transformed scale. Selection response appeared to be faster in line L2, homozygous for the NA gene, but the final cumulated selection response for clutch length was not different between the L1 and L2 lines at generation 16.

  2. Spike Code Flow in Cultured Neuronal Networks.

    Science.gov (United States)

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei

    2016-01-01

    We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted the short codes from spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the code flow of "1101" and "1011," which are typical pseudorandom sequences of the kind often encountered in the literature and in our experiments. They seemed to flow from one electrode to the neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes, to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of code. Furthermore, if the spike trains were shuffled in interval orders or in electrodes, they became significantly small. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.

  3. Bar Coding and Tracking in Pathology.

    Science.gov (United States)

    Hanna, Matthew G; Pantanowitz, Liron

    2016-03-01

    Bar coding and specimen tracking are intricately linked to pathology workflow and efficiency. In the pathology laboratory, bar coding facilitates many laboratory practices, including specimen tracking, automation, and quality management. Data obtained from bar coding can be used to identify, locate, standardize, and audit specimens to achieve maximal laboratory efficiency and patient safety. Variables that need to be considered when implementing and maintaining a bar coding and tracking system include assets to be labeled, bar code symbologies, hardware, software, workflow, and laboratory and information technology infrastructure as well as interoperability with the laboratory information system. This article addresses these issues, primarily focusing on surgical pathology. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Clustering of Beijing genotype Mycobacterium tuberculosis isolates from the Mekong delta in Vietnam on the basis of variable number of tandem repeat versus restriction fragment length polymorphism typing

    NARCIS (Netherlands)

    Huyen, Mai N. T.; Kremer, Kristin; Lan, Nguyen T. N.; Buu, Tran N.; Cobelens, Frank G. J.; Tiemersma, Edine W.; de Haas, Petra; van Soolingen, Dick

    2013-01-01

    In comparison to restriction fragment length polymorphism (RFLP) typing, variable number of tandem repeat (VNTR) typing is easier to perform, faster and yields results in a simple, numerical format. Therefore, this technique has gained recognition as the new international gold standard in typing of

  5. Phonological, visual, and semantic coding strategies and children's short-term picture memory span.

    Science.gov (United States)

    Henry, Lucy A; Messer, David; Luger-Klein, Scarlett; Crane, Laura

    2012-01-01

    Three experiments addressed controversies in the previous literature on the development of phonological and other forms of short-term memory coding in children, using assessments of picture memory span that ruled out potentially confounding effects of verbal input and output. Picture materials were varied in terms of phonological similarity, visual similarity, semantic similarity, and word length. Older children (6/8-year-olds), but not younger children (4/5-year-olds), demonstrated robust and consistent phonological similarity and word length effects, indicating that they were using phonological coding strategies. This confirmed findings initially reported by Conrad (1971), but subsequently questioned by other authors. However, in contrast to some previous research, little evidence was found for a distinct visual coding stage at 4 years, casting doubt on assumptions that this is a developmental stage that consistently precedes phonological coding. There was some evidence for a dual visual and phonological coding stage prior to exclusive use of phonological coding at around 5-6 years. Evidence for semantic similarity effects was limited, suggesting that semantic coding is not a key method by which young children recall lists of pictures.

  6. Calibration Methods for Reliability-Based Design Codes

    DEFF Research Database (Denmark)

    Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard

    2004-01-01

    The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...

  7. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques, and that is often used interchangeably with speech coding, is the term voice coding. This term is more generic in the sense that the

  8. An investigation into the variables associated with length of hospital stay related to primary cleft lip and palate surgery and alveolar bone grafting.

    Science.gov (United States)

    Izadi, N; Haers, P E

    2012-10-01

    This retrospective study evaluated variables associated with length of stay (LOS) in hospital for 406 admissions of primary cleft lip and palate and alveolus surgery between January 2007 and April 2009. Three patients were treated as day cases, 343 (84%) stayed one night, 48 (12%) stayed 2 nights and 12 (3%) stayed > 2 nights. Poisson regression analysis showed that there was no association between postoperative LOS and age, distance travelled, diagnosis and type of operation, with a p value > 0.2 for all variables. 60/406 patients stayed 2 nights or more postoperatively mostly due to poor pain control and inadequate oral intake. Patients with palate repair were more likely to have postoperative LOS > 1 night, compared to patients with lip repair, p value = 0.011. Four patients (1%), all of whom had undergone cleft palate surgery, were readmitted within 4 weeks of the operation due to respiratory obstruction or haemorrhage. Using logistic regression, evidence showed that these readmissions were related to a longer original postoperative LOS. This study shows that length of stay for primary cleft lip, palate and alveolus surgery can in most cases be limited to one night postoperatively, provided that adequate support can be provided at home. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. The determinants of IPO firm prospectus length in Africa

    Directory of Open Access Journals (Sweden)

    Bruce Hearn

    2013-04-01

    This paper studies the differential impact on IPO firm listing prospectus length of increasing proportions of foreign directors from civil as opposed to common law societies, and of social elites. Using a unique hand-collected and comprehensive sample of 165 IPO firms from across 18 African countries, the evidence suggests that an increasing proportion of directors from civil code law countries is associated with shorter prospectuses, while the opposite is true for their common law counterparts. Furthermore, an increasing proportion of directors drawn from elevated social positions in indigenous society is related to increasing prospectus length in North Africa while being insignificant in SSA.

  10. Quantum Kronecker sum-product low-density parity-check codes with finite rate

    Science.gov (United States)

    Kovalev, Alexey A.; Pryadko, Leonid P.

    2013-07-01

    We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.

  11. COSINE software development based on code generation technology

    International Nuclear Information System (INIS)

    Ren Hao; Mo Wentao; Liu Shuo; Zhao Guang

    2013-01-01

    The code generation technology can significantly improve the quality and productivity of software development and reduce software development risk. At present, code generators are usually based on UML model-driven technology, which cannot satisfy the development demands of nuclear power calculation software. The features of scientific computing programs were analyzed, and a FORTRAN code generator (FCG) based on C# was developed in this paper. FCG can automatically generate FORTRAN module variable definitions according to input metadata. FCG can also generate memory allocation interfaces for dynamic variables as well as data access interfaces. FCG was applied to the development of the core and system integrated engine for design and analysis (COSINE) software. The results show that FCG can greatly improve the development efficiency of nuclear power calculation software and reduce the defect rate of software development. (authors)
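
    The generation step FCG performs can be pictured with a toy stand-in: given metadata records (name, type, array rank), emit the corresponding Fortran module with allocatable declarations. The metadata format below is invented for illustration; the abstract does not describe FCG's actual input or output conventions.

```python
# Hypothetical metadata records (name, Fortran type, array rank); the real
# FCG input format is not public, so this only sketches the generation idea.
META = [
    ("flux",    "real(8)",           2),
    ("n_nodes", "integer",           0),
    ("label",   "character(len=32)", 0),
]

def emit_module(name, meta):
    """Emit a Fortran module declaring each variable; array-valued entries
    become allocatable so a separate allocation interface can size them."""
    lines = [f"module {name}", "  implicit none"]
    for var, ftype, rank in meta:
        attr = ""
        if rank > 0:
            attr = ", allocatable, dimension(" + ",".join([":"] * rank) + ")"
        lines.append(f"  {ftype}{attr} :: {var}")
    lines.append(f"end module {name}")
    return "\n".join(lines)

print(emit_module("core_data", META))
```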

  12. New features in the design code TLIE

    International Nuclear Information System (INIS)

    van Zeijts, J.

    1993-01-01

    We present features recently installed in the arbitrary-order accelerator design code TLIE. The code uses the MAD input language, and implements programmable extensions modeled after the C language that make it a powerful tool in a wide range of applications: from basic beamline design to high precision-high order design and even control room applications. The basic quantities important in accelerator design are easily accessible from inside the control language. Entities like parameters in elements (strength, current), transfer maps (either in Taylor series or in Lie algebraic form), lines, and beams (either as sets of particles or as distributions) are among the type of variables available. These variables can be set, used as arguments in subroutines, or just typed out. The code is easily extensible with new datatypes

  13. Entanglement-assisted quantum quasicyclic low-density parity-check codes

    Science.gov (United States)

    Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor

    2009-03-01

    We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We have shown that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many four-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.

  14. Symbol synchronization in convolutionally coded systems

    Science.gov (United States)

    Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.

    1979-01-01

    Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.

  15. Comparative study of IS6110 restriction fragment length polymorphism and variable-number tandem-repeat typing of Mycobacterium tuberculosis isolates in the Netherlands, based on a 5-year nationwide survey

    NARCIS (Netherlands)

    de Beer, Jessica L.; van Ingen, Jakko; de Vries, Gerard; Erkens, Connie; Sebek, Maruschka; Mulder, Arnout; Sloot, Rosa; van den Brandt, Anne-Marie; Enaimi, Mimount; Kremer, Kristin; Supply, Philip; van Soolingen, Dick

    2013-01-01

    In order to switch from IS6110 and polymorphic GC-rich repetitive sequence (PGRS) restriction fragment length polymorphism (RFLP) to 24-locus variable-number tandem-repeat (VNTR) typing of Mycobacterium tuberculosis complex isolates in the national tuberculosis control program in The Netherlands, a

  16. Comparative Study of IS6110 Restriction Fragment Length Polymorphism and Variable-Number Tandem-Repeat Typing of Mycobacterium tuberculosis Isolates in the Netherlands, Based on a 5-Year Nationwide Survey

    NARCIS (Netherlands)

    Beer, J.L. de; Ingen, J. van; Vries, G. de; Erkens, C.; Sebek, M.; Mulder, A.; Sloot, R.; Brandt, A.M. van den; Enaimi, M.; Kremer, K.; Supply, P.; Soolingen, D. van

    2013-01-01

    In order to switch from IS6110 and polymorphic GC-rich repetitive sequence (PGRS) restriction fragment length polymorphism (RFLP) to 24-locus variable-number tandem-repeat (VNTR) typing of Mycobacterium tuberculosis complex isolates in the national tuberculosis control program in The Netherlands, a

  17. Finite-Length Diocotron Modes in a Non-neutral Plasma Column

    Science.gov (United States)

    Walsh, Daniel; Dubin, Daniel

    2017-10-01

    Diocotron modes are 2D distortions of a non-neutral plasma column that propagate azimuthally via E × B drifts. While the infinite-length theory of diocotron modes is well-understood for arbitrary azimuthal mode number l, the finite-length mode frequency is less developed (with some exceptions), and is naturally of relevance to experiments. In this poster, we present an approach to address finite length effects, such as temperature dependence of the mode frequency. We use a bounce-averaged solution to the Vlasov Equation, in which the Vlasov Equation is solved using action-angle variables of the unperturbed Hamiltonian. We write the distribution function as a Fourier series in the bounce-angle variable ψ, keeping only the bounce-averaged term. We demonstrate a numerical solution to this equation for a realistic plasma with a finite Debye Length, compare to the existing l = 1 theory, and discuss possible extensions of the existing theory to l ≠ 1 . Supported by NSF/DOE Partnership Grants PHY1414570 and DESC0002451.

  18. Modification and application of TOUGH2 as a variable-density, saturated-flow code and comparison to SWIFT II results

    International Nuclear Information System (INIS)

    Christian-Frear, T.L.; Webb, S.W.

    1995-01-01

    Human intrusion scenarios at the Waste Isolation Pilot Plant (WIPP) involve penetration of the repository and an underlying brine reservoir by a future borehole. Brine and gas from the brine reservoir and the repository may flow up the borehole and into the overlying Culebra formation, which is saturated with water containing different amounts of dissolved solids, resulting in a spatially varying density. Current modeling approaches involve perturbing a steady-state Culebra flow field by inflow of gas and/or brine from a breach borehole that has passed through the repository. Previous studies have simulated steady-state flow in the Culebra. One specific study by LaVenue et al. (1990) used the SWIFT II code, a single-phase flow and transport code, to develop the steady-state flow field. Because gas may also be present in the fluids from the intrusion borehole, a two-phase code such as TOUGH2 can be used to determine the effect that emitted fluids may have on the steady-state Culebra flow field. Thus a comparison between TOUGH2 and SWIFT II was prompted. In order to compare the two codes and to evaluate the influence of gas on flow in the Culebra, modifications were made to TOUGH2 by the authors to allow for element-specific values of permeability, porosity, and elevation. The analysis also used a new equation of state module for a water-brine-air mixture, EOS7 (Pruess, 1991), which was developed to simulate variable water densities by assuming a miscible mixture of water and brine phases, and which allows for element-specific brine concentration in the INCON file.

  19. Alignment-free Transcriptomic and Metatranscriptomic Comparison Using Sequencing Signatures with Variable Length Markov Chains.

    Science.gov (United States)

    Liao, Weinan; Ren, Jie; Wang, Kun; Wang, Shun; Zeng, Feng; Wang, Ying; Sun, Fengzhu

    2016-11-23

    The comparison between microbial sequencing data is critical to understand the dynamics of microbial communities. The alignment-based tools analyzing metagenomic datasets require reference sequences and read alignments. The available alignment-free dissimilarity approaches model the background sequences with Fixed Order Markov Chains (FOMC), yielding promising results for the comparison of microbial communities. However, in FOMC, the number of parameters grows exponentially with the order of the Markov Chain (MC). Under a fixed high order of MC, the parameters might not be accurately estimated owing to the limitation of sequencing depth. In our study, we investigate an alternative to FOMC that models background sequences with the data-driven Variable Length Markov Chain (VLMC) in metatranscriptomic data. The VLMC, originally designed for long sequences, was extended to apply to high-throughput sequencing reads, and strategies to estimate the corresponding parameters were developed. The flexible number of parameters in VLMC avoids estimating the vast number of parameters of a high-order MC under limited sequencing depth. Unlike the manual order selection in FOMC, VLMC determines the MC order adaptively. Several beta diversity measures based on VLMC were applied to compare the bacterial RNA-Seq and metatranscriptomic datasets. Experiments show that VLMC outperforms FOMC in modeling the background sequences of transcriptomic and metatranscriptomic samples. A software pipeline is available at https://d2vlmc.codeplex.com.
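
    To make the idea concrete, the sketch below shows one simple way to fit a VLMC to short reads: count every context up to a maximum order, then keep a context only when it is well supported and its next-symbol distribution differs enough from that of its parent (one symbol shorter) context. This is a minimal illustration, not the authors' pipeline; the function names, the pruning rule and all thresholds are assumptions.

        from collections import defaultdict
        from math import log

        def count_contexts(reads, max_order=4):
            # counts[context][symbol] = how often `symbol` follows `context`
            counts = defaultdict(lambda: defaultdict(int))
            for read in reads:
                for i in range(len(read)):
                    for k in range(max_order + 1):
                        if i - k < 0:
                            break
                        counts[read[i - k:i]][read[i]] += 1
            return counts

        def prune(counts, min_count=5, threshold=2.0):
            # Always keep the root context; keep a longer context only if it
            # is frequent and informative relative to its parent.
            kept = {"": dict(counts[""])}
            for ctx, nxt in counts.items():
                total = sum(nxt.values())
                if ctx == "" or total < min_count:
                    continue
                parent = counts[ctx[1:]]
                ptot = sum(parent.values())
                # count-scaled Kullback-Leibler divergence child vs. parent
                kl = sum(c * log((c / total) / (parent[s] / ptot))
                         for s, c in nxt.items())
                if kl > threshold:
                    kept[ctx] = dict(nxt)
            return kept

        model = prune(count_contexts(["ACGTACGT", "ACGTTGCA"]), min_count=2)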

  20. The linear programming bound for binary linear codes

    NARCIS (Netherlands)

    Brouwer, A.E.

    1993-01-01

    Combining Delsarte's (1973) linear programming bound with the information that certain weights cannot occur, new upper bounds for d_min(n, k), the maximum possible minimum distance of a binary linear code with given word length n and dimension k, are derived.
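
    For readers unfamiliar with the underlying machinery, the following sketch sets up the plain Delsarte linear program (without the weight-exclusion refinements this record adds) for binary codes of length n and minimum distance d. It is an illustrative toy using scipy, not the paper's computation.

        import numpy as np
        from math import comb
        from scipy.optimize import linprog

        def krawtchouk(n, k, x):
            # Binary Krawtchouk polynomial K_k(x; n)
            return sum((-1) ** j * comb(x, j) * comb(n - x, k - j)
                       for j in range(k + 1))

        def delsarte_lp_bound(n, d):
            # Variables A_1..A_n (distance distribution); A_i = 0 for 0 < i < d.
            # Maximize 1 + sum A_i subject to sum_i A_i K_k(i) >= -K_k(0).
            c = -np.ones(n)
            A_ub = [[-krawtchouk(n, k, i) for i in range(1, n + 1)]
                    for k in range(1, n + 1)]
            b_ub = [krawtchouk(n, k, 0) for k in range(1, n + 1)]
            bounds = [(0, 0) if i < d else (0, None) for i in range(1, n + 1)]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
            return 1.0 - res.fun

        print(delsarte_lp_bound(8, 4))   # about 20, the classical LP value for A(8, 4)

    Excluding weights known not to occur, as the record describes, amounts to forcing additional A_i to zero, which can only tighten the optimum.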

  1. Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization

    OpenAIRE

    Sassatelli, Lucile; Declercq, David

    2007-01-01

    In this paper, we propose to study and optimize a very general class of LDPC codes whose variable nodes belong to finite sets with different orders. We named this class of codes Hybrid LDPC codes. Although efficient optimization techniques exist for binary LDPC codes and more recently for non-binary LDPC codes, they both exhibit drawbacks due to different reasons. Our goal is to capitalize on the advantages of both families by building codes with binary (or small finite set order) and non-bin...

  2. Implementation of a tree algorithm in MCNP code for nuclear well logging applications.

    Science.gov (United States)

    Li, Fusheng; Han, Xiaogang

    2012-07-01

    The goal of this paper is to develop some modeling capabilities that are missing in the current MCNP code. These missing capabilities can greatly help with certain nuclear tool designs, such as a nuclear lithology/mineralogy spectroscopy tool. The new capabilities developed in this paper include the following: zone tally, neutron interaction tally, gamma-ray index tally and enhanced pulse-height tally. The patched MCNP code can also be used to compute the neutron slowing-down length and the thermal neutron diffusion length. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Cost-effective sequencing of full-length cDNA clones powered by a de novo-reference hybrid assembly.

    Science.gov (United States)

    Kuroshu, Reginaldo M; Watanabe, Junichi; Sugano, Sumio; Morishita, Shinichi; Suzuki, Yutaka; Kasahara, Masahiro

    2010-05-07

    Sequencing full-length cDNA clones is important to determine gene structures, including alternative splice forms, and provides valuable resources for experimental analyses to reveal the biological functions of coded proteins. However, previous approaches for sequencing cDNA clones were expensive or time-consuming, so a fast and efficient sequencing approach was needed. We developed a program, MuSICA 2, that assembles millions of short (36-nucleotide) reads collected from a single flow cell lane of an Illumina Genome Analyzer to shotgun-sequence approximately 800 human full-length cDNA clones. MuSICA 2 performs a hybrid assembly in which an external de novo assembler is run first and the result is then improved by reference alignment of shotgun reads. We compared the MuSICA 2 assembly with 200 pooled full-length cDNA clones finished independently by conventional primer-walking using Sanger sequencers. The exon-intron structure of the coding sequence was correct for more than 95% of the clones with coding sequence annotation when we excluded cDNA clones insufficiently represented in the shotgun library due to PCR failure (42 out of 200 clones excluded), and the nucleotide-level accuracy of coding sequences of those correct clones was over 99.99%. We also applied MuSICA 2 to full-length cDNA clones from Toxoplasma gondii, confirming that it is effective even for non-human species. The entire sequencing and shotgun assembly takes less than 1 week, and the consumables cost only approximately US$3 per clone, demonstrating a significant advantage over previous approaches.

  4. On the total number of genes and their length distribution in complete microbial genomes

    DEFF Research Database (Denmark)

    Skovgaard, Marie; Jensen, L.J.; Brunak, Søren

    2001-01-01

    In sequenced microbial genomes, some of the annotated genes are actually not protein-coding genes, but rather open reading frames that occur by chance. Therefore, the number of annotated genes is higher than the actual number of genes for most of these microbes. Comparison of the length distribution of the annotated genes with the length distribution of those matching a known protein reveals that too many short genes are annotated in many genomes. Here we estimate the true number of protein-coding genes for sequenced genomes. Although it is often claimed that Escherichia coli has about 4300 genes, we show that it probably has only ~3800 genes, and that a similar discrepancy exists for almost all published genomes.

  5. Turbulence closure for mixing length theories

    Science.gov (United States)

    Jermyn, Adam S.; Lesaffre, Pierre; Tout, Christopher A.; Chitre, Shashikumar M.

    2018-05-01

    We present an approach to turbulence closure based on mixing length theory with three-dimensional fluctuations against a two-dimensional background. This model is intended to be rapidly computable for implementation in stellar evolution software and to capture a wide range of relevant phenomena with just a single free parameter, namely the mixing length. We incorporate magnetic, rotational, baroclinic, and buoyancy effects exactly within the formalism of linear growth theories with non-linear decay. We treat differential rotation effects perturbatively in the corotating frame using a novel controlled approximation, which matches the time evolution of the reference frame to arbitrary order. We then implement this model in an efficient open source code and discuss the resulting turbulent stresses and transport coefficients. We demonstrate that this model exhibits convective, baroclinic, and shear instabilities as well as the magnetorotational instability. It also exhibits non-linear saturation behaviour, and we use this to extract the asymptotic scaling of various transport coefficients in physically interesting limits.

  6. Correcting length-frequency distributions for imperfect detection

    Science.gov (United States)

    Breton, André R.; Hawkins, John A.; Winkelman, Dana L.

    2013-01-01

    Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentage of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates was 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data
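
    The core of such a correction can be illustrated in a few lines: divide the raw count in each length class by that class's estimated capture probability, a Horvitz-Thompson style adjustment. The numbers below are hypothetical; a real application would take the capture probabilities, with their uncertainty, from a fitted mark-recapture model such as the Huggins model used here.

        import numpy as np

        def adjust_length_frequencies(counts, p_capture):
            # Horvitz-Thompson style correction: N_hat = n / p per length class
            counts = np.asarray(counts, dtype=float)
            p = np.asarray(p_capture, dtype=float)
            n_hat = counts / p
            relative_bias = (counts - n_hat) / n_hat   # negative whenever p < 1
            return n_hat, relative_bias

        # Hypothetical data: small fish are captured far less efficiently.
        counts = [12, 40, 55, 30]            # raw counts per length class
        p      = [0.12, 0.35, 0.55, 0.65]    # estimated capture probabilities
        n_hat, bias = adjust_length_frequencies(counts, p)
        print(n_hat, bias)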

  7. A PCR-based protocol to accurately size C9orf72 intermediate-length alleles.

    Science.gov (United States)

    Biasiotto, Giorgio; Archetti, Silvana; Di Lorenzo, Diego; Merola, Francesca; Paiardi, Giulia; Borroni, Barbara; Alberici, Antonella; Padovani, Alessandro; Filosto, Massimiliano; Bonvicini, Cristian; Caimi, Luigi; Zanella, Isabella

    2017-04-01

    Although large expansions of the non-coding GGGGCC repeat in C9orf72 gene are clearly defined as pathogenic for Amyotrophic Lateral Sclerosis (ALS) and Frontotemporal Lobar Degeneration (FTLD), intermediate-length expansions have also been associated with those and other neurodegenerative diseases. Intermediate-length allele sizing is complicated by intrinsic properties of current PCR-based methodologies, in that somatic mosaicism could be suspected. We designed a protocol that allows the exact sizing of intermediate-length alleles, as well as the identification of large expansions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. A restructuring of the CF/EDF packages for the MIDAS computer code

    International Nuclear Information System (INIS)

    Park, S.H.; Kim, K.R.; Kim, D.H.

    2004-01-01

    The CF and EDF packages, which allow the user to define functions of variables in a database and the usage of an external data file, have been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. To restructure the code, the data transferring methods of the current MELCOR code are modified and then partially adopted into the CF/EDF packages. The data structure of the current MELCOR code using FORTRAN77 makes it difficult to grasp the meaning of the variables, as pointers are used to define their addresses. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types without pointers, leading to an efficient memory treatment and an easy understanding of the code. The restructuring of the CF/EDF packages addressed in this paper includes module development and subroutine modification. The verification has been done by comparing the results of the modified code with those of the existing code, and the trends are almost identical. Therefore the same approach could be extended to the entire code package for code restructuring. It is expected that the code restructuring will accelerate the code's domestication thanks to a direct understanding of each variable and an easy implementation of modified or newly developed models. (author)

  9. Spike Code Flow in Cultured Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Shinichi Tamura

    2016-01-01

    Full Text Available We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted the short codes from spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the flow of the codes “1101” and “1011,” which are typical pseudorandom sequences such as those often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the “maximum cross-correlations” among neighboring electrodes, to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.
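
    A minimal sketch of the kind of measurement described, assuming the code occurrences at each electrode have already been binned into binary event trains; the function name, normalization and lag range are illustrative assumptions:

        import numpy as np

        def max_crosscorr(x, y, max_lag=50):
            # Normalized cross-correlation between two event trains, maximized
            # over lags; the sign of the best lag gives the flow direction.
            x = (x - x.mean()) / (x.std() + 1e-12)
            y = (y - y.mean()) / (y.std() + 1e-12)
            best, best_lag = -np.inf, 0
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    c = np.dot(x[lag:], y[:len(y) - lag]) / (len(x) - lag)
                else:
                    c = np.dot(x[:lag], y[-lag:]) / (len(x) + lag)
                if c > best:
                    best, best_lag = c, lag
            return best, best_lag

        rng = np.random.default_rng(0)
        a = (rng.random(1000) < 0.05).astype(float)
        b = np.roll(a, 7)                # the same train delayed by 7 bins
        print(max_crosscorr(b, a))       # the best lag comes out near +7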

  10. Spectral/spatial optical CDMA code based on Diagonal Eigenvalue Unity

    Science.gov (United States)

    Najjar, Monia; Jellali, Nabiha; Ferchichi, Moez; Rezig, Houria

    2017-11-01

    A new two-dimensional Diagonal Eigenvalue Unity (2D-DEU) code is developed for the spectral/spatial optical code division multiple access (OCDMA) system. It has a lower cross-correlation value compared to the two-dimensional diluted perfect difference (2D-DPD) and two-dimensional Extended Enhanced Double Weight (2D-Extended-EDW) codes. Also, for the same code length, the number of users that can be supported by the 2D-DEU code is higher than that provided by the other codes. The Bit Error Rate (BER) numerical analysis is developed by considering the effects of shot noise, phase induced intensity noise (PIIN), and thermal noise. The main result shows that the BER is strongly affected by PIIN at higher source power. The 2D-DEU code performance is compared with the 2D-DPD, 2D-Extended-EDW and two-dimensional multi-diagonals (2D-MD) codes. This comparison proves that the proposed 2D-DEU system outperforms the related codes.

  11. A restructuring of COR package for MIDAS computer code

    International Nuclear Information System (INIS)

    Park, S.H.; Kim, K.R.; Kim, D.H.

    2004-01-01

    The COR package, which calculates the thermal response of the core and the lower plenum internal structures and models the relocation of the core and lower plenum structural materials, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. To do this, the data transferring methods of the current MELCOR code are modified and adopted into the COR package. The data structure of the current MELCOR code using FORTRAN77 makes it difficult to grasp the meaning of the variables and also wastes memory. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to an efficient memory treatment and an easy understanding of the code. The restructuring of the COR package addressed in this paper includes module development and subroutine modification. The verification has been done by comparing the results of the modified code with those of the existing code. As the trends are similar to each other, it implies that the same approach could be extended to the entire code package. It is expected that the code restructuring will accelerate the code's domestication thanks to a direct understanding of each variable and an easy implementation of modified or newly developed models. (author)

  12. Computer Security: is your code sane?

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    How many of us write code? Software? Programs? Scripts? How many of us are properly trained in this and how well do we do it? Do we write functional, clean and correct code, without flaws, bugs and vulnerabilities*? In other words: are our codes sane?   Figuring out weaknesses is not that easy (see our quiz in an earlier Bulletin article). Therefore, in order to improve the sanity of your code, prevent common pitfalls, and avoid the bugs and vulnerabilities that can crash your code, or – worse – that can be misused and exploited by attackers, the CERN Computer Security team has reviewed its recommendations for checking the security compliance of your code. “Static Code Analysers” are stand-alone programs that can be run on top of your software stack, regardless of whether it uses Java, C/C++, Perl, PHP, Python, etc. These analysers identify weaknesses and inconsistencies including: employing undeclared variables; expressions resu...

  13. A New Video Coding Algorithm Using 3D-Subband Coding and Lattice Vector Quantization

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.H. [Taejon Junior College, Taejon (Korea, Republic of); Lee, K.Y. [Sung Kyun Kwan University, Suwon (Korea, Republic of)

    1997-12-01

    In this paper, we propose an efficient motion-adaptive three-dimensional (3D) video coding algorithm using 3D subband coding (3D-SBC) and lattice vector quantization (LVQ) for low bit rates. Instead of splitting input video sequences into a fixed number of subbands along the temporal axis, we decompose them into temporal subbands of variable size according to the motion in the frames. Each of the seven spatio-temporally split subbands is partitioned by a quadtree technique and coded with lattice vector quantization (LVQ). The simulation results show a 0.1~4.3 dB gain over H.261 in peak signal-to-noise ratio (PSNR) at a low bit rate (64 kbps). (author). 13 refs., 13 figs., 4 tabs.

  14. The amendment of the Labour Code

    Directory of Open Access Journals (Sweden)

    Jana Mervartová

    2012-01-01

    Full Text Available The amendment of the Labour Code, No. 365/2011 Coll., effective as from 1st January 2012, brings some fundamental changes to labour law. The amendment regulates the relation between the Labour Code and the Civil Code, and it also reformulates the principles of labour law relations. The basic period for fixed-term employment contracts is extended, and the frequency with which they may be concluded is limited. The length of the trial period and the amount of redundancy payment are graduated. An earlier legislative arrangement, under which an employee may be temporarily assigned to work for a different employer, has been reinstated. The number of hours permitted under an agreement to perform work is increased. The monetary compensation under a competitive clause is reduced. Other changes are made in the area of collective labour law. The author of the article points out the most important changes, compares the new provisions of the Labour Code with the former legal regulation, and evaluates their advantages and disadvantages. The main objective of the changes is to make labour law relations more flexible and to motivate employers to create new job openings. The amended provisions are aimed at reducing employers' expenses under the reform of public finances. Further changes to the Labour Code are also expected in connection with the forthcoming new Civil Code.

  15. A database of linear codes over F_13 with minimum distance bounds and new quasi-twisted codes from a heuristic search algorithm

    Directory of Open Access Journals (Sweden)

    Eric Z. Chen

    2015-01-01

    Full Text Available Error control codes have been widely used in data communications and storage systems. One central problem in coding theory is to optimize the parameters of a linear code and construct codes with the best possible parameters. There are tables of best-known linear codes over finite fields of sizes up to 9. Recently, there has been a growing interest in codes over $\mathbb{F}_{13}$ and other fields of size greater than 9. The main purpose of this work is to present a database of best-known linear codes over the field $\mathbb{F}_{13}$ together with upper bounds on the minimum distances. To find good linear codes to establish lower bounds on minimum distances, an iterative heuristic computer search algorithm is employed to construct quasi-twisted (QT) codes over the field $\mathbb{F}_{13}$ with high minimum distances. A large number of new linear codes have been found, improving previously best-known results. Tables of $[pm, m]$ QT codes over $\mathbb{F}_{13}$ with best-known minimum distances, as well as a table of lower and upper bounds on the minimum distances for linear codes of length up to 150 and dimension up to 6, are presented.
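
    The building block of such quasi-twisted codes is the "twistulant" (constacyclic) matrix: each row is the previous row shifted one place, with the wrapped entry multiplied by a constant. A minimal sketch over a prime field; the function name and the example parameters are illustrative, not taken from the paper:

        def twistulant(first_row, alpha, q=13):
            # m x m constacyclic block over GF(q): each row is the previous
            # one shifted right, with the wrapped entry multiplied by alpha
            # (alpha = 1 gives an ordinary circulant, i.e. a quasi-cyclic block).
            rows = [list(first_row)]
            for _ in range(len(first_row) - 1):
                prev = rows[-1]
                rows.append([(alpha * prev[-1]) % q] + prev[:-1])
            return rows

        # A generator matrix of a 1-generator QT code of length l*m is the
        # horizontal concatenation [B_1 | B_2 | ... | B_l] of such blocks.
        for row in twistulant([1, 2, 0, 5], alpha=3, q=13):
            print(row)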

  16. Survey of nuclear fuel-cycle codes

    International Nuclear Information System (INIS)

    Thomas, C.R.; de Saussure, G.; Marable, J.H.

    1981-04-01

    A two-month survey of nuclear fuel-cycle models was undertaken. This report presents the information forthcoming from the survey. Of the nearly thirty codes reviewed in the survey, fifteen of these codes have been identified as potentially useful in fulfilling the tasks of the Nuclear Energy Analysis Division (NEAD) as defined in their FY 1981-1982 Program Plan. Six of the fifteen codes are given individual reviews. The individual reviews address such items as the funding agency, the author and organization, the date of completion of the code, adequacy of documentation, computer requirements, history of use, variables that are input and forecast, type of reactors considered, part of fuel cycle modeled and scope of the code (international or domestic, long-term or short-term, regional or national). The report recommends that the Model Evaluation Team perform an evaluation of the EUREKA uranium mining and milling code

  17. Pulse length assessment of compact ignition tokamak designs

    International Nuclear Information System (INIS)

    Stotler, D.P.; Pomphrey, N.

    1989-07-01

    A time-dependent zero-dimensional code has been developed to assess the pulse length and auxiliary heating requirements of Compact Ignition Tokamak (CIT) designs. By taking a global approach to the calculation, parametric studies can be easily performed. The accuracy of the procedure is tested by comparing with the Tokamak Simulation Code which uses theory-based thermal diffusivities. A series of runs is carried out at various levels of energy confinement for each of three possible CIT configurations. It is found that for cases of interest, ignition or an energy multiplication factor Q ≳ 7 can be attained within the first half of the planned five-second flattop with 10–40 MW of auxiliary heating. These results are supported by analytic calculations. 18 refs., 7 figs., 2 tabs

  18. Rate-compatible protograph LDPC code families with linear minimum distance

    Science.gov (United States)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.

  19. Codon size reduction as the origin of the triplet genetic code.

    Directory of Open Access Journals (Sweden)

    Pavel V Baranov

    Full Text Available The genetic code appears to be optimized in its robustness to missense errors and frameshift errors. In addition, the genetic code is near-optimal in terms of its ability to carry information in addition to the sequences of encoded proteins. As evolution has no foresight, optimality of the modern genetic code suggests that it evolved from less optimal code variants. The length of codons in the genetic code is also optimal, as three is the minimal nucleotide combination that can encode the twenty standard amino acids. The apparent impossibility of transitions between codon sizes in a discontinuous manner during evolution has resulted in an unbending view that the genetic code was always triplet. Yet, recent experimental evidence on quadruplet decoding, as well as the discovery of organisms with ambiguous and dual decoding, suggest that the possibility of the evolution of triplet decoding from living systems with non-triplet decoding merits reconsideration and further exploration. To explore this possibility we designed a mathematical model of the evolution of primitive digital coding systems which can decode nucleotide sequences into protein sequences. These coding systems can evolve their nucleotide sequences via genetic events of Darwinian evolution, such as point-mutations. The replication rates of such coding systems depend on the accuracy of the generated protein sequences. Computer simulations based on our model show that decoding systems with codons of length greater than three spontaneously evolve into predominantly triplet decoding systems. Our findings suggest a plausible scenario for the evolution of the triplet genetic code in a continuous manner. This scenario suggests an explanation of how protein synthesis could be accomplished by means of long RNA-RNA interactions prior to the emergence of the complex decoding machinery, such as the ribosome, that is required for stabilization and discrimination of otherwise weak triplet codon

  20. Blind Recognition of Binary BCH Codes for Cognitive Radios

    Directory of Open Access Journals (Sweden)

    Jing Zhou

    2016-01-01

    Full Text Available A novel algorithm for the blind recognition of Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed to solve the problem of Adaptive Coding and Modulation (ACM) in cognitive radio systems. The recognition algorithm is based on soft-decision situations. The code length is first estimated by comparing the Log-Likelihood Ratios (LLRs) of the syndromes, which are obtained according to the minimum binary parity check matrices of different primitive polynomials. After that, by comparing the LLRs of different minimal polynomials, the code roots and generator polynomial are reconstructed. Compared with some previous approaches, our algorithm yields better performance even at very low Signal-to-Noise Ratios (SNRs), with lower calculation complexity. Simulation results show the efficiency of the proposed algorithm.

  1. Investigation of Navier-Stokes Code Verification and Design Optimization

    Science.gov (United States)

    Vaidyanathan, Rajkumar

    2004-01-01

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification, a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. In this dissertation, the finite volume (FV) formulation is focused on. The interplay between these concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-ε turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design, whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi-objective optimization

  2. New Channel Coding Methods for Satellite Communication

    Directory of Open Access Journals (Sweden)

    J. Sebesta

    2010-04-01

    Full Text Available This paper deals with new progressive channel coding methods for short message transmission via a satellite transponder using a predetermined frame length. The key benefits of this contribution are the modification and implementation of a new turbo code, the utilization of its unique features, and the application of methods for bit error rate estimation and an algorithm for output message reconstruction. The mentioned methods allow error-free communication at a very low Eb/N0 ratio; they have been adopted for satellite communication, but they can also be applied to other systems working with very low Eb/N0 ratios.

  3. A restructuring of CF package for MIDAS computer code

    International Nuclear Information System (INIS)

    Park, S. H.; Kim, K. R.; Kim, D. H.; Cho, S. W.

    2004-01-01

    The CF package, which evaluates user-specified 'control functions' and applies them to define or control various aspects of the computation, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. To do this, the data transferring methods of the current MELCOR code are modified and adopted into the CF package. The data structure of the current MELCOR code using FORTRAN77 makes the meaning of the variables difficult to grasp and wastes memory; the difficulty is compounded because the CF package's data consist of location information for other packages' data. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to an efficient memory treatment and an easy understanding of the code. The restructuring of the CF package addressed in this paper includes module development and subroutine modification, and covers MELGEN, which generates the data file, as well as MELCOR, which performs the calculation. The verification has been done by comparing the results of the modified code with those from the existing code. As the trends are similar to each other, it suggests that the same approach could be extended to the entire code package. It is expected that the code restructuring will accelerate the code's domestication thanks to a direct understanding of each variable and an easy implementation of modified or newly developed models

  4. Design and performance analysis for several new classes of codes for optical synchronous CDMA and for arbitrary-medium time-hopping synchronous CDMA communication systems

    Science.gov (United States)

    Kostic, Zoran; Titlebaum, Edward L.

    1994-08-01

    New families of spread-spectrum codes are constructed, that are applicable to optical synchronous code-division multiple-access (CDMA) communications as well as to arbitrary-medium time-hopping synchronous CDMA communications. Proposed constructions are based on the mappings from integer sequences into binary sequences. We use the concept of number theoretic quadratic congruences and a subset of Reed-Solomon codes similar to the one utilized in the Welch-Costas frequency-hop (FH) patterns. The properties of the codes are as good as or better than the properties of existing codes for synchronous CDMA communications: Both the number of code-sequences within a single code family and the number of code families with good properties are significantly increased when compared to the known code designs. Possible applications are presented. To evaluate the performance of the proposed codes, a new class of hit arrays called cyclical hit arrays is recalled, which give insight into the previously unknown properties of the few classes of number theoretic FH patterns. Cyclical hit arrays and the proposed mappings are used to determine the exact probability distribution functions of random variables that represent interference between users of a time-hopping or optical CDMA system. Expressions for the bit error probability in multi-user CDMA systems are derived as a function of the number of simultaneous CDMA system users, the length of signature sequences and the threshold of a matched filter detector. The performance results are compared with the results for some previously known codes.
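
    As an illustration of the quadratic-congruence idea underlying such constructions, the sketch below builds a time-hopping placement pattern y(k) = a·k² mod p for a prime p and expands it into a binary matrix with one pulse per frame. This is a generic textbook-style construction, not necessarily the exact mapping proposed in the paper:

        def quadratic_congruence_pattern(a, p):
            # Placement operator y(k) = a*k^2 mod p (p an odd prime, a != 0),
            # expanded into a p x p binary matrix: row k has a single pulse
            # in time slot y(k).
            assert a % p != 0
            placements = [(a * k * k) % p for k in range(p)]
            return placements, [[int(slot == y) for slot in range(p)]
                                for y in placements]

        placements, pattern = quadratic_congruence_pattern(a=1, p=7)
        print(placements)   # [0, 1, 4, 2, 2, 4, 1]: slot k^2 mod 7 in frame k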

  5. PERMUTATION-BASED POLYMORPHIC STEGO-WATERMARKS FOR PROGRAM CODES

    Directory of Open Access Journals (Sweden)

    Denys Samoilenko

    2016-06-01

    Full Text Available Purpose: One of the most topical trends in program code protection is code marking. The problem consists in creating digital “watermarks” which allow distinguishing different copies of the same program code. Such marks could be useful for authorship protection, for numbering code copies, for monitoring program propagation, and for information security purposes in client-server communication processes. Methods: We used methods of digital steganography adapted for program codes as text objects. The same-shape-symbols method was transformed into a same-semantic-element method owing to features of codes that make them different from ordinary texts. We use a dynamic principle of mark formation, which makes the codes polymorphic. Results: We examined the combinatorial capacity of the permutations possible in program codes. As a result, it was shown that a set of 5-7 polymorphic variables is suitable for most modern network applications. Mark creation and restoration algorithms were proposed and discussed. The main algorithm is based on full and partial permutations of variable names and their declaration order. The algorithm for partial permutation enumeration was optimized for computational complexity. PHP code fragments which realize the algorithms were listed. Discussion: The method proposed in this work allows distinguishing each client-server connection. If a clone of some network resource is found, the method can provide information about the included marks and thereby data on the IP address, date and time, and authentication information of the client that copied the resource. Usage of polymorphic stego-watermarks should improve information security indexes in network communications.
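
    The paper's PHP fragments are not reproduced here, but the underlying combinatorial trick is easy to sketch: with n variable names there are n! orderings, so an integer watermark in [0, n!) can be embedded as a declaration order and read back. A hedged Python illustration; the names and the canonical-ordering convention are assumptions:

        from math import factorial

        def id_to_order(mark_id, names):
            # Map an integer watermark to a declaration order of `names`
            # (factorial number system / Lehmer code). `names` must be given
            # in an agreed canonical order, e.g. sorted.
            pool, order = list(names), []
            assert 0 <= mark_id < factorial(len(pool))
            for i in range(len(pool), 0, -1):
                index, mark_id = divmod(mark_id, factorial(i - 1))
                order.append(pool.pop(index))
            return order

        def order_to_id(order):
            # Recover the watermark from an observed declaration order,
            # assuming the canonical order is the sorted name list.
            pool, mark_id = sorted(order), 0
            for name in order:
                index = pool.index(name)
                mark_id += index * factorial(len(pool) - 1)
                pool.pop(index)
            return mark_id

        names = sorted(["$a", "$b", "$c", "$d", "$e", "$f", "$g"])
        order = id_to_order(1234, names)
        assert order_to_id(order) == 1234

    Seven names already distinguish 7! = 5040 copies, consistent with the paper's conclusion that 5-7 polymorphic variables suffice for most modern network applications.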

  6. Development of a general coupling interface for the fuel performance code transuranus tested with the reactor dynamic code DYN3D

    International Nuclear Information System (INIS)

    Holt, L.; Rohde, U.; Seidl, M.; Schubert, A.; Van Uffelen, P.

    2013-01-01

    Several institutions plan to couple the fuel performance code TRANSURANUS, developed by the European Institute for Transuranium Elements, with their own codes. One of these codes is the reactor dynamics code DYN3D maintained by the Helmholtz-Zentrum Dresden-Rossendorf. DYN3D was developed originally for VVER-type reactors and was later extended to western-type reactors. Usually, the fuel rod behavior is modeled in thermal-hydraulics and neutronics codes in a simplified manner. The main idea of this coupling is to describe the fuel rod behavior in the frame of core safety analysis in a more detailed way, e.g. including the influence of the high burn-up structure, geometry changes and fission gas release. This allows one to benefit from the improved computational power and software achieved over the last two decades. The coupling interface was developed in a general way from the beginning, so it can easily be used by other codes for a coupling with TRANSURANUS. The user can choose between a one-way and a two-way online coupling option. For a one-way online coupling, DYN3D provides only the time-dependent rod power and thermal-hydraulic conditions to TRANSURANUS, and the fuel performance code does not transfer any variable back to DYN3D. In a two-way online coupling, TRANSURANUS in addition transfers parameters like fuel temperature and cladding temperature back to DYN3D. This list of variables can easily be extended by geometric and further variables of interest. First results of the code system DYN3D-TRANSURANUS will be presented for a control rod ejection transient in a modern western-type reactor. Pre-analyses already show that detailed fuel rod behavior modeling will influence the thermal hydraulics and hence also the neutronics, due to the Doppler reactivity effect of the fuel temperature. The coupled code system therefore has the potential to improve the assessment of safety criteria. The developed code system DYN3D-TRANSURANUS can also be used

  7. STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2012-03-01

    Full Text Available The boundaries between block and convolutional codes have become diffuse after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for decoding convolutional codes have been applied to decoding certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known. Several of them have been used in applications ranging from deep-space communication to error control in storage systems. But the primary difficulty with applying the Viterbi or BCJR algorithms to the decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity. This is because the decoding effort is fixed and grows with block length, and thus only short block length codes can be used. Therefore, an important practical question is whether a suboptimal, realizable soft-decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near-optimum decoding will be used as motivation for the investigation of different soft-decision decoding methods for linear block codes, which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft-decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10^-5 with respect to conventional hard-decision decoding are demonstrated for these codes.

  8. Cerebellar Nuclear Neurons Use Time and Rate Coding to Transmit Purkinje Neuron Pauses.

    Science.gov (United States)

    Sudhakar, Shyam Kumar; Torben-Nielsen, Benjamin; De Schutter, Erik

    2015-12-01

    Neurons of the cerebellar nuclei convey the final output of the cerebellum to their targets in various parts of the brain. Within the cerebellum their direct upstream connections originate from inhibitory Purkinje neurons. Purkinje neurons have a complex firing pattern of regular spikes interrupted by intermittent pauses of variable length. How can the cerebellar nucleus process this complex input pattern? In this modeling study, we investigate different forms of Purkinje neuron simple spike pause synchrony and its influence on candidate coding strategies in the cerebellar nuclei. That is, we investigate how different alignments of synchronous pauses in synthetic Purkinje neuron spike trains affect either time-locking or rate-changes in the downstream nuclei. We find that Purkinje neuron synchrony is mainly represented by changes in the firing rate of cerebellar nuclei neurons. Pause beginning synchronization produced a unique effect on nuclei neuron firing, while the effect of pause ending and pause overlapping synchronization could not be distinguished from each other. Pause beginning synchronization produced better time-locking of nuclear neurons for short length pauses. We also characterize the effect of pause length and spike jitter on the nuclear neuron firing. Additionally, we find that the rate of rebound responses in nuclear neurons after a synchronous pause is controlled by the firing rate of Purkinje neurons preceding it.

  10. On the progress towards probabilistic basis for deterministic codes

    International Nuclear Information System (INIS)

    Ellyin, F.

    1975-01-01

    Fundamental arguments for a probabilistic basis of codes are presented. A class of code formats is outlined in which explicit statistical measures of the uncertainty of design variables are incorporated. The format looks very much like present (deterministic) codes except for having a probabilistic background. An example is provided whereby the design factors are plotted against the safety index, the probability of failure, and the risk of mortality. The safety level of the present codes is also indicated. A decision regarding the new probabilistically based code parameters could thus be made with full knowledge of the implied consequences
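
    The quantitative link the record refers to is, in first-order reliability theory, the relation P_f = Φ(−β) between the safety index β and the probability of failure. A small illustration for the common case of independent, normally distributed load and resistance; all numbers are hypothetical:

        from math import sqrt
        from scipy.stats import norm

        def safety_index(mean_resistance, mean_load, sd_resistance, sd_load):
            # Safety margin M = R - S for independent normal R and S:
            # beta = E[M] / sd[M]
            return (mean_resistance - mean_load) / sqrt(sd_resistance**2 + sd_load**2)

        def failure_probability(beta):
            # First-order relation between safety index and failure probability
            return norm.cdf(-beta)

        beta = safety_index(500.0, 300.0, 40.0, 30.0)   # hypothetical design
        print(beta, failure_probability(beta))          # 4.0, about 3.2e-05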

  11. ANSPipe: An IBM-PC interactive code for pipe-break assessment

    International Nuclear Information System (INIS)

    Fullwood, R.R.; Harrington, M.

    1988-01-01

    The advanced neutron source (ANS) being designed at Oak Ridge National Laboratory will be the world's highest-flux neutron source and the best facility for associated basic and applied research. The ANSPipe code was written as an aid for piping configuration and material selection to enhance safety and availability. The primary calculation is based on the Thomas model, which models pipe leak or break probabilities as proportional to the length of the segment and the diameter and to the inverse square of the wall thickness. This scaling, based on experience, is adjusted for radiation effects, using the Regulatory Guide 1.99 model, and for cyclic fatigue, stress corrosion, and inspection, using adaptations from the PRAISE-B code. The key to an ANSPipe analysis is the definition of the pipe segments. A pipe segment is defined as a length of pipe in which all the parameters affecting the pipe are constant or reasonably so. Thus, a segment would be a length of pipe of constant diameter, thickness, material type, internal pressure, flux distribution, stress, and submergence or nonsubmergence
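
    A hedged sketch of that scaling, with a purely illustrative base rate and neutral adjustment factors; the actual ANSPipe coefficients and adjustment models are not reproduced here:

        def segment_break_frequency(length_m, diameter_m, wall_m,
                                    base_rate=1.0e-9,
                                    radiation=1.0, fatigue=1.0, inspection=1.0):
            # Thomas-style scaling: likelihood proportional to L * D / t^2,
            # multiplied by adjustment factors for radiation embrittlement,
            # cyclic fatigue / stress corrosion, and in-service inspection.
            geometry = length_m * diameter_m / wall_m**2
            return base_rate * geometry * radiation * fatigue * inspection

        # Hypothetical 10 m segment of 0.3 m pipe with 10 mm walls:
        print(segment_break_frequency(10.0, 0.3, 0.010))   # 3.0e-05, illustrative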

  12. Quantum BCH Codes Based on Spectral Techniques

    International Nuclear Information System (INIS)

    Guo Ying; Zeng Guihua

    2006-01-01

    When the time variable in quantum signal processing is discrete, the Fourier transform exists on the vector space of n-tuples over the Galois field F_2, which plays an important role in the investigation of quantum signals. By using Fourier transforms, the ideas of quantum coding theory can be described in a setting that is much different from that seen thus far. Quantum BCH codes can be defined as codes whose quantum states have certain specified consecutive spectral components equal to zero, and the error-correcting ability is likewise described by the number of consecutive zeros. Moreover, the decoding of quantum codes can be described spectrally, with more efficiency.

  13. Simplified modeling and code usage in the PASC-3 code system by the introduction of a programming environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.L.; Slobben, J.

    1991-06-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219 group cross section library derived from JEF-1 of which some benchmark results are presented. By the addition of the UNIPASC work environment the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  14. Optimization of fracture length in gas/condensate reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Mohan, J.; Sharma, M.M.; Pope, G.A. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Texas Univ., Austin, TX (United States)

    2006-07-01

    A common practice that improves the productivity of gas-condensate reservoirs is hydraulic fracturing. Two important variables that determine the effectiveness of hydraulic fractures are fracture length and fracture conductivity. Although there are no simple guidelines for the optimization of fracture length and the factors that affect it, it is preferable to have an optimum fracture length for a given proppant volume in order to maximize productivity. An optimization study was presented in which fracture length was estimated at wells where productivity was maximized. An analytical expression that takes into account non-Darcy flow and condensate banking was derived. This paper also reviewed the hydraulic fracturing process and discussed previous simulation studies that investigated the effects of well spacing and fracture length on well productivity in low permeability gas reservoirs. The compositional simulation study and results and discussion were also presented. The analytical expression for optimum fracture length, analytical expression with condensate dropout, and equations for the optimum fracture length with non-Darcy flow in the fracture were included in an appendix. The Computer Modeling Group's GEM simulator, an equation-of-state compositional simulator, was used in this study. It was concluded that for cases with non-Darcy flow, the optimum fracture lengths are lower than those obtained with Darcy flow. 18 refs., 5 tabs., 22 figs., 1 appendix.

  15. Performance of JPEG Image Transmission Using Proposed Asymmetric Turbo Code

    Directory of Open Access Journals (Sweden)

    Siddiqi Mohammad Umar

    2007-01-01

    Full Text Available This paper gives the results of a simulation study on the performance of JPEG image transmission over AWGN and Rayleigh fading channels using typical and proposed asymmetric turbo codes for error control coding. The baseline JPEG algorithm is used to compress a QCIF (“Suzie”) image. The recursive systematic convolutional (RSC) encoder with generator polynomials (13/11) in decimal and a 3G interleaver are used for the typical WCDMA and CDMA2000 turbo codes. The proposed asymmetric turbo code uses generator polynomials (13/11; 13/9) in decimal and a code-matched interleaver. The effect of the interleaver in the proposed asymmetric turbo code is studied using the weight distribution and simulation. The simulation results and the performance bound for the proposed asymmetric turbo code, for the given frame length and code rate with a Log-MAP decoder over an AWGN channel, are compared with the typical system. From the simulation results, it is observed that image transmission using the proposed asymmetric turbo code performs better than that with the typical system.
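
    For orientation, a rate-1/2 RSC constituent encoder with the quoted (13/11) generator pair can be sketched as follows. The bit-ordering convention and the choice of which polynomial provides the feedback are assumptions of this sketch, not details taken from the paper:

        def rsc_encode(bits, g_ff=0o13, g_fb=0o11, K=4):
            # Rate-1/2 recursive systematic convolutional encoder.
            # Generators are octal; the MSB is taken as the coefficient of D^0.
            ff = [(g_ff >> (K - 1 - i)) & 1 for i in range(K)]   # 13 -> 1,0,1,1
            fb = [(g_fb >> (K - 1 - i)) & 1 for i in range(K)]   # 11 -> 1,0,0,1
            reg = [0] * (K - 1)                                  # shift register
            out = []
            for u in bits:
                a = (u + sum(fb[i + 1] * reg[i] for i in range(K - 1))) % 2
                p = (ff[0] * a + sum(ff[i + 1] * reg[i] for i in range(K - 1))) % 2
                out.append((u, p))        # (systematic bit, parity bit)
                reg = [a] + reg[:-1]
            return out

        print(rsc_encode([1, 0, 1, 1, 0, 0]))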

  16. Systematic correlation of environmental exposure and physiological and self-reported behaviour factors with leukocyte telomere length.

    Science.gov (United States)

    Patel, Chirag J; Manrai, Arjun K; Corona, Erik; Kohane, Isaac S

    2017-02-01

    It is hypothesized that environmental exposures and behaviour influence telomere length, an indicator of cellular ageing. We systematically associated 461 indicators of environmental exposures, physiology and self-reported behaviour with telomere length in data from the US National Health and Nutrition Examination Survey (NHANES) in 1999-2002. Further, we tested whether factors identified in the NHANES participants are also correlated with gene expression of telomere length modifying genes. We correlated 461 environmental exposures, behaviours and clinical variables with telomere length, using survey-weighted linear regression, adjusting for sex, age, age squared, race/ethnicity, poverty level, education and born outside the USA, and estimated the false discovery rate to adjust for multiple hypotheses. We conducted a secondary analysis to investigate the correlation between identified environmental variables and gene expression levels of telomere-associated genes in publicly available gene expression samples. After correlating 461 variables with telomere length, we found 22 variables significantly associated with telomere length after adjustment for multiple hypotheses. Of these variables, 14 were associated with longer telomeres, including biomarkers of polychlorinated biphenyls (PCBs; 0.1 to 0.2 standard deviation (SD) increase for a 1 SD increase in PCB level). These findings suggest that environmental exposures and chronic disease-related risk factors may play a role in telomere length. Our secondary analysis found no evidence of association between PCBs/smoking and gene expression of telomere-associated genes. All correlations between exposures, behaviours and clinical factors and changes in telomere length will require further investigation regarding the biological influence of exposure. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association
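
    The analysis pattern, an exposure-wide scan with covariate adjustment and FDR control, can be sketched as below. Survey-design-based inference is approximated here by weighted least squares, and every column name is hypothetical:

        import statsmodels.formula.api as smf
        from statsmodels.stats.multitest import multipletests

        COVARIATES = ("sex + age + I(age**2) + ethnicity + poverty"
                      " + education + foreign_born")

        def exposure_scan(df, exposures, weight_col="survey_weight"):
            # Regress telomere length on each exposure in turn, with the same
            # adjustment set; then control the false discovery rate.
            results = []
            for x in exposures:
                fit = smf.wls("telomere_length ~ " + x + " + " + COVARIATES,
                              data=df, weights=df[weight_col]).fit()
                results.append((x, fit.params[x], fit.pvalues[x]))
            pvals = [p for _, _, p in results]
            reject, qvals, _, _ = multipletests(pvals, method="fdr_bh")
            return [(x, beta, p, q, hit)
                    for (x, beta, p), q, hit in zip(results, qvals, reject)]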

  17. A restructuring of RN1 package for MIDAS computer code

    International Nuclear Information System (INIS)

    Park, S. H.; Kim, D. H.; Kim, K. R.

    2003-01-01

    The RN1 package, which is one of the two fission-product-related packages in MELCOR, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. To do this, the data transferring methods of the current MELCOR code are modified and adopted into the RN1 package. The data structure of the current MELCOR code using FORTRAN77 makes it difficult to grasp the meaning of the variables and also wastes memory. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to an efficient memory treatment and an easy understanding of the code. The restructuring of the RN1 package addressed in this paper includes module development and subroutine modification, and covers MELGEN, which generates the data file, as well as MELCOR, which performs the calculation. The verification has been done by comparing the results of the modified code with those from the existing code. As the trends are similar to each other, it suggests that the same approach could be extended to the entire code package. It is expected that the code restructuring will accelerate the code's domestication thanks to a direct understanding of each variable and an easy implementation of modified or newly developed models

  18. A restructuring of RN2 package for MIDAS computer code

    International Nuclear Information System (INIS)

    Park, S. H.; Kim, D. H.

    2003-01-01

    The RN2 package, which is one of the two fission-product-related packages in MELCOR, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. To do this, the data transferring methods of the current MELCOR code are modified and adopted into the RN2 package. The data structure of the current MELCOR code using FORTRAN77 makes it difficult to grasp the meaning of the variables and also wastes memory. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to an efficient memory treatment and an easy understanding of the code. The restructuring of the RN2 package addressed in this paper includes module development and subroutine modification, and covers MELGEN, which generates the data file, as well as MELCOR, which performs the calculation. The validation has been done by comparing the results of the modified code with those from the existing code. As the trends are similar to each other, it suggests that the same approach could be extended to the entire code package. It is expected that the code restructuring will accelerate the code's domestication thanks to a direct understanding of each variable and an easy implementation of modified or newly developed models

  19. Biopsychosocial determinants of pregnancy length and fetal growth.

    Science.gov (United States)

    St-Laurent, Jennifer; De Wals, Philippe; Moutquin, Jean-Marie; Niyonsenga, Theophile; Noiseux, Manon; Czernis, Loretta

    2008-05-01

    The causes and mechanisms related to preterm delivery and intrauterine growth restriction are poorly understood. Our objective was to assess the direct and indirect effects of psychosocial and biomedical factors on the duration of pregnancy and fetal growth. A self-administered questionnaire was distributed to pregnant women attending prenatal ultrasound clinics in nine hospitals in the Montérégie region in the province of Quebec, Canada, from November 1997 to May 1998. Prenatal questionnaires were linked with birth certificates. Theoretical models explaining pregnancy length and fetal growth were developed and tested, using path analysis. In order to reduce the number of variables from the questionnaire, a principal component analysis was performed, and the three most important new dimensions were retained as explanatory variables in the final models. Data were available for 1602 singleton pregnancies. The biophysical score, covering both maternal age and the pre-pregnancy body mass index, was the only variable statistically associated with pregnancy length. Smoking, obstetric history, maternal health and biophysical indices were direct predictors of fetal growth. Perceived stress, social support and self-esteem were not directly related to pregnancy outcomes, but were determinants of smoking and the above-mentioned biomedical variables. More studies are needed to identify the mechanisms by which adverse psychosocial factors are translated into adverse biological effects.

  20. Length and elasticity of side reins affect rein tension at trot.

    Science.gov (United States)

    Clayton, Hilary M; Larson, Britt; Kaiser, LeeAnn J; Lavagnino, Michael

    2011-06-01

    This study investigated the horse's contribution to tension in the reins. The experimental hypotheses were that tension in side reins (1) increases biphasically in each trot stride, (2) changes inversely with rein length, and (3) changes with the elasticity of the reins. Eight riding horses wearing a bit and bridle trotted in hand at consistent speed in a straight line, and three types of side reins (inelastic, stiff elastic, compliant elastic) were evaluated in random order at long, neutral, and short lengths. Strain gauge transducers (240 Hz) measured minimal, maximal and mean rein tension, rate of loading and impulse. The effects of rein type and length were evaluated using ANOVA with Bonferroni post hoc tests. Rein tension oscillated in a regular pattern with a peak during each diagonal stance phase. Within each rein type, minimal, maximal and mean tensions were higher with shorter reins. At neutral or short lengths, minimal tension increased and maximal tension decreased with the elasticity of the reins. Short, inelastic reins had the highest maximal tension and rate of loading. Since the tension variables respond differently to rein elasticity at different lengths, it is recommended that a set of variables representing different aspects of rein tension be reported. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Ultrasonographic assessment of renal length in 310 Turkish children ...

    African Journals Online (AJOL)

    Ultrasonography is a non-invasive modality that can be used to measure RL.[2] ... cases were selected for inclusion in the study. Ultrasonography was ... Linear regression equations for predicting a variable (renal length) from independent ...

  2. Quantum optical coherence can survive photon losses using a continuous-variable quantum erasure-correcting code

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Sabuncu, Metin; Huck, Alexander

    2010-01-01

    A fundamental requirement for enabling fault-tolerant quantum information processing is an efficient quantum error-correcting code that robustly protects the involved fragile quantum states from their environment. Just as classical error-correcting codes are indispensable in today's information...... technologies, it is believed that quantum error-correcting code will play a similarly crucial role in tomorrow's quantum information systems. Here, we report on the experimental demonstration of a quantum erasure-correcting code that overcomes the devastating effect of photon losses. Our quantum code is based...... on linear optics, and it protects a four-mode entangled mesoscopic state of light against erasures. We investigate two approaches for circumventing in-line losses, and demonstrate that both approaches exhibit transmission fidelities beyond what is possible by classical means. Because in-line attenuation...

  3. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    OpenAIRE

    Sandeep Kakde; Atish Khobragade; Shrikant Ambatkar; Pranay Nandanwar

    2017-01-01

    For binary fields and long code lengths, Low Density Parity Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error correction performance and therefore enlarge the design space for communication systems. In this paper, we have compared different digital modulation techniques and found that the BPSK modulation technique is better than the other modulation techniques in terms of BER. It also gives the error performance of the LDPC decoder over an AWGN channel using the Min-Sum algori...
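
    Since the abstract above centers on the Min-Sum algorithm, a minimal sketch of its check-node update may help. The rule below is the standard textbook min-sum step; the three-message example is made up for illustration, not taken from the paper.

```python
import numpy as np

def check_node_update(llr_in):
    """Min-sum check-node rule: the message on each edge is the product of
    the signs times the minimum magnitude of all *other* incoming messages."""
    llr_in = np.asarray(llr_in, dtype=float)
    out = np.empty_like(llr_in)
    for i in range(len(llr_in)):
        others = np.delete(llr_in, i)
        sign = np.prod(np.sign(others))
        out[i] = sign * np.min(np.abs(others))
    return out

# Toy example: three variable-to-check messages (LLRs) entering one check node.
print(check_node_update([+2.0, -0.5, +1.5]))  # -> [-0.5, +1.5, -0.5]
```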

  4. Leukocyte Telomere Length and Cognitive Function in Older Adults

    Directory of Open Access Journals (Sweden)

    Emily Frith

    2018-04-01

    Full Text Available We evaluated the specific association between leukocyte telomere length and cognitive function among a national sample of the broader U.S. older adult population. Data from the 1999-2002 National Health and Nutrition Examination Survey (NHANES) were used to identify 1,722 adults, between 60-85 years, with complete data on selected study variables. DNA was extracted from whole blood via the LTL assay, which is administered using quantitative polymerase chain reaction to measure telomere length relative to standard reference DNA (T/S ratio). Average telomere length was recorded, with two to three assays performed to control for individual variability. The DSST (Digit Symbol Substitution Test) was used to assess participant executive cognitive functioning tasks of pairing and free recall. Individuals were excluded if they had been diagnosed with coronary artery disease, congestive heart failure, heart attack or stroke at the baseline assessment. Leukocyte telomere length was associated with higher cognitive performance, independent of gender, race-ethnicity, physical activity status, body mass index and other covariates. In this sample, there was a strong association between LTL and cognition; for every 1 T/S ratio increase in LTL, there was a corresponding 9.9 unit increase in the DSST (β = 9.9; 95% CI: 5.6-14.2; P < 0.05). [JCBPR 2018; 7(1): 14-18]

  5. A point kernel shielding code, PKN-HP, for high energy proton incident

    Energy Technology Data Exchange (ETDEWEB)

    Kotegawa, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1996-06-01

    A point kernel integral technique code, PKN-HP, and the related thick-target neutron yield data have been developed to calculate neutron and secondary gamma-ray dose equivalents in ordinary concrete and iron shields, in a 3-dimensional geometry, for neutrons produced by 100 MeV-10 GeV protons incident on fully stopping-length C, Cu and U-238 targets. Comparisons among the results of the present code, other calculation techniques, and measured values showed the usefulness of the code. (author)

  6. On the subfield subcodes of Hermitian codes

    DEFF Research Database (Denmark)

    Pinero, Fernando; Janwa, Heeralal

    2014-01-01

    We present a fast algorithm using Gröbner basis to compute the dimensions of subfield subcodes of Hermitian codes. With these algorithms we are able to compute the exact values of the dimension of all subfield subcodes up to q ≤ 32 and length up to 2^15. We show that some of the subfield subcodes ...

  7. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general...... 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given....... The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...

  8. Circular codes revisited: a statistical approach.

    Science.gov (United States)

    Gonzalez, D L; Giannerini, S; Rosa, R

    2011-04-21

    In 1996 Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has attracted great interest and undergone rapid development. In this paper we discuss some theoretical issues related to the synchronization properties of coding sequences and circular codes, with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach in order to try to answer different questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C(3) maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability but, still, there exists great variability among sequences. Second, we focus on this code and explore the role played by the proportions of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored so as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes to reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.
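
    As a concrete illustration of the "covering capability" studied above, the sketch below computes the fraction of codons in a sequence that belong to a given trinucleotide code, per reading frame. The three-codon set and the sequence are invented placeholders, not the 20-codon Arquès-Michel code.

```python
def frame_coverage(seq, code, frame):
    """Fraction of complete codons in the given reading frame (0, 1 or 2)
    that are members of the trinucleotide code."""
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    if not codons:
        return 0.0
    return sum(c in code for c in codons) / len(codons)

# Hypothetical three-codon code and sequence, for illustration only.
X = {"AAC", "GAG", "GTC"}
s = "AACGAGGTCAACTTT"
for f in range(3):
    print(f, round(frame_coverage(s, X, f), 2))  # frame 0 covered best
```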

  9. A combined paging alert and web-based instrument alters clinician behavior and shortens hospital length of stay in acute pancreatitis.

    Science.gov (United States)

    Dimagno, Matthew J; Wamsteker, Erik-Jan; Rizk, Rafat S; Spaete, Joshua P; Gupta, Suraj; Sahay, Tanya; Costanzo, Jeffrey; Inadomi, John M; Napolitano, Lena M; Hyzy, Robert C; Desmond, Jeff S

    2014-03-01

    There are many published clinical guidelines for acute pancreatitis (AP). Implementation of these recommendations is variable. We hypothesized that a clinical decision support (CDS) tool would change clinician behavior and shorten hospital length of stay (LOS). Observational study, entitled The AP Early Response (TAPER) Project. Tertiary center emergency department (ED) and hospital. Two consecutive samplings of patients having ICD-9 code (577.0) for AP were generated from the emergency department (ED) or hospital admissions. Diagnosis of AP was based on conventional Atlanta criteria. The pre-TAPER-CDS-Tool group (5/30/06-6/22/07) had 110 patients presenting to the ED with AP per 976 ICD-9 (577.0) codes, and the post-TAPER-CDS-Tool group (7/14/10-5/5/11) had 113 per 907 ICD-9 codes. The TAPER-CDS-Tool, developed 12/2008-7/14/2010, is a combined early, automated paging-alert system, which text-pages ED clinicians about a patient with AP, and an intuitive web-based point-of-care instrument consisting of seven early management recommendations. The pre- vs. post-TAPER-CDS-Tool groups had similar baseline characteristics. The post-TAPER-CDS-Tool group met two management goals more frequently than the pre-TAPER-CDS-Tool group: risk stratification and fluid resuscitation >6 L in the first 0-24 h (P=0.0003). Mean (s.d.) hospital LOS was significantly shorter in the post-TAPER-CDS-Tool group (4.6 (3.1) vs. 6.7 (7.0) days, P=0.0126). Multivariate analysis identified four independent variables for hospital LOS: the TAPER-CDS-Tool, associated with shorter LOS (P=0.0049), and three variables associated with longer LOS: Japanese severity score (P=0.0361), persistent organ failure (P=0.0088), and local pancreatic complications (P<0.0001). The TAPER-CDS-Tool is associated with changed clinician behavior and shortened hospital LOS, which has significant financial implications.

  10. Variability and correlations between characteristics in pumpkin varieties (Cucurbita maxima Duch. ex Lam.)

    Directory of Open Access Journals (Sweden)

    Mladenović Emina

    2012-01-01

    Full Text Available Variability and correlations among morphological features of eight ornamental pumpkin varieties were studied under field conditions. The variability of plant height, fruit length, fruit width, fruit weight, fruit peel thickness, length and circumference of the handle grip, leaf length, leaf width, seed length, seed width, seed thickness and number of fruits per plant in the examined material was high. The highest variability was related to the fruit properties. This variability represents a good source for future breeding programs. Correlations between the traits indicated a significant influence of leaf and seed characteristics on fruit properties. Multivariate statistical analysis provided differentiation of the varieties into two phenotypically distinct groups.

  11. Error Recovery Properties and Soft Decoding of Quasi-Arithmetic Codes

    Directory of Open Access Journals (Sweden)

    Christine Guillemot

    2007-08-01

    Full Text Available This paper first introduces a new set of aggregated state models for soft-input decoding of quasi-arithmetic (QA) codes with a termination constraint. The decoding complexity with these models is linear in the sequence length. The aggregation parameter controls the tradeoff between decoding performance and complexity. It is shown that close-to-optimal decoding performance can be obtained with low values of the aggregation parameter, that is, with a complexity which is significantly reduced with respect to optimal QA bit/symbol models. The choice of the aggregation parameter depends on the synchronization recovery properties of the QA codes. This paper thus describes a method to estimate the probability mass function (PMF) of the gain/loss of symbols following a single bit error (i.e., of the difference between the number of encoded and decoded symbols). The entropy of the gain/loss turns out to be the average amount of information conveyed by a length constraint on both the optimal and aggregated state models. This quantity allows us to choose the value of the aggregation parameter that will lead to close-to-optimal decoding performance. It is shown that the optimum position for the length constraint is not the last time instant of the decoding process. This observation leads to the introduction of a new technique for robust decoding of QA codes with redundancy, which turns out to outperform techniques based on the concept of the forbidden symbol.
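
    The entropy of the gain/loss PMF mentioned above is a plain Shannon entropy. A minimal sketch of that computation, assuming a made-up gain/loss PMF purely for illustration:

```python
import math

def entropy(pmf):
    """Shannon entropy (bits) of a probability mass function given as a dict."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

# Hypothetical PMF of the symbol gain/loss after a single bit error:
# keys are (decoded - encoded) symbol counts, values are probabilities.
gain_loss = {-2: 0.05, -1: 0.20, 0: 0.50, 1: 0.20, 2: 0.05}
print(round(entropy(gain_loss), 3), "bits conveyed by a length constraint")
```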

  12. The Use of Color-Coded Genograms in Family Therapy.

    Science.gov (United States)

    Lewis, Karen Gail

    1989-01-01

    Describes a variable color-coding system which has been added to the standard family genogram in which characteristics or issues associated with a particular presenting problem or for a particular family are arbitrarily assigned a color. Presents advantages of color-coding, followed by clinical examples. (Author/ABL)

  13. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q

    2008-01-01

    Part I (Fundamentals): Introduction; Quantization; Differential Coding; Transform Coding; Variable-Length Coding: Information Theory Results (II); Run-Length and Dictionary Coding: Information Theory Results (III). Part II (Still Image Compression): Still Image Coding: Standard JPEG; Wavelet Transform for Image Coding: JPEG2000; Nonstandard Still Image Coding. Part III (Motion Estimation and Compensation): Motion Analysis and Motion Compensation; Block Matching; Pel-Recursive Technique; Optical Flow; Further Discussion and Summary on 2-D Motion Estimation. Part IV (Video Compression): Fundam...

  14. Length of Variable Numbers of Tandem Repeats in the Carboxyl Ester Lipase (CEL) Gene May Confer Susceptibility to Alcoholic Liver Cirrhosis but Not Alcoholic Chronic Pancreatitis.

    Science.gov (United States)

    Fjeld, Karianne; Beer, Sebastian; Johnstone, Marianne; Zimmer, Constantin; Mössner, Joachim; Ruffert, Claudia; Krehan, Mario; Zapf, Christian; Njølstad, Pål Rasmus; Johansson, Stefan; Bugert, Peter; Miyajima, Fabio; Liloglou, Triantafillos; Brown, Laura J; Winn, Simon A; Davies, Kelly; Latawiec, Diane; Gunson, Bridget K; Criddle, David N; Pirmohamed, Munir; Grützmann, Robert; Michl, Patrick; Greenhalf, William; Molven, Anders; Sutton, Robert; Rosendahl, Jonas

    2016-01-01

    Carboxyl-ester lipase (CEL) contributes to fatty acid ethyl ester metabolism, which is implicated in alcoholic pancreatitis. The CEL gene harbours a variable number of tandem repeats (VNTR) region in exon 11. Variation in this VNTR has been linked to monogenic pancreatic disease, while conflicting results were reported for chronic pancreatitis (CP). Here, we aimed to investigate a potential association of CEL VNTR lengths with alcoholic CP. Overall, 395 alcoholic CP patients, 218 patients with alcoholic liver cirrhosis (ALC), serving as controls with a comparable amount of alcohol consumed, and 327 healthy controls from Germany and the United Kingdom (UK) were analysed by determination of fragment lengths by capillary electrophoresis. Allele frequencies and genotypes of different VNTR categories were compared between the groups. Twelve repeats were overrepresented in UK alcoholic CP patients (P = 0.04) compared to controls, whereas twelve repeats were enriched in German ALC compared to alcoholic CP patients (P = 0.03). Frequencies of CEL VNTR lengths of 14 and 15 repeats differed between German ALC patients and healthy controls (P = 0.03 and 0.008, respectively). However, in the genotype and pooled analyses of VNTR lengths no statistically significant association was detected. Additionally, the 16-16 genotype as well as 16 repeats were more frequent in UK ALC than in alcoholic CP patients (P = 0.034 and 0.02, respectively). In all other calculations, including pooled German and UK data, allele frequencies and genotype distributions did not differ significantly between patients and controls or between alcoholic CP and ALC. We did not obtain evidence that CEL VNTR lengths are associated with alcoholic CP. However, our results suggest that CEL VNTR lengths might be associated with ALC, a finding that needs to be clarified in larger cohorts.

  15. Fundamental length and relativistic length

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1988-01-01

    It is noted that the introduction of a fundamental length contradicts the conventional representations concerning the contraction of the longitudinal size of fast-moving objects. The use of the concept of relativistic length and the following ''elongation formula'' permits one to solve this problem.

  16. RAID-6 reed-solomon codes with asymptotically optimal arithmetic complexities

    KAUST Repository

    Lin, Sian-Jheng; Alloum, Amira; Al-Naffouri, Tareq Y.

    2016-01-01

    present a configuration of the factors of the second-parity formula, such that the arithmetic complexity can reach the optimal complexity bound when the code length approaches infinity. In the proposed approach, the intermediate data used for the first

  17. Factors related to axial length elongation and myopia progression in orthokeratology practice.

    Directory of Open Access Journals (Sweden)

    Bingjie Wang

    Full Text Available To investigate which baseline factors are predictive of axial length growth over an average period of 2.5 years in a group of children wearing orthokeratology (OK) contact lenses. In this retrospective study, the clinical records of 249 new OK wearers between January 2012 and December 2013 from the contact lens clinic at the Eye and ENT Hospital of Fudan University were reviewed. The primary outcome measure was axial length change from baseline to the time of review (July-August 2015). Independent variables included baseline measures of age at initiation of OK wear, gender, refractive error (spherical equivalent), astigmatism, average keratometry, corneal toricity, central corneal thickness, white-to-white corneal diameter, pupil size, corneal topography eccentricity value (e-value), intraocular pressure (IOP) and total time in follow-up (months). The contributions of all independent variables to axial length change at the time of review were assessed using univariate and multivariable regression analyses. Univariate analyses of the right eyes of 249 OK patients showed that smaller increases in axial length were associated with older age at the onset of OK lens wear, greater baseline spherical equivalent myopic refractive error, less time in follow-up and a smaller e-value. Multivariable analyses of the significant right-eye variables showed that the factors associated with smaller axial length growth were older age at the onset of OK lens wear (p<0.0001), greater baseline spherical equivalent myopic refractive error (p = 0.0046) and less time in follow-up (p<0.0001). The baseline factors demonstrating the greatest correlation with reduced axial length elongation during OK lens wear in myopic children included greater baseline spherical equivalent myopic refractive error and older age at the onset of OK lens wear.

  18. Ion-collecting sphere in a stationary, weakly magnetized plasma with finite shielding length

    International Nuclear Information System (INIS)

    Patacchini, Leonardo; Hutchinson, Ian H

    2007-01-01

    Collisionless ion collection by a negatively biased stationary spherical probe in a finite shielding length plasma is investigated using the Particle-in-Cell code SCEPTIC, in the presence of a weak magnetic field B. The overall effect of the magnetic field is to reduce the ion current, linearly in |B| for weak enough fields, with a slope steepness increasing with the electron Debye length. The angular current distribution and space-charge buildup strongly depend on the focusing properties of the probe, hence on its potential and the plasma shielding length. In particular, it is found that the concavity of the ion collection flux distribution can reverse sign when the electron Debye length is comparable to or larger than the probe radius (λ_De ≳ r_p), provided the ion temperature is much lower than the probe bias.

  19. Neural code alterations and abnormal time patterns in Parkinson’s disease

    Science.gov (United States)

    Andres, Daniela Sabrina; Cerquetti, Daniel; Merello, Marcelo

    2015-04-01

    Objective. The neural code used by the basal ganglia is a current question in neuroscience, relevant for the understanding of the pathophysiology of Parkinson’s disease. While a rate code is known to participate in the communication between the basal ganglia and the motor thalamus/cortex, different lines of evidence have also favored the presence of complex time patterns in the discharge of the basal ganglia. To gain insight into the way the basal ganglia code information, we studied the activity of the globus pallidus pars interna (GPi), an output node of the circuit. Approach. We implemented the 6-hydroxydopamine model of Parkinsonism in Sprague-Dawley rats, and recorded the spontaneous discharge of single GPi neurons, in head-restrained conditions at full alertness. Analyzing the temporal structure function, we looked for characteristic scales in the neuronal discharge of the GPi. Main results. At a low-scale, we observed the presence of dynamic processes, which allow the transmission of time patterns. Conversely, at a middle-scale, stochastic processes force the use of a rate code. Regarding the time patterns transmitted, we measured the word length and found that it is increased in Parkinson’s disease. Furthermore, it showed a positive correlation with the frequency of discharge, indicating that an exacerbation of this abnormal time pattern length can be expected, as the dopamine depletion progresses. Significance. We conclude that a rate code and a time pattern code can co-exist in the basal ganglia at different temporal scales. However, their normal balance is progressively altered and replaced by pathological time patterns in Parkinson’s disease.

  20. Reliability issues and solutions for coding social communication performance in classroom settings.

    Science.gov (United States)

    Olswang, Lesley B; Svensson, Liselotte; Coggins, Truman E; Beilinson, Jill S; Donaldson, Amy L

    2006-10-01

    To explore the utility of time-interval analysis for documenting the reliability of coding social communication performance of children in classroom settings. Of particular interest was finding a method for determining whether independent observers could reliably judge both occurrence and duration of ongoing behavioral dimensions for describing social communication performance. Four coders participated in this study. They observed and independently coded 6 social communication behavioral dimensions using handheld computers. The dimensions were mutually exclusive and accounted for all verbal and nonverbal productions during a specified time frame. The technology allowed for coding frequency and duration for each entered code. Data were collected from 20 different 2-min video segments of children in kindergarten through 3rd-grade classrooms. Data were analyzed for interobserver and intraobserver agreements using time-interval sorting and Cohen's kappa. Further, interval size and total observation length were manipulated to determine their influence on reliability. The data revealed interval sorting and kappa to be a suitable method for examining reliability of occurrence and duration of ongoing social communication behavioral dimensions. Nearly all comparisons yielded medium to large kappa values; interval size and length of observation minimally affected results. Implications: The analysis procedure described in this research solves a challenge in reliability: comparing coding by independent observers of both occurrence and duration of behaviors. Results indicate the utility of a new coding taxonomy and technology for application in online observations of social communication in a classroom setting.
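
    A minimal sketch of the time-interval approach described above: each observer's stream of (code, onset, offset) events is sorted into fixed-width intervals, each interval takes the code that occupies most of it, and Cohen's kappa is computed over the interval labels. The event data and interval width below are invented for illustration.

```python
from collections import Counter

def to_intervals(events, total_s, width_s):
    """events: list of (code, onset_s, offset_s); returns the dominant code per interval."""
    labels = []
    for k in range(int(total_s / width_s)):
        lo, hi = k * width_s, (k + 1) * width_s
        overlap = Counter()
        for code, on, off in events:
            overlap[code] += max(0.0, min(off, hi) - max(on, lo))
        labels.append(overlap.most_common(1)[0][0] if overlap else "none")
    return labels

def cohens_kappa(a, b):
    """Chance-corrected agreement between two equal-length label sequences."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[c] * cb[c] for c in set(a) | set(b)) / n ** 2
    return (po - pe) / (1 - pe)

# Two observers coding 10 s of interaction (made-up data), 1-s intervals.
obs1 = [("verbal", 0, 4), ("nonverbal", 4, 10)]
obs2 = [("verbal", 0, 5), ("nonverbal", 5, 10)]
print(cohens_kappa(to_intervals(obs1, 10, 1), to_intervals(obs2, 10, 1)))  # 0.8
```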

  1. Minimum decoding trellis length and truncation depth of wrap-around Viterbi algorithm for TBCC in mobile WiMAX

    Directory of Open Access Journals (Sweden)

    Liu Yu-Sun

    2011-01-01

    Full Text Available The performance of the wrap-around Viterbi decoding algorithm with finite truncation depth and fixed decoding trellis length is investigated for tail-biting convolutional codes in the mobile WiMAX standard. Upper bounds on the error probabilities induced by finite truncation depth and the uncertainty of the initial state are derived for the AWGN channel. The truncation depth and the decoding trellis length that yield negligible performance loss are obtained for all transmission rates over the Rayleigh channel using computer simulations. The results show that the circular decoding algorithm with an appropriately chosen truncation depth and a decoding trellis just a fraction longer than the original received code words can achieve almost the same performance as the optimal maximum likelihood decoding algorithm in mobile WiMAX. A rule of thumb for the values of the truncation depth and the trellis tail length is also proposed.

  2. High fidelity analysis of BWR fuel assembly with COBRA-TF/PARCS and trace codes

    International Nuclear Information System (INIS)

    Abarca, A.; Miro, R.; Barrachina, T.; Verdu, G.; Soler, A.

    2013-01-01

    The growing importance of a detailed description of the reactor core and fuel assemblies in light water reactors (LWRs), together with sub-channel safety analysis, requires high-fidelity models and coupled neutronic/thermal-hydraulic codes. Hand in hand with advances in computer technology, nuclear safety analysis is beginning to use more detailed thermal hydraulics and neutronics. Previously, PWR core and 16 by 16 fuel assembly models were developed to test and validate our COBRA-TF/PARCS v2.7 (CTF/PARCS) coupled code. In this work, a comparison of the modeling and simulation advantages and disadvantages of a modern 10 by 10 BWR fuel assembly with the CTF/PARCS and TRACE codes has been performed. The objective of the comparison is to make known the main advantages of using sub-channel codes to perform high-resolution nuclear safety analysis. Sub-channel codes such as CTF permit accurate predictions, in two-phase flow regimes, of the thermal-hydraulic parameters important to safety, with high local resolution. The modeled BWR fuel assembly has 91 fuel rods (81 full-length and 10 partial-length fuel rods) and a large square central water rod. This assembly has been modeled in high detail with the CTF code, using the BWR modeling parameters provided by TRACE. The same neutronic PARCS model has been used for the simulation with both codes. To compare the codes, a coupled steady state has been performed. (author)

  3. Ion collection by a sphere in a flowing plasma: 2. non-zero Debye length

    International Nuclear Information System (INIS)

    Hutchinson, I H

    2003-01-01

    The spatial distribution of ion flux to a sphere in a flowing collisionless plasma is calculated using the particle-in-cell code SCEPTIC. The code is validated by comparing with prior stationary-plasma and approximate calculations. Comprehensive results are provided for ion temperatures 1 and 0.1 times the electron temperature, and for Debye lengths from 0.01 to 100 times the probe size. A remarkable qualitatively new result is obtained: over a range of Debye lengths from roughly 0.1 to 10 times the probe radius at T_i = 0.1T_e, the downstream side of the probe receives substantially higher flux density than the upstream side when the flow is subsonic. This unexpected reversal of the asymmetry reinforces the need for these fully self-consistent calculations, but renders the use of the flux ratio for Mach-probe purposes problematic, even for deriving the direction of the flow.

  4. Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)

    Science.gov (United States)

    2016-05-01

    subject to code matrices that follow the structure given by (113):

    $$\begin{bmatrix} \vec{y}_R \\ \vec{y}_I \end{bmatrix} = \sqrt{\frac{E_s}{2L}} \begin{bmatrix} G_{R1} & -G_{I1} \\ G_{I2} & G_{R2} \end{bmatrix} \begin{bmatrix} Q_R & -Q_I \\ Q_I & Q_R \end{bmatrix} \begin{bmatrix} \vec{b}_R \\ \vec{b}_I \end{bmatrix} + \begin{bmatrix} \vec{n}_R \\ \vec{n}_I \end{bmatrix}$$

    $$\cdots \begin{bmatrix} \vec{b}_+ \\ \vec{b}_- \end{bmatrix} + \begin{bmatrix} \vec{n}_+ \\ \vec{n}_- \end{bmatrix} \qquad (115)$$

    The average likelihood for type 4 CDMA (116) is a special case of type 1 CDMA with twice the code length and...

  5. Generalized rank weights of reducible codes, optimal cases and related properties

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto

    2018-01-01

    in network coding. In this paper, we study their security behavior against information leakage on networks when applied as coset coding schemes, giving the following main results: 1) we give lower and upper bounds on their generalized rank weights (GRWs), which measure worst case information leakage...... to the wire tapper; 2) we find new parameters for which these codes are MRD (meaning that their first GRW is optimal) and use the previous bounds to estimate their higher GRWs; 3) we show that all linear (over the extension field) codes, whose GRWs are all optimal for fixed packet and code sizes but varying...... length are reducible codes up to rank equivalence; and 4) we show that the information leaked to a wire tapper when using reducible codes is often much less than the worst case given by their (optimal in some cases) GRWs. We conclude with some secondary related properties: conditions to be rank...

  6. Lattice-Like Total Perfect Codes

    Directory of Open Access Journals (Sweden)

    Araujo Carlos

    2014-02-01

    Full Text Available A contribution is made to the classification of lattice-like total perfect codes in integer lattices Λn via pairs (G, Φ) formed by abelian groups G and homomorphisms Φ: Zn → G. A conjecture is posed that the cited contribution covers all possible cases. A related conjecture, on the open problems concerning lattice-like perfect dominating sets in Λn whose induced components are parallel paths of length > 1, is posed as well.

  7. Sensitivity analysis of FRAPCON-1 computer code to some parameters

    International Nuclear Information System (INIS)

    Chia, C.T.; Silva, C.F. da.

    1987-05-01

    A sensitivity study of the FRAPCON-1 code was performed for the following input data: the number of axial nodes, the number of time steps and the axial power shape. Their influence on the code response concerning the fuel centerline temperature, stored energy, internal gas pressure, clad hoop strain and gap width was analyzed. The number of axial nodes has little influence, but care must be taken in the choice of the axial power profile and the time step length. (Author) [pt

  8. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    Science.gov (United States)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  9. LDPC Codes with Minimum Distance Proportional to Block Size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low

  10. RADTRAN II: revised computer code to analyze transportation of radioactive material

    International Nuclear Information System (INIS)

    Taylor, J.M.; Daniel, S.L.

    1982-10-01

    A revised and updated version of the RADTRAN computer code is presented. This code has the capability to predict the radiological impacts associated with specific schemes of radioactive material shipments and mode specific transport variables

  11. Short initial length quench on CICC of ITER TF coils

    Energy Technology Data Exchange (ETDEWEB)

    Nicollet, S.; Ciazynski, D.; Duchateau, J.-L.; Lacroix, B. [CEA, IRFM, F-13108 Saint-Paul-lez-Durance (France); Bessette, D.; Rodriguez-Mateos, F. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Coatanea-Gouachet, M. [ELC Engineering, 350 chemin du Verladet, F-13290 Les Milles (France); Gauthier, F. [Soditech Ingenierie, 4 bis allée des Gabians, ZI La Frayère, 06150 Cannes (France)

    2014-01-29

    Previous quench studies performed for the International Thermonuclear Experimental Reactor (ITER) Toroidal Field (TF) Coils have led to the identification of two extreme families of quench: first, 'severe' quenches over long initial lengths in high magnetic field; second, smooth quenches over short initial lengths in the low-field region. Detailed analyses and results on smooth quench propagation and detectability in one TF Cable-In-Conduit Conductor (CICC) with a lower propagation velocity are presented here. The influence of the initial quench energy is shown, and results of computations either with a Fast Discharge (FD) of the magnet or without one (failure of the voltage quench detection system) are reported. The influence of the central spiral of the conductor on the propagation velocity is also detailed. In the case of a regularly triggered FD, the hot spot temperature criterion of 150 K (with helium and jacket) is fulfilled for an initial quench length of 1 m, whereas this criterion is exceeded (Tmax ≈ 200 K) for an extremely short length of 5 cm. These analyses were carried out using both the Supermagnet™ and Venecia codes, and comparisons of the results are also discussed.

  12. Short initial length quench on CICC of ITER TF coils

    International Nuclear Information System (INIS)

    Nicollet, S.; Ciazynski, D.; Duchateau, J.-L.; Lacroix, B.; Bessette, D.; Rodriguez-Mateos, F.; Coatanea-Gouachet, M.; Gauthier, F.

    2014-01-01

    Previous quench studies performed for the International Thermonuclear Experimental Reactor (ITER) Toroidal Field (TF) Coils have led to the identification of two extreme families of quench: first, 'severe' quenches over long initial lengths in high magnetic field; second, smooth quenches over short initial lengths in the low-field region. Detailed analyses and results on smooth quench propagation and detectability in one TF Cable-In-Conduit Conductor (CICC) with a lower propagation velocity are presented here. The influence of the initial quench energy is shown, and results of computations either with a Fast Discharge (FD) of the magnet or without one (failure of the voltage quench detection system) are reported. The influence of the central spiral of the conductor on the propagation velocity is also detailed. In the case of a regularly triggered FD, the hot spot temperature criterion of 150 K (with helium and jacket) is fulfilled for an initial quench length of 1 m, whereas this criterion is exceeded (Tmax ≈ 200 K) for an extremely short length of 5 cm. These analyses were carried out using both the Supermagnet™ and Venecia codes, and comparisons of the results are also discussed.

  13. Transient Variable Caching in Java’s Stack-Based Intermediate Representation

    Directory of Open Access Journals (Sweden)

    Paul Týma

    1999-01-01

    Full Text Available Java's stack-based intermediate representation (IR) is typically coerced to execute on register-based architectures. Unoptimized compiled code dutifully replicates transient variable usage designated by the programmer, and common optimization practices tend to introduce further usage (i.e., CSE, loop-invariant code motion, etc.). On register-based machines, transient variables are often cached within registers (when available), saving the expense of actually accessing memory. In stack-based environments, however, because of the need to push and pop the transient values, further performance improvement is possible. This paper presents Transient Variable Caching (TVC), a technique for eliminating transient variable overhead whenever possible. This optimization would find a likely home in optimizers attached to the back of popular Java compilers. Side effects of the algorithm include significant instruction reordering and the introduction of many stack-manipulation operations. This combination has proven to greatly impede the ability to decompile stack-based IR code sequences. The code that results from the transform is faster, smaller, and greatly impedes decompilation.

  14. Design of deterministic interleaver for turbo codes

    International Nuclear Information System (INIS)

    Arif, M.A.; Sheikh, N.M.; Sheikh, A.U.H.

    2008-01-01

    The choice of a suitable interleaver for turbo codes can improve the performance considerably. For long block lengths, random interleavers perform well, but for some applications it is desirable to keep the block length shorter to avoid latency. For such applications deterministic interleavers perform better. The performance and design of a deterministic interleaver for short-frame turbo codes are considered in this paper. The main characteristic of this class of deterministic interleavers is that their algebraic design selects the best permutation generator such that the points in smaller subsets of the interleaved output are uniformly spread over the entire range of the information data frame. It is observed that an interleaver designed in this manner improves the minimum distance or reduces the multiplicity of the first few spectral lines of the minimum distance spectrum. Finally, we introduce a circular shift in the permutation function to reduce the correlation between the parity bits corresponding to the original and interleaved data frames, in order to improve the decoding capability of the MAP (Maximum A Posteriori) probability decoder. Our solution for designing a deterministic interleaver outperforms the semi-random interleavers and the deterministic interleavers reported in the literature. (author)
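
    As an illustration of this style of algebraic construction, the sketch below builds a deterministic interleaver pi(i) = (k*i + s) mod N in Python, where the generator k is coprime to the frame length N and s is a circular shift. The parameter values are arbitrary examples, not the ones selected in the paper.

```python
from math import gcd

def deterministic_interleaver(n, k, s):
    """Permutation pi(i) = (k*i + s) mod n; k must be coprime to n so the
    map is a bijection. The shift s decorrelates the parity bits of the
    original and interleaved frames."""
    assert gcd(k, n) == 1, "k must be coprime to the frame length"
    return [(k * i + s) % n for i in range(n)]

def interleave(frame, perm):
    return [frame[p] for p in perm]

# Toy example: frame length 16, permutation generator 5, circular shift 3.
perm = deterministic_interleaver(16, 5, 3)
print(perm)                                    # a bijection of 0..15
print(interleave(list("ABCDEFGHIJKLMNOP"), perm))
```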

  15. Minimizing coupling loss by selection of twist pitch lengths in multi-stage cable-in-conduit conductors

    International Nuclear Information System (INIS)

    Rolando, G; Nijhuis, A; Devred, A

    2014-01-01

    The numerical code JackPot-ACDC (van Lanen et al 2010 Cryogenics 50 139–48, van Lanen et al 2011 IEEE Trans. Appl. Supercond. 21 1926–9, van Lanen et al 2012 Supercond. Sci. Technol. 25 025012) allows fast parametric studies of the electro-magnetic performance of cable-in-conduit conductors (CICCs). In this paper the code is applied to the analysis of the relation between twist pitch length sequence and coupling loss in multi-stage ITER-type CICCs. The code shows that in the analysed conductors the coupling loss is at its minimum when the twist pitches of the successive cabling stages have a length ratio close to one. It is also predicted that by careful selection of the stage-to-stage twist pitch ratio, CICCs cabled according to long twist schemes in the initial stages can achieve lower coupling loss than conductors with shorter pitches. The result is validated by AC loss measurements performed on prototype conductors for the ITER Central Solenoid featuring different twist pitch sequences. (paper)

  16. Legal Nature of Criminal Proceedings Regarding the Length of the Appeal

    Directory of Open Access Journals (Sweden)

    Constantin Tanase

    2016-05-01

    Full Text Available The appeal regarding the length of criminal proceedings is a new institution of the Romanian criminal procedure system, born from the need to align the procedural rules with constitutional requirements and other internal rules, but especially from the need for harmonization with European Community rules, namely the Convention for the Protection of Human Rights and Fundamental Freedoms. To the same extent, it was aimed at forming a legal institution in line with the jurisprudence of the European Court of Human Rights. The new institution is regulated by art. 488^1-488^6 of the Criminal Procedure Code, introduced by the Law implementing the Code under Title IV, "Special Procedures", which marks it from the beginning as a derogation from the common procedure. Nevertheless, given its role as a remedy for the excessive and unjustified extension of criminal proceedings, as well as the judicial review it triggers, the question arises of the legal nature of the appeal regarding the length of criminal proceedings. The answer to this question may affect the correct application of the institution and the improvement of judicial practice.

  17. CFD analysis of blockage length on a partially blocked fuel rod

    International Nuclear Information System (INIS)

    Scuro, Nikolas Lymberis; Andrade, Delvonei Alves de; Angelo, Gabriel; Angelo, Edvaldo

    2017-01-01

    In LOCA accidents, fuel rods may balloon due to the increasing pressure difference between the fuel rod and the core vessel. With the ballooning effect, the swelling can partially block the flow channel, affecting coolability during the reflood phase. In order to analyze the influence of blockage length after LOCA events, numerical simulations using the Ansys-CFX code have been performed in steady-state conditions, characterizing the final phase of reflood. Peaks of temperature are observed in the middle of the fuel rod, followed by a temperature drop. This effect is explained by the increase of the heat transfer coefficient caused by strong turbulence effects. This paper therefore considers a radial blockage of 90%, varying only the blockage length. The study observed that, for the same boundary conditions, the longer the blockage originated after LOCA events, the higher the central temperatures in the fuel rod. (author)

  18. CFD analysis of blockage length on a partially blocked fuel rod

    Energy Technology Data Exchange (ETDEWEB)

    Scuro, Nikolas Lymberis; Andrade, Delvonei Alves de [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil). Centro de Engenharia Nuclear; Angelo, Gabriel [Centro Universitário FEI (UNIFEI), São Paulo, SP (Brazil). Dept. de Engenharia Mecânica; Angelo, Edvaldo, E-mail: nikolas.scuro@gmail.com, E-mail: delvonei@ipen.br, E-mail: gangelo@fei.edu.br, E-mail: eangelo@mackenzie.br [Universidade Presbiteriana Mackenzie, São Paulo, SP (Brazil). Escola da Engenharia. Grupo de Simulação Numérica

    2017-07-01

    In LOCA accidents, fuel rods may balloon due to the increasing pressure difference between the fuel rod and the core vessel. With the ballooning effect, the swelling can partially block the flow channel, affecting coolability during the reflood phase. In order to analyze the influence of blockage length after LOCA events, numerical simulations using the Ansys-CFX code have been performed in steady-state conditions, characterizing the final phase of reflood. Peaks of temperature are observed in the middle of the fuel rod, followed by a temperature drop. This effect is explained by the increase of the heat transfer coefficient caused by strong turbulence effects. This paper therefore considers a radial blockage of 90%, varying only the blockage length. The study observed that, for the same boundary conditions, the longer the blockage originated after LOCA events, the higher the central temperatures in the fuel rod. (author)

  19. The PLTEMP V2.1 code

    International Nuclear Information System (INIS)

    Olson, A.P.

    2003-01-01

    Recent improvements to the computer code PLTEMP/ANL V2.1 are described. A new iterative, error-minimization solution technique is used to obtain the thermal distribution both within each fuel plate, and along the axial length of each coolant channel. A new, radial geometry solution is available for tube-type fuel assemblies. Software comparisons of these and other new models are described. Applications to Russian-designed IRT-type research reactors are described. (author)

  20. MABEL 2: a code to analyse cladding deformation in a loss of coolant accident

    International Nuclear Information System (INIS)

    Bowring, R.W.; Cooper, C.A.; Nye, M.T.S.

    1983-06-01

    The calculation strategy of MABEL-2 and the hierarchy and purpose of its subroutines are described so that a programmer can readily identify both the overall structure of the code and the functions of its constituent parts. Also, to assist those who wish to examine the coding in detail, the common block variables are defined and a list is given of all variables used in the code, together with the subroutines in which they are used. (author)

  1. BIRTH: a beam deposition code for non-circular tokamak plasmas

    International Nuclear Information System (INIS)

    Otsuka, Michio; Nagami, Masayuki; Matsuda, Toshiaki

    1982-09-01

    A new beam deposition code has been developed which is capable of calculating fast ion deposition profiles including the orbit correction. The code incorporates any injection geometry and a non-circular cross section plasma with variable elongation and an outward shift of the magnetic flux surfaces. Typical CPU time on a DEC-10 computer is 10-20 seconds with the orbit correction and 5-10 seconds without it. This is shorter by an order of magnitude than that of other codes, e.g., Monte Carlo codes. The power deposition profile calculated by this code is in good agreement with that calculated by a Monte Carlo code. (author)

  2. Low Complexity Encoder of High Rate Irregular QC-LDPC Codes for Partial Response Channels

    Directory of Open Access Journals (Sweden)

    IMTAWIL, V.

    2011-11-01

    Full Text Available High-rate irregular QC-LDPC codes based on circulant permutation matrices, designed for efficient encoder implementation, are proposed in this article. The structure of the code is an approximate lower triangular matrix. In addition, we present two novel efficient encoding techniques for generating the redundant bits. The complexity of the encoder implementation depends on the number of parity bits of the code for the one-stage encoding, and on the length of the code for the two-stage encoding. The advantage of both encoding techniques is that few XOR gates are used in the encoder implementation. Simulation results on partial response channels also show that the BER performance of the proposed code has a gain over other QC-LDPC codes.

  3. Telomere length in normal and neoplastic canine tissues.

    Science.gov (United States)

    Cadile, Casey D; Kitchell, Barbara E; Newman, Rebecca G; Biller, Barbara J; Hetler, Elizabeth R

    2007-12-01

    To determine the mean telomere restriction fragment (TRF) length in normal and neoplastic canine tissues. 57 solid-tissue tumor specimens collected from client-owned dogs, 40 samples of normal tissue collected from 12 clinically normal dogs, and blood samples collected from 4 healthy blood donor dogs. Tumor specimens were collected from client-owned dogs during diagnostic or therapeutic procedures at the University of Illinois Veterinary Medical Teaching Hospital, whereas 40 normal tissue samples were collected from 12 control dogs. Telomere restriction fragment length was determined by use of an assay kit. A histologic diagnosis was provided for each tumor by personnel at the Veterinary Diagnostic Laboratory at the University of Illinois. Mean of the mean TRF length for 44 normal samples was 19.0 kilobases (kb; range, 15.4 to 21.4 kb), and the mean of the mean TRF length for 57 malignant tumors was 19.0 kb (range, 12.9 to 23.5 kb). Although the mean of the mean TRF length for tumors and normal tissues was identical, tumor samples had more variability in TRF length. Telomerase, which represents the main mechanism by which cancer cells achieve immortality, is an attractive therapeutic target. The ability to measure telomere length is crucial to monitoring the efficacy of telomerase inhibition. In contrast to many other mammalian species, the length of canine telomeres and the rate of telomeric DNA loss are similar to those reported in humans, making dogs a compelling choice for use in the study of human anti-telomerase strategies.

  4. The expression of the skeletal muscle force-length relationship in vivo: a simulation study.

    Science.gov (United States)

    Winter, Samantha L; Challis, John H

    2010-02-21

    The force-length relationship is one of the most important mechanical characteristics of skeletal muscle in humans and animals. For a physiologically realistic joint range of motion and therefore range of muscle fibre lengths only part of the force-length curve may be used in vivo, i.e. only a section of the force-length curve is expressed. A generalised model of a mono-articular muscle-tendon complex was used to examine the effect of various muscle architecture parameters on the expressed section of the force-length relationship for a 90 degrees joint range of motion. The parameters investigated were: the ratio of tendon resting length to muscle fibre optimum length (L(TR):L(F.OPT)) (varied from 0.5 to 11.5), the ratio of muscle fibre optimum length to average moment arm (L(F.OPT):r) (varied from 0.5 to 5), the normalised tendon strain at maximum isometric force (c) (varied from 0 to 0.08), the muscle fibre pennation angle (theta) (varied from 0 degrees to 45 degrees) and the joint angle at which the optimum muscle fibre length occurred (phi). The range of values chosen for each parameter was based on values reported in the literature for five human mono-articular muscles with different functional roles. The ratios L(TR):L(F.OPT) and L(F.OPT):r were important in determining the amount of variability in the expressed section of the force-length relationship. The modelled muscle operated over only one limb at intermediate values of these two ratios (L(TR):L(F.OPT)=5; L(F.OPT):r=3), whether this was the ascending or descending limb was determined by the precise values of the other parameters. It was concluded that inter-individual variability in the expressed section of the force-length relationship is possible, particularly for muscles with intermediate values of L(TR):L(F.OPT) and L(F.OPT):r such as the brachialis and vastus lateralis. Understanding the potential for inter-individual variability in the expressed section is important when using muscle models to
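
    The geometric argument above can be made concrete with a toy calculation: if the tendon is treated as inextensible, pennation is ignored and the moment arm is constant, a 90-degree joint rotation sweeps a fibre-length range of r*(pi/2), so the ratio L(F.OPT):r fixes how wide a slice of the force-length curve is expressed. The Python sketch below uses a Gaussian stand-in for the real force-length curve; it is a deliberately simplified model, not the generalised muscle-tendon model of the study.

```python
import math

def expressed_section(lopt_over_r, center=1.0):
    """Normalized fibre-length range swept by a 90-degree joint rotation,
    assuming an inextensible tendon, zero pennation, constant moment arm."""
    half_sweep = (math.pi / 2) / lopt_over_r / 2  # half of r*(pi/2)/L_opt
    return center - half_sweep, center + half_sweep

def force(l_norm, width=0.45):
    """Toy Gaussian force-length curve, peak force 1.0 at optimal length."""
    return math.exp(-((l_norm - 1.0) / width) ** 2)

for ratio in (1.0, 3.0, 5.0):  # example L(F.OPT):r values
    lo, hi = expressed_section(ratio)
    print(f"L_F.OPT:r = {ratio}: fibre length {lo:.2f}-{hi:.2f} L_opt, "
          f"force {force(lo):.2f}-{force(hi):.2f} F_max")
```

    A smaller ratio sweeps a wider slice of the curve, so the expressed section is more likely to span both limbs, matching the abstract's observation that intermediate ratios confine the muscle to a single limb.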

  5. Otolith Length-Fish Length Relationships of Eleven US Arctic Fish Species and Their Application to Ice Seal Diet Studies

    Science.gov (United States)

    Walker, K. L.; Norcross, B.

    2016-02-01

    The Arctic ecosystem has moved into the spotlight of scientific research in recent years due to increased climate change and oil and gas exploration. Arctic fishes and Arctic marine mammals represent key parts of this ecosystem, with fish being a common part of ice seal diets in the Arctic. Determining sizes of fish consumed by ice seals is difficult because otoliths are often the only part left of the fish after digestion. Otolith length is known to be positively related to fish length. By developing species-specific otolith-body morphometric relationships for Arctic marine fishes, fish length can be determined for fish prey found in seal stomachs. Fish were collected during ice free months in the Beaufort and Chukchi seas 2009 - 2014, and the most prevalent species captured were chosen for analysis. Otoliths from eleven fish species from seven families were measured. All species had strong linear relationships between otolith length and fish total length. Nine species had coefficient of determination values over 0.75, indicating that most of the variability in the otolith to fish length relationship was explained by the linear regression. These relationships will be applied to otoliths found in stomachs of three species of ice seals (spotted Phoca largha, ringed Pusa hispida, and bearded Erignathus barbatus) and used to estimate fish total length at time of consumption. Fish lengths can in turn be used to calculate fish weight, enabling further investigation into ice seal energetic demands. This application will aid in understanding how ice seals interact with fish communities in the US Arctic and directly contribute to diet comparisons among and within ice seal species. A better understanding of predator-prey interactions in the US Arctic will aid in predicting how ice seal and fish species will adapt to a changing Arctic.
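
    Since each species-specific relationship above is a simple linear regression, applying it to otoliths recovered from seal stomachs reduces to evaluating a fitted line. A minimal sketch with invented example measurements (not data from this study):

```python
import numpy as np

# Hypothetical paired measurements for one species (mm); illustration only.
otolith_mm = np.array([1.2, 1.8, 2.3, 2.9, 3.4, 4.1])
fish_tl_mm = np.array([48.0, 71.0, 90.0, 115.0, 133.0, 162.0])

slope, intercept = np.polyfit(otolith_mm, fish_tl_mm, 1)  # least-squares line
r2 = np.corrcoef(otolith_mm, fish_tl_mm)[0, 1] ** 2       # coeff. of determination

print(f"TL = {slope:.1f} * OL + {intercept:.1f}  (r^2 = {r2:.3f})")
# Estimate fish length for an otolith found in a seal stomach:
print(f"estimated TL for a 2.6 mm otolith: {slope * 2.6 + intercept:.0f} mm")
```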

  6. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    The performance of transform domain Wyner-Ziv (TDWZ) video coding trails that of conventional video coding solutions, mainly due to the quality of the side information, inaccurate noise modeling and loss in the final coding step. The major goal of this paper is to enhance the accuracy of the noise modeling, which is one of the most important aspects...... influencing the coding performance of DVC. A TDWZ video decoder with a novel cross-band based adaptive noise model is proposed, and a noise residue refinement scheme is introduced to successively update the estimated noise residue for noise modeling after each bit-plane. Experimental results show...... that the proposed noise model and noise residue refinement scheme can improve the rate-distortion (RD) performance of TDWZ video coding significantly. The quality of the side information modeling is also evaluated by a measure of the ideal code length.

  7. Row Reduction Applied to Decoding of Rank Metric and Subspace Codes

    DEFF Research Database (Denmark)

    Puchinger, Sven; Nielsen, Johan Sebastian Rosenkilde; Li, Wenhui

    2017-01-01

    We show that decoding of ℓ-Interleaved Gabidulin codes, as well as list-ℓ decoding of Mahdavifar–Vardy (MV) codes, can be performed by row reducing skew polynomial matrices. Inspired by row reduction of F[x] matrices, we develop a general and flexible approach of transforming matrices over skew polynomial rings into a certain reduced form. We apply this to solve generalised shift register problems over skew polynomial rings which occur in decoding ℓ-Interleaved Gabidulin codes. We obtain an algorithm with complexity O(ℓμ²) where μ measures the size of the input problem and is proportional to the code length n in the case of decoding. Further, we show how to perform the interpolation step of list-ℓ-decoding MV codes in complexity O(ℓn²), where n is the number of interpolation constraints.

  8. Error Floor Analysis of Coded Slotted ALOHA over Packet Erasure Channels

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Graell i Amat, Alexandre; Brannstrom, F.

    2014-01-01

    We present a framework for the analysis of the error floor of coded slotted ALOHA (CSA) for finite frame lengths over the packet erasure channel. The error floor is caused by stopping sets in the corresponding bipartite graph, whose enumeration is, in general, not a trivial problem. We therefore identify the most dominant stopping sets for the distributions of practical interest. The derived analytical expressions allow us to accurately predict the error floor at low to moderate channel loads and characterize the unequal error protection inherent in CSA.
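    For readers unfamiliar with CSA, the mechanism behind the error floor is easy to reproduce numerically: users transmit several replicas of a packet in random slots, the decoder iteratively resolves collision-free slots and cancels the corresponding replicas, and decoding stalls exactly on stopping sets. The following Monte Carlo sketch in Python (a toy collision-channel simulation, not the paper's analytical framework; here half of the users send 2 replicas and half send 3) exhibits the residual loss at a given load:

    import random

    def sample_degree(dist):
        """Draw a repetition degree from ((degree, prob), ...)."""
        r, acc = random.random(), 0.0
        for d, p in dist:
            acc += p
            if r <= acc:
                return d
        return dist[-1][0]

    def csa_frame(n_users, n_slots, dist=((2, 0.5), (3, 0.5))):
        """One CSA frame with iterative cancellation; returns undecoded users."""
        replicas = {u: random.sample(range(n_slots), sample_degree(dist))
                    for u in range(n_users)}
        slots = [set() for _ in range(n_slots)]
        for u, ss in replicas.items():
            for s in ss:
                slots[s].add(u)
        decoded, progress = set(), True
        while progress:
            progress = False
            for s in range(n_slots):
                if len(slots[s]) == 1:             # collision-free slot
                    (u,) = slots[s]
                    decoded.add(u)
                    for s2 in replicas[u]:         # cancel the user's replicas
                        slots[s2].discard(u)
                    progress = True
        return n_users - len(decoded)              # stuck in a stopping set

    load, n_slots, n_frames = 0.5, 200, 500        # users per slot
    n_users = int(load * n_slots)
    losses = sum(csa_frame(n_users, n_slots) for _ in range(n_frames))
    print("packet loss rate:", losses / (n_users * n_frames))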

  9. RAID-6 reed-solomon codes with asymptotically optimal arithmetic complexities

    KAUST Repository

    Lin, Sian-Jheng

    2016-12-24

    In computer storage, RAID 6 is a level of RAID that can tolerate two failed drives. When RAID-6 is implemented by Reed-Solomon (RS) codes, the write-performance penalty lies in the field multiplications needed for the second parity. In this paper, we present a configuration of the factors of the second-parity formula such that the arithmetic complexity reaches the optimal complexity bound as the code length approaches infinity. In the proposed approach, the intermediate data used for the first parity is also utilized to calculate the second parity. To the best of our knowledge, this is the first approach supporting RAID-6 RS codes that approaches the optimal arithmetic complexity.
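    For context, the textbook RAID-6 construction computes a first parity P as the plain XOR of the data blocks and a second parity Q as a Reed-Solomon combination with powers of a generator in GF(2^8); it is this second computation whose multiplications the paper restructures. A minimal Python sketch of the conventional P/Q computation (not the paper's optimized factor configuration) follows:

    def gf_mul(a, b):
        """Multiply in GF(2^8) with polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d)."""
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            b >>= 1
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1D
        return p

    def raid6_parities(data_blocks):
        """Compute the P (XOR) and Q (Reed-Solomon) parity blocks bytewise."""
        length = len(data_blocks[0])
        p_blk, q_blk = bytearray(length), bytearray(length)
        for i, block in enumerate(data_blocks):
            coeff = 1
            for _ in range(i):                 # coeff = g^i with generator g = 2
                coeff = gf_mul(coeff, 2)
            for j in range(length):
                p_blk[j] ^= block[j]
                q_blk[j] ^= gf_mul(coeff, block[j])
        return bytes(p_blk), bytes(q_blk)

    p, q = raid6_parities([b"\x01\x02", b"\x03\x04", b"\x05\x06"])
    print(p.hex(), q.hex())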

  10. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic component codes, realized as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  11. An Integration of the Restructured Melcor for the Midas Computer Code

    International Nuclear Information System (INIS)

    Sunhee Park; Dong Ha Kim; Ko-Ryu Kim; Song-Won Cho

    2006-01-01

    The developmental need for a localized severe accident analysis code is on the rise. KAERI is developing a severe accident code called MIDAS, which is based on MELCOR. In order to develop the localized code (MIDAS) which simulates a severe accident in a nuclear power plant, the existing data structure is reconstructed for all the packages in MELCOR, which uses pointer variables for data transfer between the packages. During this process, new features in FORTRAN90 such as dynamic allocation are used for an improved data saving and transferring method. Hence the readability, maintainability and portability of the MIDAS code have been enhanced. After the package-wise restructuring, the newly converted packages are integrated together. Depending on the data usage in the package, two types of packages can be defined: some use their own data within the package (call them independent packages) and the others share their data with other packages (dependent packages). For the independent packages, the integration process is simply to link the already converted packages together. That is, the package-wise structuring does not require further conversion of variables for the integration process. For the dependent packages, extra conversion is necessary to link them together. As the package-wise restructuring converts only the corresponding package's variables, variables defined in other packages are not touched and remain as they are. These variables are to be converted into the new variable types at the same time as the main variables of the corresponding package. Then these dependent packages are ready for integration. In order to check whether the integration process is working well, the results from the integrated version are verified against the package-wise restructured results. Steady state runs and station blackout sequences are tested and the major variables are found to be the same in both versions. In order to verify the results, the integrated

  12. Variability through the Eyes of the Programmer

    DEFF Research Database (Denmark)

    Melo, Jean; Batista Narcizo, Fabricio; Hansen, Dan Witzner

    2017-01-01

    Preprocessor directives (#ifdefs) are often used to implement compile-time variability, despite the critique that they increase complexity, hamper maintainability, and impair code comprehensibility. Previous studies have shown that the time of bug finding increases linearly with variability. Howe...

  13. Association of day length and weather conditions with physical activity levels in older community dwelling people.

    Directory of Open Access Journals (Sweden)

    Miles D Witham

    Weather is a potentially important determinant of physical activity. Little work has been done examining the relationship between weather and physical activity, and potential modifiers of any relationship in older people. We therefore examined the relationship between weather and physical activity in a cohort of older community-dwelling people. We analysed prospectively collected cross-sectional activity data from community-dwelling people aged 65 and over in the Physical Activity Cohort Scotland. We correlated seven-day triaxial accelerometry data with daily weather data (temperature, day length, sunshine, snow, rain), and a series of potential effect modifiers were tested in mixed models: environmental variables (urban vs rural dwelling, percentage of green space), psychological variables (anxiety, depression, perceived behavioural control), social variables (number of close contacts) and health status measured using the SF-36 questionnaire. 547 participants, mean age 78.5 years, were included in this analysis. Higher minimum daily temperature and longer day length were associated with higher activity levels; these associations remained robust to adjustment for other significant associates of activity: age, perceived behavioural control, number of social contacts and physical function. Of the potential effect modifier variables, only urban vs rural dwelling and the SF-36 measure of social functioning enhanced the association between day length and activity; no variable modified the association between minimum temperature and activity. In older community-dwelling people, minimum temperature and day length were associated with objectively measured activity. There was little evidence for moderation of these associations through potentially modifiable health, environmental, social or psychological variables.

  14. Upgrades to the WIMS-ANL code

    International Nuclear Information System (INIS)

    Woodruff, W. L.

    1998-01-01

    The dusty old source code in WIMS-D4M has been completely rewritten to conform more closely with current FORTRAN coding practices. The revised code contains many improvements in appearance, error checking and in control of the output. The output is now tabulated to fit the typical 80-column window or terminal screen. The Segev method for resonance integral interpolation is now an option. Most of the dimension limitations have been removed and replaced with variable dimensions within a compile-time fixed container. The library is no longer restricted to the 69 energy group structure, and two new libraries have been generated for use with the code. The new libraries are both based on ENDF/B-VI data, with one having the original 69 energy group structure and the second a 172 group structure. The common source code can be used with PCs running both Windows 95 and NT, with a Linux based operating system and with UNIX based workstations. Comparisons of this version of the code to earlier evaluations with ENDF/B-V are provided, as well as comparisons with the new libraries.

  15. Upgrades to the WIMS-ANL code

    International Nuclear Information System (INIS)

    Woodruff, W.L.; Leopando, L.S.

    1998-01-01

    The dusty old source code in WIMS-D4M has been completely rewritten to conform more closely with current FORTRAN coding practices. The revised code contains many improvements in appearance, error checking and in control of the output. The output is now tabulated to fit the typical 80-column window or terminal screen. The Segev method for resonance integral interpolation is now an option. Most of the dimension limitations have been removed and replaced with variable dimensions within a compile-time fixed container. The library is no longer restricted to the 69 energy group structure, and two new libraries have been generated for use with the code. The new libraries are both based on ENDF/B-VI data, with one having the original 69 energy group structure and the second a 172 group structure. The common source code can be used with PCs running both Windows 95 and NT, with a Linux based operating system and with UNIX based workstations. Comparisons of this version of the code to earlier evaluations with ENDF/B-V are provided, as well as comparisons with the new libraries. (author)

  16. Rapid installation of numerical models in multiple parent codes

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-10-01

    A set of "model interface guidelines", called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.
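    The abstract does not reproduce the guidelines themselves, so the following Python sketch is purely hypothetical: it only illustrates the general idea that a model package declares its input and state-storage requirements in a standardized way while the parent code owns the database. All names and fields are invented:

    # Hypothetical illustration only: the class name, fields and call signature
    # below are invented for this sketch and are not taken from MIG itself.
    class StressStrainModel:
        """A model declares its needs; the parent code allocates and saves them."""
        inputs = {"youngs_modulus": 200e9, "yield_stress": 250e6}  # name -> default
        state_vars = ("plastic_strain",)                           # per-cell storage

        def update(self, params, state, strain_increment):
            """Advance one step using arrays the parent code passes in."""
            stress = params["youngs_modulus"] * strain_increment
            if abs(stress) > params["yield_stress"]:
                state["plastic_strain"] += strain_increment        # crude plasticity
                stress = params["yield_stress"] if stress > 0 else -params["yield_stress"]
            return stress

    # Parent-code side (hydrocode, finite element code, ...): no model edits needed.
    model = StressStrainModel()
    params = dict(model.inputs)                   # user inputs saved by the parent
    state = {name: 0.0 for name in model.state_vars}
    print(model.update(params, state, 1e-3))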

  17. The influence of spring length on the physical parameters of simple harmonic motion

    International Nuclear Information System (INIS)

    Triana, C A; Fajardo, F

    2012-01-01

    The aim of this work is to analyse the influence of spring length on the simple harmonic motion of a spring-mass system. In particular, we study the effect of changing the spring length on the elastic constant k, the angular frequency ω and the damping factor γ of the oscillations. To characterize the behaviour of these variables we worked with a series of springs of seven different lengths, in which the elastic constant was found by means of the spring-elongation measurement and ω was obtained from the measurement of the oscillation period T of a suspended mass. The oscillatory movement was recorded using a force sensor and the γ value was determined by fitting the envelope of the oscillations. Graphical analysis of the results shows that k, ω and γ decrease when the natural spring length increases. This experiment can be performed with equipment normally found in undergraduate physics laboratories. In addition, through graphical analysis students can deduce some relationships between variables that determine the simple harmonic motion behaviour. (paper)
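    The three extraction steps are elementary: k = mg/x from the static elongation, ω = 2π/T from the period, and γ from a straight-line fit to the logarithm of the envelope A(t) = A0·exp(-γt). A minimal numerical sketch with invented measurement values (the 0.10 kg mass, 0.049 m elongation and peak amplitudes are illustrative, not the paper's data):

    import numpy as np

    g, mass = 9.81, 0.10                       # SI units; illustrative values only

    # k from the static elongation x of the suspended mass: k = m*g / x
    elongation = 0.049                         # m
    k = mass * g / elongation

    # omega from the measured oscillation period T (consistent with sqrt(k/m) here)
    T = 0.444                                  # s
    omega = 2 * np.pi / T

    # gamma from the decaying envelope: log A(t) at successive peaks has slope -gamma
    t_peaks = np.array([0.0, 0.444, 0.888, 1.332])
    a_peaks = np.array([0.050, 0.046, 0.042, 0.039])
    gamma = -np.polyfit(t_peaks, np.log(a_peaks), 1)[0]

    print(f"k = {k:.1f} N/m, omega = {omega:.1f} rad/s, gamma = {gamma:.2f} 1/s")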

  18. Joint Coding/Decoding for Multi-message HARQ

    OpenAIRE

    Benyouss, Abdellatif; Jabi, Mohammed; Le Treust, Maël; Szczecinski, Leszek

    2016-01-01

    In this work, we propose and investigate a new coding strategy devised to increase the throughput of hybrid ARQ (HARQ) transmission over block fading channels. In our proposition, the transmitter jointly encodes a variable number of bits for each round of HARQ. The parameters (rates) of this joint coding can vary and may be based on the negative acknowledgment (NACK) signals provided by the receiver or on the past (outdated) information about the channel states. The re...

  19. UEP Concepts in Modulation and Coding

    Directory of Open Access Journals (Sweden)

    Werner Henkel

    2010-01-01

    The first unequal error protection (UEP) proposals date back to the 1960s (Masnick and Wolf, 1967), but now, with the introduction of scalable video, UEP is developing into a key concept for the transport of multimedia data. The paper presents an overview of some new approaches realizing UEP properties in physical transport, especially multicarrier modulation, or with LDPC and Turbo codes. For multicarrier modulation, UEP bit-loading together with hierarchical modulation is described, allowing for an arbitrary number of classes, arbitrary SNR margins between the classes, and an arbitrary number of bits per class. In Turbo coding, pruning, as a counterpart of puncturing, is presented for flexible bit-rate adaptations, including tables with optimized pruning patterns. Bit- and/or check-irregular LDPC codes may be designed to provide UEP to their code bits. However, irregular degree distributions alone do not ensure UEP, and other necessary properties of the parity-check matrix for providing UEP are also pointed out. Pruning is also the means for constructing variable-rate LDPC codes for UEP, especially for controlling the check-node profile.

  20. Design of Packet-Based Block Codes with Shift Operators

    Directory of Open Access Journals (Sweden)

    Ilow Jacek

    2010-01-01

    This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of an erroneous single packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of information packets to construct redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. The decoding algorithm, based on syndrome decoding, to correct a single erroneous packet in a group of received packets is presented. The paper is equipped with examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.
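    The encoding idea can be sketched compactly: redundant packet j is the XOR of the information packets, with packet i shifted by i·j bit positions, so the shift operator plays the role a field element's power plays in an ordinary Vandermonde matrix. The Python sketch below models packets as big integers over GF(2) and uses left shifts as the shift operator (the paper specifies right arithmetic shifts, and its padding and decoding procedures are omitted here), so it is an illustration of the structure rather than the paper's exact code:

    def encode_redundant(packets, r):
        """Form r redundant packets from k uniform-length information packets.

        Redundant packet j (j = 0..r-1) is the XOR of the information packets,
        with packet i shifted by i*j bits: a Vandermonde pattern in the shift
        operator. Packets are modelled as big integers over GF(2)."""
        ints = [int.from_bytes(p, "big") for p in packets]
        redundant = []
        for j in range(r):
            acc = 0
            for i, x in enumerate(ints):
                acc ^= x << (i * j)            # shift stands in for alpha_i^j
            redundant.append(acc)
        return redundant

    info = [b"\x0f\x0f", b"\xf0\xf0", b"\x33\x33"]
    print([hex(x) for x in encode_redundant(info, 2)])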

  1. Investigation of the somaclonal and mutagen induced variability in barley by the application of protein and DNA markers

    International Nuclear Information System (INIS)

    Atanassov, A.; Todorovska, E.; Trifonova, A.; Petrova, M.; Marinova, E.; Gramatikova, M.; Valcheva, D.; Zaprianov, S.; Mersinkov, N.

    1998-01-01

    Barley, Hordeum vulgare L., is one of the most important crop species for Bulgaria. The characterisation of the genetic pool is of great necessity for the Bulgarian barley breeding programme, which is directed toward improving quantitative and qualitative traits. Molecular markers [protein, restriction fragment length polymorphisms (RFLP) and randomly amplified polymorphic DNA (RAPD)] have been applied to characterise the Bulgarian barley cultivars and their regenerants. The changes in DNA loci coding for 26S, 5.8S and 18S rRNA repeats, the C hordein locus and mitochondrial DNA organisation have been investigated. The potential for ribosomal DNA length polymorphism in Bulgarian barley cultivars appears to be limited to three different repeat lengths (10.2, 9.5 and 9.0 kb) and three plant rDNA phenotypes. Polymorphism was not observed in ribosomal DNA repeat units in somaclonal variants. Variation in the C hordein electrophoretic pattern was observed in one line from cultivar Jubiley. Analysis of the HorI locus reveals RFLPs in sequences coding for C hordeins in this line. Mitochondrial molecular markers are convenient for detection of DNA polymorphisms in the variant germplasm as well as in the somaclonal variants derived from it. Two lines from Ruen revealed polymorphic bands after hybridisation with a mitochondrial DNA probe. RAPD assays have been carried out using 20 different 10-mer primers. Heritable polymorphism in several tissue culture derived (TCD) lines was observed. The RAPD assay is a sensitive and representative approach to distinguish the variability created by tissue culture and mutagenesis.

  2. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732

  3. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
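    The kind of BER-under-Nakagami-fading evaluation reported above is straightforward to set up. The following minimal Monte Carlo sketch covers uncoded BPSK only (the LDPC coding and the C-MIMO energy model are not reproduced); it relies on the fact that the squared Nakagami-m amplitude is Gamma-distributed:

    import numpy as np

    def bpsk_ber_nakagami(snr_db, m=2.0, n_bits=200_000, seed=1):
        """Monte Carlo BER of uncoded BPSK over flat Nakagami-m fading."""
        rng = np.random.default_rng(seed)
        snr = 10 ** (snr_db / 10)
        bits = rng.integers(0, 2, n_bits)
        symbols = 1 - 2 * bits                     # 0 -> +1, 1 -> -1
        # Nakagami-m amplitude: sqrt of Gamma(shape=m, scale=1/m), so E[h^2] = 1.
        h = np.sqrt(rng.gamma(m, 1.0 / m, n_bits))
        noise = rng.normal(0.0, np.sqrt(1 / (2 * snr)), n_bits)
        decisions = (h * symbols + noise) < 0      # coherent detection, h > 0
        return np.mean(decisions != bits.astype(bool))

    for snr_db in (0, 5, 10, 15):
        print(snr_db, "dB ->", bpsk_ber_nakagami(snr_db))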

  4. Modeling Wood Fibre Length in Black Spruce (Picea mariana (Mill.) BSP) Based on Ecological Land Classification

    Directory of Open Access Journals (Sweden)

    Elisha Townshend

    2015-09-01

    Full Text Available Effective planning to optimize the forest value chain requires accurate and detailed information about the resource; however, estimates of the distribution of fibre properties on the landscape are largely unavailable prior to harvest. Our objective was to fit a model of the tree-level average fibre length related to ecosite classification and other forest inventory variables depicted at the landscape scale. A series of black spruce increment cores were collected at breast height from trees in nine different ecosite groups within the boreal forest of northeastern Ontario, and processed using standard techniques for maceration and fibre length measurement. Regression tree analysis and random forests were used to fit hierarchical classification models and find the most important predictor variables for the response variable area-weighted mean stem-level fibre length. Ecosite group was the best predictor in the regression tree. Longer mean fibre-length was associated with more productive ecosites that supported faster growth. The explanatory power of the model of fitted data was good; however, random forests simulations indicated poor generalizability. These results suggest the potential to develop localized models linking wood fibre length in black spruce to landscape-level attributes, and improve the sustainability of forest management by identifying ideal locations to harvest wood that has desirable fibre characteristics.
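    The modelling workflow (a regression tree for interpretability, a random forest with out-of-bag scoring for generalizability, and variable importances) can be sketched in a few lines with scikit-learn. The table below is synthetic and the column names are hypothetical stand-ins for the study's ecosite and inventory variables:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.tree import DecisionTreeRegressor

    # Hypothetical stand-level table: ecosite group (coded 0-8) plus inventory
    # variables; the response is area-weighted mean fibre length (mm).
    rng = np.random.default_rng(7)
    n = 300
    X = np.column_stack([
        rng.integers(0, 9, n),          # ecosite group
        rng.uniform(40, 120, n),        # stand age (yr)
        rng.uniform(5, 25, n),          # site index (m)
    ])
    y = 0.9 + 0.02 * X[:, 2] + 0.05 * (X[:, 0] > 5) + rng.normal(0, 0.05, n)

    tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
    forest = RandomForestRegressor(n_estimators=200, oob_score=True).fit(X, y)
    print("tree R^2 (in-sample):", tree.score(X, y))
    print("forest OOB R^2:", forest.oob_score_)     # a generalizability check
    print("variable importances:", forest.feature_importances_)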

  5. Structural code benchmarking for the analysis of impact response of nuclear material shipping casks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1984-01-01

    The Transportation Technology Center at Sandia National Laboratories has initiated a program to benchmark thermal and structural codes that are available to the nuclear material transportation community. The program consists of the following five phases: (1) code inventory and review, (2) development of a cask-like set of problems, (3) multiple independent numerical analyses of the problems, (4) transfer of information, and (5) performance of experiments to obtain data for comparison with the numerical analyses. This paper summarizes the results obtained by the independent numerical analyses. The analyses indicate the variability that can be expected both due to differences in user-controlled parameters and from code-to-code differences. The results show that in purely elastic analyses, differences can be attributed to user-controlled parameters. Model problems involving elastic/plastic material behavior and large deformations, however, have greater variability, with significant differences reported for implicit and explicit integration schemes in finite element programs. This variability demonstrates the need to obtain experimental data to properly benchmark codes utilizing elastic/plastic material models and large deformation capability.

  6. Extensions of the 3-dimensional plasma transport code E3D

    International Nuclear Information System (INIS)

    Runov, A.; Schneider, R.; Kasilov, S.; Reiter, D.

    2004-01-01

    One important aspect of modern fusion research is plasma edge physics. Fluid transport codes extending beyond the standard 2-D code packages like B2-Eirene or UEDGE are under development. A 3-dimensional plasma fluid code, E3D, based upon the Multiple Coordinate System Approach and a Monte Carlo integration procedure has been developed for general magnetic configurations including ergodic regions. These local magnetic coordinates lead to a full metric tensor which accurately accounts for all transport terms in the equations. Here, we discuss new computational aspects of the realization of the algorithm. The main limitation to the Monte Carlo code efficiency comes from the restriction on the parallel jump of advancing test particles, which must be small compared to the gradient length of the diffusion coefficient. In our problems, the parallel diffusion coefficient depends on both plasma and magnetic field parameters. Usually, the second dependence is much more critical. In order to allow long parallel jumps, this dependence can be eliminated in two steps: first, the longitudinal coordinate x3 of the local magnetic coordinates is modified in such a way that in the new coordinate system the metric determinant and contra-variant components of the magnetic field scale along the magnetic field with powers of the magnetic field modulus (as in Boozer flux coordinates). Second, specific weights of the test particles are introduced. As a result of the increased parallel jump length, the efficiency of the code is about two orders of magnitude better. (copyright 2004 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  7. Skew cyclic codes over F_q+uF_q+vF_q+uvF_q

    Directory of Open Access Journals (Sweden)

    Ting Yao

    2015-09-01

    In this paper, we study skew cyclic codes over the ring $R=F_q+uF_q+vF_q+uvF_q$, where $u^{2}=u,v^{2}=v,uv=vu$, $q=p^{m}$ and $p$ is an odd prime. We investigate the structural properties of skew cyclic codes over $R$ through a decomposition theorem. Furthermore, we give a formula for the number of skew cyclic codes of length $n$ over $R$.
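    Decomposition theorems for rings of this shape rest on orthogonal idempotents. The following short computation is standard ring theory, added here for the reader rather than quoted from the paper: with $u^2=u$, $v^2=v$ and $uv=vu$, set

        e_1 = uv, \quad e_2 = u - uv, \quad e_3 = v - uv, \quad e_4 = 1 - u - v + uv.

    Then $e_i^2 = e_i$, $e_i e_j = 0$ for $i \neq j$ and $e_1 + e_2 + e_3 + e_4 = 1$, so that

        R = e_1 R \oplus e_2 R \oplus e_3 R \oplus e_4 R \cong \mathbb{F}_q^4,

    and a code over $R$ splits into four component codes over $\mathbb{F}_q$, which is the shape of decomposition such structural results and counting formulas are built on.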

  8. Cross-sectional study on the weight and length of infants in the interior of the state of São Paulo, Brazil: associations with sociodemographic variables and breastfeeding.

    Science.gov (United States)

    Bernardi, Julia Laura Delbue; Jordão, Regina Esteves; Barros Filho, Antônio de Azevedo

    2009-07-01

    Increasing obesity is starting to occur among Brazilians. The aim of this study was to investigate the weight and length of children under two years of age in relation to sociodemographic variables and according to whether they were breastfed. Cross-sectional randomized study conducted in 2004-2005, based on the declaration of live births (SINASC) in Campinas, Brazil. 2,857 mothers of newborns were interviewed and answered a questionnaire seeking socioeconomic and breastfeeding information. The newborns' weights and lengths were measured at the end of the interviews and the body mass index was calculated. Percentiles (<15 and >85) and Z-scores (<-1 and >+1) were used for classification based on the new growth charts recommended by WHO (2006). The log-rank test, multiple linear regression and binomial test (Z) were used. The statistical significance level used was 5%. The predominant social level was class C. The median for exclusive breastfeeding was 90 days; 61.25% of the children were between P15 and P85 for body mass index and 61.12% for length, respectively. Children whose mothers studied for nine to eleven years and children whose mothers were unemployed presented lower weight. Children whose mothers worked in health-related professions presented lower length when correlated with breastfeeding. The breastfeeding, maternal schooling and maternal occupation levels had an influence on nutrition status and indicated that obesity is occurring in early childhood among the infants living in the municipality.

  9. Cross-sectional study on the weight and length of infants in the interior of the State of São Paulo, Brazil: associations with sociodemographic variables and breastfeeding

    Directory of Open Access Journals (Sweden)

    Julia Laura Delbue Bernardi

    CONTEXT AND OBJECTIVE: Increasing obesity is starting to occur among Brazilians. The aim of this study was to investigate the weight and length of children under two years of age in relation to sociodemographic variables and according to whether they were breastfed. DESIGN AND SETTING: Cross-sectional randomized study conducted in 2004-2005, based on the declaration of live births (SINASC) in Campinas, Brazil. METHODS: 2,857 mothers of newborns were interviewed and answered a questionnaire seeking socioeconomic and breastfeeding information. The newborns' weights and lengths were measured at the end of the interviews and the body mass index was calculated. Percentiles (<15 and >85) and Z-scores (<-1 and >+1) were used for classification based on the new growth charts recommended by WHO (2006). The log-rank test, multiple linear regression and binomial test (Z) were used. The statistical significance level used was 5%. RESULTS: The predominant social level was class C. The median for exclusive breastfeeding was 90 days; 61.25% of the children were between P15 and P85 for body mass index and 61.12% for length, respectively. Children whose mothers studied for nine to eleven years and children whose mothers were unemployed presented lower weight. Children whose mothers worked in health-related professions presented lower length when correlated with breastfeeding. CONCLUSION: The breastfeeding, maternal schooling and maternal occupation levels had an influence on nutrition status and indicated that obesity is occurring in early childhood among the infants living in the municipality.

  10. Controlled dense coding for continuous variables using three-particle entangled states

    CERN Document Server

    Jing Zhang; Kun Chi Peng (doi: 10.1103/PhysRevA.66.032318)

    2002-01-01

    A simple scheme to realize quantum controlled dense coding with bright tripartite entangled light generated from nondegenerate optical parametric amplifiers is proposed in this paper. The quantum channel between Alice and Bob is controlled by Claire. As a local oscillator and balanced homodyne detector are not needed, the proposed protocol is easy to realize experimentally. (15 refs)

  11. Effect of altering starting length and activation timing of muscle on fiber strain and muscle damage.

    Science.gov (United States)

    Butterfield, Timothy A; Herzog, Walter

    2006-05-01

    Muscle strain injuries are some of the most frequent injuries in sports and command a great deal of attention in an effort to understand their etiology. These injuries may be the culmination of a series of subcellular events accumulated through repetitive lengthening (eccentric) contractions during exercise, and they may be influenced by a variety of variables including fiber strain magnitude, peak joint torque, and starting muscle length. To assess the influence of these variables on muscle injury magnitude in vivo, we measured fiber dynamics and joint torque production during repeated stretch-shortening cycles in the rabbit tibialis anterior muscle, at short and long muscle lengths, while varying the timing of activation before muscle stretch. We found that a muscle subjected to repeated stretch-shortening cycles of constant muscle-tendon unit excursion exhibits significantly different joint torque and fiber strains when the timing of activation or starting muscle length is changed. In particular, measures of fiber strain and muscle injury were significantly increased by altering activation timing and increasing the starting length of the muscle. However, we observed differential effects on peak joint torque during the cyclic stretch-shortening exercise, as increasing the starting length of the muscle did not increase torque production. We conclude that altering activation timing and muscle length before stretch may influence muscle injury by significantly increasing fiber strain magnitude and that fiber dynamics is a more important variable than muscle-tendon unit dynamics and torque production in influencing the magnitude of muscle injury.

  12. Cytomegalovirus sequence variability, amplicon length, and DNase-sensitive non-encapsidated genomes are obstacles to standardization and commutability of plasma viral load results.

    Science.gov (United States)

    Naegele, Klaudia; Lautenschlager, Irmeli; Gosert, Rainer; Loginov, Raisa; Bir, Katia; Helanterä, Ilkka; Schaub, Stefan; Khanna, Nina; Hirsch, Hans H

    2018-04-22

    Cytomegalovirus (CMV) management post-transplantation relies on quantification in blood, but inter-laboratory and inter-assay variability impairs commutability. An international multicenter study demonstrated that variability is mitigated by standardizing plasma volumes, automating DNA extraction and amplification, and calibration to the 1st-CMV-WHO-International-Standard as in the FDA-approved Roche-CAP/CTM-CMV. However, Roche-CAP/CTM-CMV showed under-quantification and false-negative results in a quality assurance program (UK-NEQAS-2014). Our aims were to evaluate factors contributing to the quantification variability of CMV viral load and to develop optimized CMV-UL54-QNAT assays. The UL54 target of the UK-NEQAS-2014 variant was sequenced and compared to 329 available CMV GenBank sequences. Four Basel-CMV-UL54-QNAT assays of 361 bp, 254 bp, 151 bp, and 95 bp amplicons were developed that differed only in reverse primer positions. The assays were validated using plasmid dilutions and the UK-NEQAS-2014 sample, as well as 107 frozen and 69 prospectively collected plasma samples from transplant patients submitted for CMV QNAT, with and without DNase digestion prior to nucleic acid extraction. Eight of 43 mutations were identified as relevant in the UK-NEQAS-2014 target. All Basel-CMV-UL54 QNATs quantified the UK-NEQAS-2014 sample but revealed 10-fold increasing CMV loads as amplicon size decreased. The inverse correlation of amplicon size and viral load was confirmed using the 1st-WHO-International-Standard and patient samples. DNase pre-treatment reduced plasma CMV loads by >90%, indicating the presence of unprotected CMV genomic DNA. Sequence variability, amplicon length, and non-encapsidated genomes obstruct the standardization and commutability of CMV loads needed to develop thresholds for clinical research and management. Besides regular sequence surveys, matrix and extraction standardization, we propose developing reference calibrators using 100 bp amplicons.

  13. Measures of uncertainty, importance and sensitivity of the SEDA code

    International Nuclear Information System (INIS)

    Baron, J.; Caruso, A.; Vinate, H.

    1996-01-01

    The purpose of this work is the estimation of the uncertainty in the results of the SEDA code (Sistema de Evaluacion de Dosis en Accidentes) arising from its input data and parameters. The SEDA code has been developed by the Comision Nacional de Energia Atomica for the estimation of doses during emergencies in the vicinity of the Atucha and Embalse nuclear power plants. The user feeds the code with meteorological data, source terms and accident data (timing involved, release height, thermal content of the release, etc.). It is designed to be used during an emergency and to produce fast results that support decision making. The uncertainty in the results of the SEDA code is quantified in the present paper. This uncertainty is associated both with the data the user inputs to the code and with the uncertain parameters of the code's own models. The method consisted of the statistical characterization of the parameters and variables, assigning them adequate probability distributions. These distributions have been sampled with the Latin Hypercube Sampling method, which is a stratified multi-variable Monte Carlo technique. The code was run for each of the samples and, finally, a sample of results was obtained. These results have been characterized from the statistical point of view (obtaining their mean, most probable value, distribution shape, etc.) for several distances from the source. Finally, the Partial Correlation Coefficients and Standard Regression Coefficients techniques have been used to obtain the relative importance of each input variable, and the sensitivity of the code to its variations. The measures of importance and sensitivity have been obtained for several distances from the source and various cases of atmospheric stability, making comparisons possible. This work allows confidence to be placed in the results of the code, with their uncertainty attached, as a way of knowing the limits within which the results can vary in a real
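    The sampling step described above is easy to reproduce with SciPy's Latin Hypercube implementation. The sketch below is a minimal illustration of the propagate-and-characterize workflow; the parameter names, ranges and the toy dose model are invented stand-ins, not the SEDA inputs:

    import numpy as np
    from scipy.stats import qmc

    # Three uncertain inputs (illustrative names and ranges only):
    # release height (m), wind speed (m/s), dispersion multiplier (-).
    l_bounds = [10.0, 0.5, 0.5]
    u_bounds = [100.0, 10.0, 2.0]

    sampler = qmc.LatinHypercube(d=3, seed=42)
    unit_sample = sampler.random(n=200)                  # stratified in [0, 1)^3
    sample = qmc.scale(unit_sample, l_bounds, u_bounds)  # map to physical ranges

    def toy_dose_model(x):
        height, wind, disp = x
        return disp * 1.0e3 / (height * wind)            # stand-in for the real code

    # Run the model once per sampled row, then characterize the output statistically.
    doses = np.apply_along_axis(toy_dose_model, 1, sample)
    print("mean:", doses.mean(), "95th percentile:", np.percentile(doses, 95))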

  14. [Reproductive effort, fattening index and yield of Arca zebra (Filibranchia: Arcidae) by length and its association with environmental variables, Sucre, Venezuela].

    Science.gov (United States)

    Lista, María; Velásquez, Carlos; Prieto, Antulio; Longart, Yelipza

    2016-06-01

    Arca zebra is a mollusk of commercial value and a major socioeconomic fishery in Northeastern Venezuela. The present study aimed to evaluate the reproductive effort (RE), fattening index (FI) and yield (Y) in different size groups of A. zebra from Morro Chacopata, Venezuela. For this, monthly samplings were undertaken from June 2008 to June 2009, and the bivalves obtained were distributed in three length groups: I (30.1 to 50.0 mm), II (50.1 to 70.0 mm) and III (> 70.0 mm). Monthly RE, FI and Y were determined based on changes in the bivalves' volume of fresh meat (VFM), intervalvar volume (IV), dry gonad biomass (DW), dry biomass of the organism without gonad (DWs), fresh biomass of meat (FBM) and total biomass including shell (TBIS). Besides, environmental variables such as temperature, salinity, dissolved oxygen, total organic and inorganic seston and chlorophyll a were measured monthly. There was great variation in the DW between length groups (most relevant for II and III): it increased from June until late September 2008, decreased markedly in October 2008, recovered in the following months, and decreased again in January 2009, with a slight increase until May 2009; these changes were associated with variations in sea temperature. The weight of the gonad (DW) influenced the RE, FI and Y, as these reached their peaks in the months with higher gonadal production, indicating the influence of temperature on A. zebra reproduction.

  15. SYMBOL LEVEL DECODING FOR DUO-BINARY TURBO CODES

    Directory of Open Access Journals (Sweden)

    Yogesh Beeharry

    2017-05-01

    This paper investigates the performance of three different symbol level decoding algorithms for Duo-Binary Turbo codes. Explicit details of the computations involved in the three decoding techniques, and a computational complexity analysis are given. Simulation results with different couple lengths, code-rates, and QPSK modulation reveal that the symbol level decoding with bit-level information outperforms the symbol level decoding by 0.1 dB on average in the error floor region. Moreover, a complexity analysis reveals that symbol level decoding with bit-level information reduces the decoding complexity by 19.6 % in terms of the total number of computations required for each half-iteration as compared to symbol level decoding.

  16. Independent rate and temporal coding in hippocampal pyramidal cells.

    Science.gov (United States)

    Huxter, John; Burgess, Neil; O'Keefe, John

    2003-10-23

    In the brain, hippocampal pyramidal cells use temporal as well as rate coding to signal spatial aspects of the animal's environment or behaviour. The temporal code takes the form of a phase relationship to the concurrent cycle of the hippocampal electroencephalogram theta rhythm. These two codes could each represent a different variable. However, this requires the rate and phase to vary independently, in contrast to recent suggestions that they are tightly coupled, both reflecting the amplitude of the cell's input. Here we show that the time of firing and firing rate are dissociable, and can represent two independent variables: respectively the animal's location within the place field, and its speed of movement through the field. Independent encoding of location together with actions and stimuli occurring there may help to explain the dual roles of the hippocampus in spatial and episodic memory, or may indicate a more general role of the hippocampus in relational/declarative memory.

  17. Protograph LDPC Codes with Node Degrees at Least 3

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance in order to achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high degree node. Higher rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The condition where all constraints are combined corresponds to the highest rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus having node degree at least 3 at rate 1/2 guarantees that the linear minimum distance property is preserved for higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
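    A protograph is just a small base matrix that is expanded ("lifted") into a full parity-check matrix, typically by replacing each edge with a z x z circulant permutation. The sketch below shows a generic quasi-cyclic lifting with a toy base matrix; it is not one of the specific protographs of the paper (and parallel edges, which real protographs allow, are not handled):

    import numpy as np

    def lift_protograph(base, z):
        """Lift a protograph base matrix into a binary parity-check matrix.

        base[i][j] = -1 means no edge; s >= 0 places an identity of size z
        cyclically shifted by s columns."""
        m, n = base.shape
        H = np.zeros((m * z, n * z), dtype=np.uint8)
        eye = np.eye(z, dtype=np.uint8)
        for i in range(m):
            for j in range(n):
                s = base[i, j]
                if s >= 0:
                    H[i*z:(i+1)*z, j*z:(j+1)*z] = np.roll(eye, s, axis=1)
        return H

    # Toy base matrix (3 checks, 4 variable nodes), chosen arbitrarily.
    base = np.array([[0, 1, 2, -1],
                     [2, 0, 1,  3],
                     [1, 2, 0,  5]])
    H = lift_protograph(base, z=8)
    print(H.shape, "column weights:", H.sum(axis=0))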

  18. Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.

    Science.gov (United States)

    Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro

    2011-09-26

    The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10^-6. With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10^-2. Potential issues like phase ambiguity and coding length are also discussed for implementing LDPC in current coherent optical systems. © 2011 Optical Society of America

  19. Decoy state method for quantum cryptography based on phase coding into faint laser pulses

    Science.gov (United States)

    Kulik, S. P.; Molotkov, S. N.

    2017-12-01

    We discuss the photon number splitting (PNS) attack in systems of quantum cryptography with phase coding. It is shown that this attack, as well as the structural equations of the PNS attack for phase encoding, differs physically from the analogous attack applied to polarization coding. As far as we know, in all practical work to date, experimental data for phase coding have been processed using formulas derived for polarization coding. This can lead to inadequate results for the length of the secret key. These calculations are important for the correct interpretation of the results, especially where the secrecy criterion of quantum cryptography is concerned.

  20. Implementation of a tree algorithm in MCNP code for nuclear well logging applications

    Energy Technology Data Exchange (ETDEWEB)

    Li Fusheng, E-mail: fusheng.li@bakerhughes.com [Baker Hughes Incorporated, 2001 Rankin Rd. Houston, TX 77073-5101 (United States); Han Xiaogang [Baker Hughes Incorporated, 2001 Rankin Rd. Houston, TX 77073-5101 (United States)

    2012-07-15

    The goal of this paper is to develop some modeling capabilities that are missing in the current MCNP code. These missing capabilities can greatly help with certain nuclear tool designs, such as a nuclear lithology/mineralogy spectroscopy tool. The new capabilities developed in this paper include the following: zone tally, neutron interaction tally, gamma ray index tally and enhanced pulse-height tally. The patched MCNP code can also be used to compute the neutron slowing-down length and the thermal neutron diffusion length. Highlights: • Tree-structure programming is suitable for Monte-Carlo based particle tracking. • An enhanced pulse-height tally is developed for oil-well logging tool simulation. • Neutron interaction and gamma ray index tallies are added for geochemical logging.

  1. The PASC-3 code system and the UNIPASC environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.

    1991-08-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and its associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection, complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219-group cross section library derived from JEF-1, for which some benchmark results are presented. With the addition of the UNIPASC work environment the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values, which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  2. PREREM: an interactive data preprocessing code for INREM II. Part I: user's manual. Part II: code structure

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, M.T.; Fields, D.E.

    1981-05-01

    PREREM is an interactive computer code developed as a data preprocessor for the INREM-II (Killough, Dunning, and Pleasant, 1978a) internal dose program. PREREM is intended to provide easy access to current and self-consistent nuclear decay and radionuclide-specific metabolic data sets. Provision is made for revision of metabolic data, and the code is intended for both production and research applications. Documentation for the code is in two parts. Part I is a user's manual which emphasizes interpretation of program prompts and choice of user input. Part II stresses internal structure and flow of program control and is intended to assist the researcher who wishes to revise or modify the code or add to its capabilities. PREREM is written for execution on a Digital Equipment Corporation PDP-10 System and much of the code will require revision before it can be run on other machines. The source program length is 950 lines (116 blocks) and computer core required for execution is 212 K bytes. The user must also have sufficient file space for metabolic and S-factor data sets. Further, 64 100 K byte blocks of computer storage space are required for the nuclear decay data file. Computer storage space must also be available for any output files produced during the PREREM execution. 9 refs., 8 tabs.

  3. The Analysis of SBWR Critical Power Bundle Using Cobrag Code

    Directory of Open Access Journals (Sweden)

    Yohannes Sardjono

    2013-03-01

    The coolant mechanism of the SBWR is similar to that of the Dodewaard Nuclear Power Plant (NPP) in the Netherlands, which first went critical in 1968. Both plants are cooled by natural convection. This coolant concept is closely tied to several fuel bundle design parameters, especially fuel bundle length, core pressure drop and core flow rate, as well as the critical power of the bundle. The analysis was carried out using the COBRAG computer code, which is proprietary to the GE Company. Basically, COBRAG is a tool for solving the compressible three-dimensional, two-fluid, three-field equations of two-phase flow. The three fields are the vapor field, the continuous liquid field, and the liquid drop field. The code has been applied to model flow and heat transfer within the reactor core; this volume describes the finite-volume equations and the numerical solution methods used to solve them. The analysis considered the same parameter set throughout: inlet subcooling of 20 BTU/lbm and 40 BTU/lbm, a pressure of 1000 psi, an R-factor of 1.038, and mass fluxes of 0.5 Mlb/hr.ft2, 0.75 Mlb/hr.ft2, 1.00 Mlb/hr.ft2 and 1.25 Mlb/hr.ft2. These conditions are based on the operating history of several cell fuel bundle lines at GE Nuclear Energy. According to the results, it can be concluded that the SBWR critical power bundle is 10.5% less than the current BWR critical power bundle, with a length reduction from 12 ft to 9 ft.

  4. An axisymmetric gravitational collapse code

    Energy Technology Data Exchange (ETDEWEB)

    Choptuik, Matthew W [CIAR Cosmology and Gravity Program, Department of Physics and Astronomy, University of British Columbia, Vancouver BC, V6T 1Z1 (Canada); Hirschmann, Eric W [Department of Physics and Astronomy, Brigham Young University, Provo, UT 84604 (United States); Liebling, Steven L [Southampton College, Long Island University, Southampton, NY 11968 (United States); Pretorius, Frans [Theoretical Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States)

    2003-05-07

    We present a new numerical code designed to solve the Einstein field equations for axisymmetric spacetimes. The long-term goal of this project is to construct a code that will be capable of studying many problems of interest in axisymmetry, including gravitational collapse, critical phenomena, investigations of cosmic censorship and head-on black-hole collisions. Our objective here is to detail the (2+1)+1 formalism we use to arrive at the corresponding system of equations and the numerical methods we use to solve them. We are able to obtain stable evolution, despite the singular nature of the coordinate system on the axis, by enforcing appropriate regularity conditions on all variables and by adding numerical dissipation to hyperbolic equations.

  5. An axisymmetric gravitational collapse code

    International Nuclear Information System (INIS)

    Choptuik, Matthew W; Hirschmann, Eric W; Liebling, Steven L; Pretorius, Frans

    2003-01-01

    We present a new numerical code designed to solve the Einstein field equations for axisymmetric spacetimes. The long-term goal of this project is to construct a code that will be capable of studying many problems of interest in axisymmetry, including gravitational collapse, critical phenomena, investigations of cosmic censorship and head-on black-hole collisions. Our objective here is to detail the (2+1)+1 formalism we use to arrive at the corresponding system of equations and the numerical methods we use to solve them. We are able to obtain stable evolution, despite the singular nature of the coordinate system on the axis, by enforcing appropriate regularity conditions on all variables and by adding numerical dissipation to hyperbolic equations

  6. Design of Packet-Based Block Codes with Shift Operators

    Directory of Open Access Journals (Sweden)

    Jacek Ilow

    2010-01-01

    This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of an erroneous single packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of k information packets to construct r redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of k information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. The decoding algorithm, based on syndrome decoding, to correct a single erroneous packet in a group of n=k+r received packets is presented. The paper is equipped with examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.

  7. Highly parallel line-based image coding for many cores.

    Science.gov (United States)

    Peng, Xiulian; Xu, Jizheng; Zhou, You; Wu, Feng

    2012-01-01

    Computers are evolving from dual-core and quad-core processors to ones with tens or even hundreds of cores. Multimedia, as one of the most important applications on computers, has an urgent need for parallel coding algorithms for compression. Taking intraframe/image coding as a starting point, this paper proposes a pure line-by-line coding scheme (LBLC) to meet that need. In LBLC, an input image is processed line by line sequentially, and each line is divided into small fixed-length segments. The compression of all segments, from prediction to entropy coding, is completely independent and concurrent across many cores. Results on a general-purpose computer show that our scheme achieves a 13.9 times speedup with 15 cores at the encoder and a 10.3 times speedup at the decoder. Ideally, such a near-linear speedup with the number of cores can be kept for more than 100 cores. In addition to the high parallelism, the proposed scheme can perform comparably to or even better than the H.264 high profile above middle bit rates. At near-lossless coding, it outperforms H.264 by more than 10 dB. At lossless coding, up to 14% bit-rate reduction is observed compared with H.264 lossless coding at the high 4:4:4 profile.
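    The parallelism pattern, though not the codec itself, is easy to sketch: split each line into fixed-length segments and compress every segment independently across worker processes. In the Python illustration below, zlib stands in for the scheme's prediction-plus-entropy-coding stage, so this is a demonstration of the concurrency structure only:

    import zlib
    from concurrent.futures import ProcessPoolExecutor

    SEGMENT = 64  # fixed segment length (bytes stand in for pixels)

    def split_line(line, seg=SEGMENT):
        return [line[i:i + seg] for i in range(0, len(line), seg)]

    def compress_segment(seg_bytes):
        # Stand-in for prediction + entropy coding of one segment.
        return zlib.compress(seg_bytes, level=6)

    def encode_image(lines, workers=4):
        """Compress all segments of all lines concurrently and independently."""
        segments = [s for line in lines for s in split_line(line)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(compress_segment, segments))

    if __name__ == "__main__":
        image = [bytes((x * y) % 256 for x in range(512)) for y in range(16)]
        coded = encode_image(image)
        print(len(coded), "segments,", sum(map(len, coded)), "bytes total")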

  8. Differences in caregiver daily impression by sex, education and career length.

    Science.gov (United States)

    Ae, Ryusuke; Kojo, Takao; Kotani, Kazuhiko; Okayama, Masanobu; Kuwabara, Masanari; Makino, Nobuko; Aoyama, Yasuko; Sano, Takashi; Nakamura, Yosikazu

    2017-03-01

    We previously proposed the concept of caregiver daily impression (CDI) as a practical tool for emergency triage. We herein assessed how CDI varies by sex, education and career length by determining CDI scores as quantitative outcome measures. We carried out a cross-sectional study using a self-reported questionnaire among caregivers in 20 long-term care facilities in Hyogo, Japan. A total of 10 CDI variables measured participants' previous experience of emergency transfers using a scale from 0 to 10. The resulting total was defined as the CDI score. We hypothetically considered that higher scores indicated greater caregiver focus. The CDI scores were compared by sex, education and career length using analysis of covariance. A total of 601 personal caregivers were evaluated (mean age 36.7 years; 36% men). The mean career length was 6.9 years, with the following groupings: 1-4 years (38%), 5-9 years (37%) and >10 years (24%). After adjustment for sex and education, the CDI scores for the variable "poor eye contact" significantly differed between caregivers with ≥10 and <10 years of career length. Sex-related differences in CDI might also exist. Geriatr Gerontol Int 2016; 17: 410-415. © 2016 Japan Geriatrics Society.

  9. Spherical conducting probes in finite Debye length plasmas and E x B fields

    International Nuclear Information System (INIS)

    Patacchini, Leonardo; Hutchinson, Ian H

    2011-01-01

    The particle-in-cell code SCEPTIC3D (Patacchini and Hutchinson 2010 Plasma Phys. Control. Fusion 52 035005) is used to calculate the interaction of a transversely flowing magnetized plasma with a negatively charged spherical conductor, in the entire range of magnetization and Debye length. The results allow the first fully self-consistent analysis of probe operation where neither the ion Larmor radius nor the Debye length is approximated by zero or infinity. An important transition in plasma structure occurs when the Debye length exceeds the average ion Larmor radius, as the sphere starts to shield the convective electric field driving the flow. A remarkable result is that in those conditions, the ion current can significantly exceed the unmagnetized orbital motion limit. When both the Debye length and the Larmor radius are small compared with the probe dimensions, however, their ratio does not affect the collection pattern significantly, and Mach-probe calibration methods derived in the context of quasineutral strongly magnetized plasmas (Patacchini and Hutchinson 2009 Phys. Rev. E 80 036403) hold for Debye lengths and ion Larmor radii smaller than about 10% of the probe radius.

  10. Integrative annotation of 21,037 human genes validated by full-length cDNA clones.

    Directory of Open Access Journals (Sweden)

    Tadashi Imanishi

    2004-06-01

    Full Text Available The human genome sequence defines our inherent biological potential; the realization of the biology encoded therein requires knowledge of the function of each gene. Currently, our knowledge in this area is still limited. Several lines of investigation have been used to elucidate the structure and function of the genes in the human genome. Even so, gene prediction remains a difficult task, as the varieties of transcripts of a gene may vary to a great extent. We thus performed an exhaustive integrative characterization of 41,118 full-length cDNAs that capture the gene transcripts as complete functional cassettes, providing an unequivocal report of structural and functional diversity at the gene level. Our international collaboration has validated 21,037 human gene candidates by analysis of high-quality full-length cDNA clones through curation using unified criteria. This led to the identification of 5,155 new gene candidates. It also manifested the most reliable way to control the quality of the cDNA clones. We have developed a human gene database, called the H-Invitational Database (H-InvDB; http://www.h-invitational.jp/). It provides the following: integrative annotation of human genes, description of gene structures, details of novel alternative splicing isoforms, non-protein-coding RNAs, functional domains, subcellular localizations, metabolic pathways, predictions of protein three-dimensional structure, mapping of known single nucleotide polymorphisms (SNPs), identification of polymorphic microsatellite repeats within human genes, and comparative results with mouse full-length cDNAs. The H-InvDB analysis has shown that up to 4% of the human genome sequence (National Center for Biotechnology Information build 34 assembly) may contain misassembled or missing regions. We found that 6.5% of the human gene candidates (1,377 loci) did not have a good protein-coding open reading frame, of which 296 loci are strong candidates for non-protein-coding RNA.

  11. Interannual variations in length-of-day (LOD) as a tool to assess climate variability and climate change

    Science.gov (United States)

    Lehmann, E.

    2016-12-01

    On interannual time scales the atmosphere significantly affects fluctuations in the geodetic quantity of length-of-day (LOD). This effect is directly proportional to perturbations in the relative angular momentum of the atmosphere (AAM) computed from zonal winds. During El Niño events tropospheric westerlies increase due to elevated sea surface temperatures (SST) in the Pacific, inducing peak anomalies in relative AAM and, correspondingly, in LOD. However, individual El Niño events affect LOD variations with differing strength, and the causes of this varying effect are not yet clear. Here, we investigate the LOD-El Niño relationship in the 20th and 21st centuries (1982-2100) to assess whether LOD can be used as a geophysical tool for evaluating variability and change in a future climate. In our analysis we applied a windowed discrete Fourier transform to all de-seasonalized data to remove climatic signals outside of the El Niño frequency band. LOD (data: IERS) was related in space and time to relative AAM and SSTs (data: ERA-40 reanalysis, IPCC ECHAM05-OM1 20C, A1B). Results from mapped Pearson correlation coefficients and time-frequency behavior analysis identified a teleconnection pattern that we term the EN≥65%-index. The EN≥65%-index prescribes a significant change in variation in length-of-day of +65% or more, related to (1) SST anomalies of >2 °C in the Pacific Niño region (160°E-80°W, 5°S-5°N), (2) corresponding stratospheric warming anomalies of the quasi-biennial oscillation (QBO), and (3) strong westerly winds in the lower equatorial stratosphere. In our analysis we show that the coupled atmosphere-ocean conditions prescribed in the EN≥65%-index apply to the extreme El Niño events of 1982/83 and 1997/98, and to 75% of all El Niño events in the last third of the 21st century. In that period the EN≥65%-index describes a projected altered base state of the equatorial Pacific that shows almost continuous El Niño conditions under climate warming.

  12. KAMCCO, a reactor physics Monte Carlo neutron transport code

    International Nuclear Information System (INIS)

    Arnecke, G.; Borgwaldt, H.; Brandl, V.; Lalovic, M.

    1976-06-01

    KAMCCO is a 3-dimensional reactor Monte Carlo code for fast neutron physics problems. Two options are available for the solution of 1) the inhomogeneous time-dependent neutron transport equation (census time scheme), and 2) the homogeneous static neutron transport equation (generation cycle scheme). The user defines the desired output, e.g. estimates of reaction rates or neutron flux integrated over specified volumes in phase space and time intervals. Such primary quantities can be arbitrarily combined, also ratios of these quantities can be estimated with their errors. The Monte Carlo techniques are mostly analogue (exceptions: Importance sampling for collision processes, ELP/MELP, Russian roulette and splitting). Estimates are obtained from the collision and track length estimators. Elastic scattering takes into account first order anisotropy in the center of mass system. Inelastic scattering is processed via the evaporation model or via the excitation of discrete levels. For the calculation of cross sections, the energy is treated as a continuous variable. They are computed by a) linear interpolation, b) from optionally Doppler broadened single level Breit-Wigner resonances or c) from probability tables (in the region of statistically distributed resonances). (orig.)

  13. The Chain-Length Distribution in Subcritical Systems

    International Nuclear Information System (INIS)

    Nolen, Steven Douglas

    2000-01-01

    The individual fission chains that appear in any neutron multiplying system provide a means, via neutron noise analysis, to unlock a wealth of information regarding the nature of the system. This work begins by determining the probability density distributions for fission chain lengths in zero-dimensional systems over a range of prompt neutron multiplication constant (K) values. This section is followed by showing how the integral representation of the chain-length distribution can be used to obtain an estimate of the system's subcritical prompt multiplication (MP). The lifetime of the chains is then used to provide a basis for determining whether a neutron noise analysis will be successful in assessing the neutron multiplication constant, k, of the system in the presence of a strong intrinsic source. A Monte Carlo transport code, MC++, is used to model the evolution of the individual fission chains and to determine how they are influenced by spatial effects. The dissertation concludes by demonstrating how experimental validation of certain global system parameters by neutron noise analysis may be precluded in situations in which the system K is relatively low and in which realistic detector efficiencies are simulated

  14. The Chain-Length Distribution in Subcritical Systems

    Energy Technology Data Exchange (ETDEWEB)

    Nolen, Steven Douglas [Texas A & M Univ., College Station, TX (United States)

    2000-06-01

    The individual fission chains that appear in any neutron multiplying system provide a means, via neutron noise analysis, to unlock a wealth of information regarding the nature of the system. This work begins by determining the probability density distributions for fission chain lengths in zero-dimensional systems over a range of prompt neutron multiplication constant (K) values. This section is followed by showing how the integral representation of the chain-length distribution can be used to obtain an estimate of the system's subcritical prompt multiplication (MP). The lifetime of the chains is then used to provide a basis for determining whether a neutron noise analysis will be successful in assessing the neutron multiplication constant, k, of the system in the presence of a strong intrinsic source. A Monte Carlo transport code, MC++, is used to model the evolution of the individual fission chains and to determine how they are influenced by spatial effects. The dissertation concludes by demonstrating how experimental validation of certain global system parameters by neutron noise analysis may be precluded in situations in which the system K is relatively low and in which realistic detector efficiencies are simulated.

  15. A multi-level code for metallurgical effects in metal-forming processes

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, P.A.; Silling, S.A. [Sandia National Labs., Albuquerque, NM (United States). Computational Physics and Mechanics Dept.; Hughes, D.A.; Bammann, D.J.; Chiesa, M.L. [Sandia National Labs., Livermore, CA (United States)

    1997-08-01

    The authors present the final report on a Laboratory-Directed Research and Development (LDRD) project, A Multi-level Code for Metallurgical Effects in Metal-Forming Processes, performed during the fiscal years 1995 and 1996. The project focused on the development of new modeling capabilities for simulating forging and extrusion processes that typically display phenomenology occurring on two different length scales. In support of model fitting and code validation, ring compression and extrusion experiments were performed on 304L stainless steel, a material of interest in DOE nuclear weapons applications.

  16. Construction of Protograph LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
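
    A toy illustration of the check-splitting operation described above, under the usual convention that a protomatrix entry counts the parallel edges between a check and a variable node. The base protograph, the splitting choice, and the column partition below are arbitrary placeholders; the real construction also optimizes decoding thresholds, which this sketch ignores.

```python
import numpy as np

def split_check(B, row, left_cols):
    """Split check `row` of protomatrix B into two checks joined by a new
    degree-2 variable node (one new column with a single edge to each new
    check). `left_cols` lists the variable columns kept on the first check."""
    m, n = B.shape
    top = np.zeros(n, dtype=int)
    bottom = B[row].copy()
    for c in left_cols:
        top[c] = B[row, c]
        bottom[c] = 0
    B2 = np.vstack([np.delete(B, row, axis=0), top, bottom])
    link = np.zeros((B2.shape[0], 1), dtype=int)
    link[-2:] = 1                      # the degree-2 node joins the two checks
    return np.hstack([B2, link])

# Toy high-rate protograph: one check, four degree-3 variable nodes (rate 3/4).
B = np.array([[3, 3, 3, 3]])
B_low = split_check(B, row=0, left_cols=[0, 1])
print(B_low)   # two checks, five variable nodes: a lower-rate (3/5) protograph
```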

  17. The length of stay determinants for sun-and-sand tourism: An application for the Region of Murcia

    Directory of Open Access Journals (Sweden)

    Sánchez García, Juan Francisco

    2008-01-01

    Full Text Available While tourist arrivals increase annually in Spain, average real tourist expenditure has decreased significantly over the last few years, with important effects on tourism revenues. The process is clearly driven by the reduction of the length of stay of tourists at destinations, but surprisingly this variable has received little attention in the literature. We estimate a length of stay function for sun-and-sand tourists visiting the Region of Murcia over the period 2002-2006 using count data models. Our results show that both tourists' personal and family characteristics and economic variables (budget restrictions, income and prices) are key factors in determining the duration of the stay. Quantitative identification of the determinants of a tourist's length of stay could provide important guidelines for designing policies aimed at influencing length of stay at seaside tourist destinations.

  18. Sperm length, sperm storage and mating system characteristics in bumblebees

    DEFF Research Database (Denmark)

    Baer, Boris; Schmid-Hempel, Paul; Høeg, Jens Thorvald

    2003-01-01

    …long-term storage of sperm, using three bumblebee species with different mating systems as models. We show that individual males produce only one size-class of sperm, but that sperm length is highly variable among brothers, among unrelated conspecific males, and among males of different species. Males of Bombus...

  19. Population coding in sparsely connected networks of noisy neurons

    OpenAIRE

    Tripp, Bryan P.; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and be...

  20. Using finite mixture models in thermal-hydraulics system code uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Carlos, S., E-mail: scarlos@iqn.upv.es [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Sánchez, A. [Department d’Estadística Aplicada i Qualitat, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Ginestar, D. [Department de Matemàtica Aplicada, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain); Martorell, S. [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)

    2013-09-15

    Highlights: • Best-estimate code simulations need uncertainty quantification. • The output variables can present multimodal probability distributions. • The analysis of multimodal distributions is performed using finite mixture models. • Two methods to reconstruct the output variable probability distribution are used. -- Abstract: Nuclear Power Plant safety analysis is mainly based on the use of best estimate (BE) codes that predict the plant behavior under normal or accidental conditions. As the BE codes introduce uncertainties due to uncertainty in input parameters and modeling, it is necessary to perform uncertainty assessment (UA), and eventually sensitivity analysis (SA), of the results obtained. These analyses are part of the appropriate treatment of uncertainties imposed by current regulation based on the adoption of the best estimate plus uncertainty (BEPU) approach. The most popular approach for uncertainty assessment, based on Wilks’ method, obtains a tolerance/confidence interval, but it does not completely characterize the output variable behavior, which is required for an extended UA and SA. However, the development of standard UA and SA imposes a high computational cost due to the large number of simulations needed. In order to obtain more information about the output variable and, at the same time, to keep the computational cost as low as possible, there has been a recent shift toward developing metamodels (models of a model), or surrogate models, that approximate or emulate complex computer codes. In this way, there exist different techniques to reconstruct the probability distribution using the information provided by a sample of values, such as finite mixture models. In this paper, the Expectation Maximization and the k-means algorithms are used to obtain a finite mixture model that reconstructs the output variable probability distribution from data obtained with RELAP-5 simulations. Both methodologies have been applied to a separated…
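
    As a hedged illustration of the two routes mentioned above (not the paper's RELAP-5 data), the sketch below fits a two-component Gaussian mixture to a synthetic bimodal sample, once with scikit-learn's EM implementation and once by deriving the mixture from a k-means partition; all numbers are made up.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic bimodal "code output" standing in for the RELAP-5 samples.
x = np.concatenate([rng.normal(550.0, 5.0, 70),
                    rng.normal(620.0, 8.0, 30)]).reshape(-1, 1)

# Route 1: EM fits component weights, means and variances directly.
gm = GaussianMixture(n_components=2, random_state=0).fit(x)
print("EM weights:", gm.weights_, "means:", gm.means_.ravel())

# Route 2: k-means partitions the sample; per-cluster moments give the mixture.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x)
for k in range(2):
    xk = x[labels == k]
    print(f"cluster {k}: weight={len(xk)/len(x):.2f}, "
          f"mean={xk.mean():.1f}, std={xk.std():.1f}")
```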

  1. High-Fidelity Coding with Correlated Neurons

    Science.gov (United States)

    da Silveira, Rava Azeredo; Berry, Michael J.

    2014-01-01

    Positive correlations in the activity of neurons are widely observed in the brain. Previous studies have shown these correlations to be detrimental to the fidelity of population codes, or at best marginally favorable compared to independent codes. Here, we show that positive correlations can enhance coding performance by astronomical factors. Specifically, the probability of discrimination error can be suppressed by many orders of magnitude. Likewise, the number of stimuli encoded—the capacity—can be enhanced more than tenfold. These effects do not necessitate unrealistic correlation values, and can occur for populations with a few tens of neurons. We further show that both effects benefit from heterogeneity commonly seen in population activity. Error suppression and capacity enhancement rest upon a pattern of correlation. Tuning of one or several effective parameters can yield a limit of perfect coding: the corresponding pattern of positive correlation leads to a ‘lock-in’ of response probabilities that eliminates variability in the subspace relevant for stimulus discrimination. We discuss the nature of this pattern and we suggest experimental tests to identify it. PMID:25412463

  2. Code division multiple-access techniques in optical fiber networks. II - Systems performance analysis

    Science.gov (United States)

    Salehi, Jawad A.; Brackett, Charles A.

    1989-08-01

    A technique based on optical orthogonal codes was presented by Salehi (1989) to establish a fiber-optic code-division multiple-access (FO-CDMA) communications system. The results are used to derive the bit error rate of the proposed FO-CDMA system as a function of data rate, code length, code weight, number of users, and receiver threshold. The performance characteristics for a variety of system parameters are discussed. A means of reducing the effective multiple-access interference signal by placing an optical hard-limiter at the front end of the desired optical correlator is presented. Performance calculations are shown for the FO-CDMA with an ideal optical hard-limiter, and it is shown that using an optical hard-limiter would, in general, improve system performance.
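
    The abstract does not reproduce the expression, but the textbook chip-level approximation for an OOC system without a hard-limiter makes the parameter dependence concrete: with code length F and weight w, a single interferer causes a threshold hit with probability q = w²/(2F), and an error requires at least Th of the N-1 interferers to hit. The numbers below are illustrative, and this simplified bound omits the hard-limiter analyzed in the paper.

```python
from math import comb

def ooc_ber(F, w, N, Th):
    """Chip-level union bound for an OOC CDMA system without a hard-limiter:
    q is the probability that one interfering user hits a marked chip."""
    q = w * w / (2.0 * F)
    return 0.5 * sum(comb(N - 1, i) * q**i * (1 - q)**(N - 1 - i)
                     for i in range(Th, N))

# Example: length-1000, weight-5 codes, threshold w, 10 simultaneous users.
print(f"BER ≈ {ooc_ber(F=1000, w=5, N=10, Th=5):.2e}")
```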

  3. Coding conventions and principles for a National Land-Change Modeling Framework

    Science.gov (United States)

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  4. FAST: a three-dimensional time-dependent FEL simulation code

    International Nuclear Information System (INIS)

    Saldin, E.L.; Schneidmiller, E.A.; Yurkov, M.V.

    1999-01-01

    In this report we briefly describe the three-dimensional, time-dependent FEL simulation code FAST. The equations of motion of the particles and Maxwell's equations are solved simultaneously taking into account the slippage effect. Radiation fields are calculated using an integral solution of Maxwell's equations. A special technique has been developed for fast calculations of the radiation field, drastically reducing the required CPU time. As a result, the developed code allows one to use a personal computer for time-dependent simulations. The code allows one to simulate the radiation from the electron bunch of any transverse and longitudinal bunch shape; to simulate simultaneously an external seed with superimposed noise in the electron beam; to take into account energy spread in the electron beam and the space charge fields; and to simulate a high-gain, high-efficiency FEL amplifier with a tapered undulator. It is important to note that there are no significant memory limitations in the developed code and an electron bunch of any length can be simulated

  5. Construction and Analysis of a Novel 2-D Optical Orthogonal Codes Based on Modified One-coincidence Sequence

    Science.gov (United States)

    Ji, Jianhua; Wang, Yanfen; Wang, Ke; Xu, Ming; Zhang, Zhipeng; Yang, Shuwen

    2013-09-01

    A new two-dimensional OOC (optical orthogonal codes) named PC/MOCS is constructed, using PC (prime code) for time spreading and MOCS (modified one-coincidence sequence) for wavelength hopping. Compared with PC/PC, the number of wavelengths for PC/MOCS is not limited to a prime number. Compared with PC/OCS, the length of MOCS need not be expanded to the same length of PC. PC/MOCS can be constructed flexibly, and also can use available wavelengths effectively. Theoretical analysis shows that PC/MOCS can reduce the bit error rate (BER) of OCDMA system, and can support more users than PC/PC and PC/OCS.
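
    For reference, a sketch of the classical one-dimensional prime-code construction that supplies the time-spreading component; the MOCS wavelength-hopping layer is not reproduced here. Over GF(p) there are p codewords of length p² and weight p, and any two distinct codewords overlap in exactly one chip position at zero shift.

```python
def prime_code(p):
    """Classical prime code over GF(p): p codewords of length p*p and
    weight p. Codeword i places a pulse in chip j*p + (i*j mod p) for
    each block j of p chips."""
    codes = []
    for i in range(p):
        word = [0] * (p * p)
        for j in range(p):
            word[j * p + (i * j) % p] = 1
        codes.append(word)
    return codes

for w in prime_code(5):
    print("".join(map(str, w)))   # 5 codewords; any two share one pulse slot
```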

  6. Full-length cDNA sequences from Rhesus monkey placenta tissue: analysis and utility for comparative mapping

    Directory of Open Access Journals (Sweden)

    Lee Sang-Rae

    2010-07-01

    Full Text Available Abstract Background Rhesus monkeys (Macaca mulatta) are widely used as experimental animals in biomedical research and are closely related to other laboratory macaques, such as cynomolgus monkeys (Macaca fascicularis), and to humans, sharing a last common ancestor from about 25 million years ago. Although rhesus monkeys have been studied extensively under field and laboratory conditions, research has been limited by the lack of genetic resources. The present study generated placenta full-length cDNA libraries, characterized the resulting expressed sequence tags, and described their utility for comparative mapping with human RefSeq mRNA transcripts. Results From rhesus monkey placenta full-length cDNA libraries, 2000 full-length cDNA sequences were determined and 1835 rhesus placenta cDNA sequences longer than 100 bp were collected. These sequences were annotated based on homology to human genes. Homology search against human RefSeq mRNAs revealed that our collection included the sequences of 1462 putative rhesus monkey genes. Moreover, we identified 207 genes containing exon alterations in the coding region and the untranslated region of rhesus monkey transcripts, despite the highly conserved structure of the coding regions. Approximately 10% (187) of all full-length cDNA sequences did not represent any public human RefSeq mRNAs. Intriguingly, two rhesus monkey specific exons derived from the transposable elements AluYRa2 (SINE family) and MER11B (LTR family) were also identified. Conclusion The 1835 rhesus monkey placenta full-length cDNA sequences described here could expand genomic resources and information on rhesus monkeys. This increased genomic information will greatly contribute to the development of evolutionary biology and biomedical research.

  7. Recent improvements of the TNG statistical model code

    International Nuclear Information System (INIS)

    Shibata, K.; Fu, C.Y.

    1986-08-01

    The applicability of the nuclear model code TNG to cross-section evaluations has been extended. The new TNG is capable of using variable bins for outgoing particle energies. Moreover, three additional quantities can now be calculated: capture gamma-ray spectrum, the precompound mode of the (n,γ) reaction, and fission cross section. In this report, the new features of the code are described together with some sample calculations and a brief explanation of the input data. 15 refs., 6 figs., 2 tabs

  8. Automated uncertainty analysis methods in the FRAP computer codes

    International Nuclear Information System (INIS)

    Peck, S.O.

    1980-01-01

    A user oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs of the codes as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts.
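
    Schematically, this input-to-output mapping is the usual Monte Carlo propagation of uncertainty. The sketch below uses a made-up linear stand-in for the fuel-rod model, not FRAP itself, and the input distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(power, gap_conductance):
    """Stand-in for a fuel-rod response calculation (not FRAP itself)."""
    return 600.0 + 12.0 * power - 3.0 * gap_conductance

# Known input uncertainties -> sampled inputs -> output uncertainty estimate.
power = rng.normal(20.0, 1.5, 10_000)        # linear power, assumed distribution
gap   = rng.normal(5.0, 0.8, 10_000)         # gap conductance, assumed distribution
temps = toy_model(power, gap)
print(f"output: mean={temps.mean():.1f}, std={temps.std():.1f}")
```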

  9. A Semantic Analysis Method for Scientific and Engineering Code

    Science.gov (United States)

    Stewart, Mark E. M.

    1998-01-01

    This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.

  10. Two-dimensional full-wave code for reflectometry simulations in TJ-II

    International Nuclear Information System (INIS)

    Blanco, E.; Heuraux, S.; Estrada, T.; Sanchez, J.; Cupido, L.

    2004-01-01

    A two-dimensional full-wave code in the extraordinary mode has been developed to simulate reflectometry in TJ-II. The code allows us to study the measurement capabilities of the future correlation reflectometer that is being installed in TJ-II. The code uses the finite-difference-time-domain technique to solve Maxwell's equations in the presence of density fluctuations. Boundary conditions are implemented by a perfectly matched layer to simulate free propagation. To assure the stability of the code, the current equations are solved by a fourth-order Runge-Kutta method. Density fluctuation parameters such as fluctuation level, wave numbers, and correlation lengths are extrapolated from those measured at the plasma edge using Langmuir probes. In addition, realistic plasma shape, density profile, magnetic configuration, and experimental setup of TJ-II are included to determine the plasma regimes in which accurate information may be obtained

  11. A restructuring of the MELCOR fission product packages for the MIDAS computer code

    International Nuclear Information System (INIS)

    Park, S.H.; Kim, K.R.; Kim, D.H.

    2004-01-01

    The RN1/RN2 packages, which are the fission product-related packages in MELCOR, have been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. To do this, the data-transfer methods of the current MELCOR code have been modified and adapted for the RN1/RN2 package. The FORTRAN77 data structure of the current MELCOR code makes the meaning of the variables hard to grasp and wastes memory. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to efficient memory treatment and an easy understanding of the code. The restructuring of the RN1/RN2 package addressed in this paper includes module development and subroutine modification, covering both MELGEN, which generates the data file, and MELCOR, which performs the calculation. The verification has been done by comparing the results of the modified code with those of the existing code. As the trends are similar to each other, the same approach could be extended to the entire code package. It is expected that the code restructuring will accelerate the code domestication thanks to a direct understanding of each variable and an easy implementation of modified or newly developed models. (author)

  12. Length quantization of DNA partially expelled from heads of a bacteriophage T3 mutant

    Energy Technology Data Exchange (ETDEWEB)

    Serwer, Philip, E-mail: serwer@uthscsa.edu [Department of Biochemistry, The University of Texas Health Science Center, 7703 Floyd Curl Drive, San Antonio, TX 78229-3900 (United States); Wright, Elena T. [Department of Biochemistry, The University of Texas Health Science Center, 7703 Floyd Curl Drive, San Antonio, TX 78229-3900 (United States); Liu, Zheng; Jiang, Wen [Markey Center for Structural Biology, Department of Biological Sciences, Purdue University, West Lafayette, IN 47907 (United States)

    2014-05-15

    DNA packaging of phages phi29, T3 and T7 sometimes produces incompletely packaged DNA with quantized lengths, based on gel electrophoretic band formation. We discover here a packaging ATPase-free, in vitro model for packaged DNA length quantization. We use directed evolution to isolate a five-site T3 point mutant that hyper-produces tail-free capsids with mature DNA (heads). Three tail gene mutations, but no head gene mutations, are present. A variable-length DNA segment leaks from some mutant heads, based on DNase I-protection assay and electron microscopy. The protected DNA segment has quantized lengths, based on restriction endonuclease analysis: six sharp bands of DNA missing 3.7–12.3% of the last end packaged. Native gel electrophoresis confirms quantized DNA expulsion and, after removal of external DNA, provides evidence that capsid radius is the quantization-ruler. Capsid-based DNA length quantization possibly evolved via selection for stalling that provides time for feedback control during DNA packaging and injection. Highlights: • We implement directed evolution- and DNA-sequencing-based phage assembly genetics. • We purify stable, mutant phage heads with a partially leaked mature DNA molecule. • Native gels and DNase-protection show leaked DNA segments to have quantized lengths. • Native gels after DNase I-removal of leaked DNA reveal the capsids to vary in radius. • Thus, we hypothesize leaked DNA quantization via variably quantized capsid radius.

  13. Cooperative optimization and their application in LDPC codes

    Science.gov (United States)

    Chen, Ke; Rong, Jian; Zhong, Xiaochun

    2008-10-01

    Cooperative optimization is a new way of finding global optima of complicated functions of many variables. The proposed algorithm belongs to the class of message-passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm achieves a 2.0 dB gain over the sum-product algorithm at a BER of 4×10⁻⁷. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it achieves a much lower error floor once Eb/No exceeds 1.8 dB.

  14. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
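
    The underlying relation is standard: on a binary symmetric channel with bit-error rate p, an error pattern goes undetected exactly when it coincides with a nonzero codeword, so the probability of undetected error follows directly from the weight distribution. The sketch below uses the (7,4) Hamming code's weight enumerator as a toy stand-in for the IEEE 802.3 shortened Hamming codes.

```python
def p_undetected(n, weights, p):
    """Probability of undetected error on a BSC with bit-error rate p:
    P_ud = sum_i A_i * p^i * (1-p)^(n-i), summed over the weight
    distribution {weight i: count A_i} of the nonzero codewords."""
    return sum(a * p**w * (1 - p)**(n - w) for w, a in weights.items())

# Toy stand-in: weight enumerator A_3=7, A_4=7, A_7=1 of the (7,4) Hamming code.
hamming74 = {3: 7, 4: 7, 7: 1}
for p in (1e-5, 1e-3, 0.5):
    print(f"p={p:g}: P_ud = {p_undetected(7, hamming74, p):.3e}")
```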

  15. Perceptual scale expansion: an efficient angular coding strategy for locomotor space.

    Science.gov (United States)

    Durgin, Frank H; Li, Zhi

    2011-08-01

    Whereas most sensory information is coded on a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for the angular variables important to precise motor control. In four experiments, we show that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and nonverbal measures (Experiments 1 and 2), as well as in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching, while allowing for accurate spatial action to be understood as the result of calibration.

  16. Review of the margins for ASME code fatigue design curve - effects of surface roughness and material variability

    International Nuclear Information System (INIS)

    Chopra, O. K.; Shack, W. J.

    2003-01-01

    The ASME Boiler and Pressure Vessel Code provides rules for the construction of nuclear power plant components. The Code specifies fatigue design curves for structural materials. However, the effects of light water reactor (LWR) coolant environments are not explicitly addressed by the Code design curves. Existing fatigue strain-vs.-life (ε-N) data illustrate potentially significant effects of LWR coolant environments on the fatigue resistance of pressure vessel and piping steels. This report provides an overview of the existing fatigue ε-N data for carbon and low-alloy steels and wrought and cast austenitic SSs to define the effects of key material, loading, and environmental parameters on the fatigue lives of the steels. Experimental data are presented on the effects of surface roughness on the fatigue life of these steels in air and LWR environments. Statistical models are presented for estimating the fatigue ε-N curves as a function of the material, loading, and environmental parameters. Two methods for incorporating environmental effects into the ASME Code fatigue evaluations are discussed. Data available in the literature have been reviewed to evaluate the conservatism in the existing ASME Code fatigue evaluations. A critical review of the margins for ASME Code fatigue design curves is presented.

  17. Review of the margins for ASME code fatigue design curve - effects of surface roughness and material variability.

    Energy Technology Data Exchange (ETDEWEB)

    Chopra, O. K.; Shack, W. J.; Energy Technology

    2003-10-03

    The ASME Boiler and Pressure Vessel Code provides rules for the construction of nuclear power plant components. The Code specifies fatigue design curves for structural materials. However, the effects of light water reactor (LWR) coolant environments are not explicitly addressed by the Code design curves. Existing fatigue strain-vs.-life (ε-N) data illustrate potentially significant effects of LWR coolant environments on the fatigue resistance of pressure vessel and piping steels. This report provides an overview of the existing fatigue ε-N data for carbon and low-alloy steels and wrought and cast austenitic SSs to define the effects of key material, loading, and environmental parameters on the fatigue lives of the steels. Experimental data are presented on the effects of surface roughness on the fatigue life of these steels in air and LWR environments. Statistical models are presented for estimating the fatigue ε-N curves as a function of the material, loading, and environmental parameters. Two methods for incorporating environmental effects into the ASME Code fatigue evaluations are discussed. Data available in the literature have been reviewed to evaluate the conservatism in the existing ASME Code fatigue evaluations. A critical review of the margins for ASME Code fatigue design curves is presented.

  18. Influence of recording length on reporting status

    DEFF Research Database (Denmark)

    Biltoft-Jensen, Anja Pia; Matthiessen, Jeppe; Fagt, Sisse

    2009-01-01

    To investigate the impact of recording length on reporting status, expressed as the ratio between energy intake and calculated basal metabolic rate (EI/BMR), the percentage of consumers of selected food items and the number of reported food items per meal and eating occasions per day. Methods: Data from two… in a validation study and the Danish National Survey of Dietary Habits and Physical Activity 2000-2002, respectively. Both studies had a cross-sectional design. Volunteers and participants completed a pre-coded food diary every day for 7 consecutive days. BMR was predicted from equations. Results…: In the validation study, EI/BMR was significantly lower on 1st, 2nd and 3rd consecutive recording days compared to 4-7 recording days (P < …) … food items…

  19. Design of Multiple Trellis-Coded Multi-h CPM Based on Super Trellis

    Directory of Open Access Journals (Sweden)

    X. Liu, A. Liu

    2012-12-01

    Full Text Available It has been shown that multiple trellis codes can perform better than conventional trellis codes over AWGN channels, at the cost of additional computations per trellis branch. Multiple trellis-coded multi-h CPM schemes have been shown in the literature to have attractive power-bandwidth performance at the expense of increased receiver complexity. In this method, the multi-h format is associated with a specific pattern and repeated, rather than cyclically changed in time for successive symbol intervals, resulting in a longer effective error-event length and better performance. It is well known that rate (n-1)/n multiple trellis codes combined with 2^n-level CPM have good power-bandwidth performance. In this paper, a scheme combining rate 1/2 and 2/3 multiple trellis codes with 4- and 8-level multi-h CPM is shown, via the upper bound, to have better power-bandwidth performance than the single-h scheme.

  20. Finite Length Analysis of Irregular Repetition Slotted ALOHA in the Waterfall Region

    OpenAIRE

    Amat, Alexandre Graell i; Liva, Gianluigi

    2018-01-01

    A finite length analysis is introduced for irregular repetition slotted ALOHA (IRSA) that enables accurate estimation of its performance in the moderate-to-high packet loss probability regime, i.e., in the so-called waterfall region. The analysis is tailored to the collision channel model, which enables mapping the description of the successive interference cancellation process onto the iterative erasure decoding of low-density parity-check codes. The analysis provides accurate estimates of t...

  1. A Low-Jitter Wireless Transmission Based on Buffer Management in Coding-Aware Routing

    Directory of Open Access Journals (Sweden)

    Cunbo Lu

    2015-08-01

    Full Text Available It is significant to reduce packet jitter for real-time applications in a wireless network. Existing coding-aware routing algorithms use the opportunistic network coding (ONC) scheme in the packet coding algorithm. The ONC scheme never delays packets to wait for the arrival of a future coding opportunity. The loss of some potential coding opportunities may degrade the contribution of network coding to jitter performance. In addition, most of the existing coding-aware routing algorithms assume that all flows participating in the network have equal rates, which is unrealistic, since multi-rate environments often appear. To overcome the above problems and extend coding-aware routing to multi-rate scenarios, from the view of data transmission, we present a low-jitter wireless transmission algorithm based on buffer management (BLJCAR), which decides how packets are handled at a coding node according to a queue-length-based threshold policy instead of the regular ONC policy used in existing coding-aware routing algorithms. BLJCAR is a unified framework that merges the single-rate and multi-rate cases. Simulation results show that the BLJCAR algorithm embedded in coding-aware routing outperforms the traditional ONC policy in terms of jitter, packet delivery delay, packet loss ratio and network throughput under network congestion at any traffic rate.
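
    A schematic of the queue-length-based decision, contrasted with the pure ONC rule (send immediately, never wait). The threshold value and the class structure are hypothetical and stand in for BLJCAR's full buffer-management logic, which the abstract does not specify.

```python
from collections import deque

THRESHOLD = 8   # illustrative queue-length threshold

class CodingNode:
    """Hold a packet briefly to wait for a coding partner only while the
    output queue is short; under congestion, fall back to sending it
    natively, as the pure ONC rule always does."""
    def __init__(self):
        self.queue = deque()

    def handle(self, packet, partner=None):
        if partner is not None:
            self.queue.append(("coded", packet, partner))  # XOR and forward
        elif len(self.queue) < THRESHOLD:
            self.queue.append(("held", packet))            # wait for a partner
        else:
            self.queue.append(("native", packet))          # send immediately

node = CodingNode()
for seq in range(12):
    node.handle(f"pkt-{seq}")
print([entry[0] for entry in node.queue])  # 'held' until the threshold trips
```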

  2. Midupper arm circumference and weight-for-length z scores have different associations with body composition

    DEFF Research Database (Denmark)

    Grijalva-Eternod, Carlos S; Wells, Jonathan Ck; Girma, Tsinuel

    2015-01-01

    …understood. OBJECTIVE: We investigated the association between these 2 anthropometric indexes and body composition to help understand why they identify different children as wasted. DESIGN: We analyzed weight, length, MUAC, fat-mass (FM), and fat-free mass (FFM) data from 2470 measurements from 595 healthy Ethiopian infants obtained at birth and at 1.5, 2.5, 3.5, 4.5, and 6 mo of age. We derived WLZs by using 2006 WHO growth standards. We derived length-adjusted FM and FFM values as unexplained residuals after regressing each FM and FFM against length. We used a correlation analysis to assess associations between length, FFM, and FM (adjusted and nonadjusted for length) and the MUAC and WLZ and a multivariable regression analysis to assess the independent variability of length and length-adjusted FM and FFM with either the MUAC or the WLZ as the outcome. RESULTS: At all ages, length showed consistently…

  3. KUGEL: a thermal, hydraulic, fuel performance, and gaseous fission product release code for pebble bed reactor core analysis

    International Nuclear Information System (INIS)

    Shamasundar, B.I.; Fehrenbach, M.E.

    1981-05-01

    The KUGEL computer code is designed to perform thermal/hydraulic analysis and coated-fuel particle performance calculations for axisymmetric pebble bed reactor (PBR) cores. This computer code was developed as part of a Department of Energy (DOE)-funded study designed to verify the published core performance data on PBRs. The KUGEL code is designed to interface directly with the 2DB code, a two-dimensional neutron diffusion code, to obtain distributions of thermal power, fission rate, fuel burnup, and fast neutron fluence, which are needed for thermal/hydraulic and fuel performance calculations. The code is variably dimensioned so that problem size can be easily varied. An interpolation routine allows variable mesh size to be used between the 2DB output and the two-dimensional thermal/hydraulic calculations

  4. Coherent communication with continuous quantum variables

    Science.gov (United States)

    Wilde, Mark M.; Krovi, Hari; Brun, Todd A.

    2007-06-01

    The coherent bit (cobit) channel is a resource intermediate between classical and quantum communication. It produces coherent versions of teleportation and superdense coding. We extend the cobit channel to continuous variables by providing a definition of the coherent nat (conat) channel. We construct several coherent protocols that use both a position-quadrature and a momentum-quadrature conat channel with finite squeezing. Finally, we show that the quality of squeezing diminishes through successive compositions of coherent teleportation and superdense coding.

  5. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    Science.gov (United States)

    Massey, J. L.

    1976-01-01

    Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

  6. 26 CFR 1.801-7 - Variable annuities.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Variable annuities. 1.801-7 Section 1.801-7...) INCOME TAXES Life Insurance Companies § 1.801-7 Variable annuities. (a) In general. (1) Section 801(g)(1) provides that for purposes of part I, subchapter L, chapter 1 of the Code, an annuity contract includes a...

  7. Spallation integral experiment analysis by high energy nucleon-meson transport code

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Hiroshi; Meigo, Shin-ichiro; Sasa, Toshinobu; Fukahori, Tokio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Yoshizawa, Nobuaki; Furihata, Shiori; Belyakov-Bodin, V.I.; Krupny, G.I.; Titarenko, Y.E.

    1997-03-01

    Reaction rate distributions were measured with various activation detectors on the cylindrical surface of a thick tungsten target, 20 cm in diameter and 60 cm in length, bombarded with 0.895 and 1.21 GeV protons. The experimental results were analyzed with the Monte Carlo simulation code systems NMTC/JAERI-MCNP-4A, LAHET and HERMES. It is confirmed that those code systems can represent the reaction rate distributions with a C/E ratio of 0.6 to 1.4 at positions up to 30 cm from the beam incident surface. (author)

  8. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    Science.gov (United States)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite-field multiplicative group, which offers simpler construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves better error-correction performance over an additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10⁻⁶, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. So it is more suitable for optical communication systems.
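
    The quasi-cyclic structure itself is easy to make concrete: a small exponent (shift) matrix is expanded into the binary parity-check matrix by replacing each entry with a cyclically shifted identity block. The shift values below are arbitrary placeholders, not the ones derived from the finite-field multiplicative group in the paper.

```python
import numpy as np

def expand(shifts, z):
    """Expand a QC-LDPC exponent matrix into a binary parity-check matrix.
    Entry s >= 0 becomes the z x z identity cyclically shifted by s columns;
    entry -1 becomes the z x z all-zero block."""
    I = np.eye(z, dtype=int)
    zero = np.zeros((z, z), dtype=int)
    rows = [np.hstack([zero if s < 0 else np.roll(I, s, axis=1) for s in r])
            for r in shifts]
    return np.vstack(rows)

# Illustrative 2 x 4 exponent matrix with lifting factor z = 5.
H = expand([[0, 1, 2, -1],
            [3, -1, 4, 0]], z=5)
print(H.shape)   # (10, 20): a length-20 quasi-cyclic code of design rate 1/2
```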

  9. Stochastic geometry in PRIZMA code

    International Nuclear Information System (INIS)

    Malyshkin, G. N.; Kashaeva, E. A.; Mukhamadiev, R. F.

    2007-01-01

    The paper describes a method used to simulate radiation transport through random media - randomly placed grains in a matrix material. The method models the medium consequently from one grain crossed by particle trajectory to another. Like in the Limited Chord Length Sampling (LCLS) method, particles in grains are tracked in the actual grain geometry, but unlike LCLS, the medium is modeled using only Matrix Chord Length Sampling (MCLS) from the exponential distribution and it is not necessary to know the grain chord length distribution. This helped us extend the method to media with randomly oriented arbitrarily shaped convex grains. Other extensions include multicomponent media - grains of several sorts, and polydisperse media - grains of different sizes. Sort and size distributions of crossed grains were obtained and an algorithm was developed for sampling grain orientations and positions. Special consideration was given to medium modeling at the boundary of the stochastic region. The method was implemented in the universal 3D Monte Carlo code PRIZMA. The paper provides calculated results for a model problem where we determine volume fractions of modeled components crossed by particle trajectories. It also demonstrates the use of biased sampling techniques implemented in PRIZMA for solving a problem of deep penetration in model random media. Described are calculations for the spectral response of a capacitor dose detector whose anode was modeled with account for its stochastic structure. (authors)
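
    The MCLS step itself reduces to drawing an exponentially distributed path length in the matrix before the next grain is encountered; a minimal sketch, with the effective rate parameter (which in practice depends on grain density and size) set to an arbitrary value:

```python
import math
import random

def matrix_chord(sigma_eff):
    """Matrix Chord Length Sampling: the distance travelled in the matrix
    before entering the next grain is exponentially distributed, so the
    grain chord-length distribution itself is never required."""
    # 1 - random() lies in (0, 1], avoiding log(0).
    return -math.log(1.0 - random.random()) / sigma_eff

# Illustrative effective rate of 0.5 cm^-1 -> mean distance 2 cm.
samples = [matrix_chord(0.5) for _ in range(100_000)]
print(sum(samples) / len(samples))   # ≈ 2.0
```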

  10. Sequence determinants of human microsatellite variability

    Directory of Open Access Journals (Sweden)

    Jakobsson Mattias

    2009-12-01

    Full Text Available Abstract Background Microsatellite loci are frequently used in genomic studies of DNA sequence repeats and in population studies of genetic variability. To investigate the effect of sequence properties of microsatellites on their level of variability we have analyzed genotypes at 627 microsatellite loci in 1,048 worldwide individuals from the HGDP-CEPH cell line panel together with the DNA sequences of these microsatellites in the human RefSeq database. Results Calibrating PCR fragment lengths in individual genotypes by using the RefSeq sequence enabled us to infer repeat number in the HGDP-CEPH dataset and to calculate the mean number of repeats (as opposed to the mean PCR fragment length), under the assumption that differences in PCR fragment length reflect differences in the numbers of repeats in the embedded repeat sequences. We find the mean and maximum numbers of repeats across individuals to be positively correlated with heterozygosity. The size and composition of the repeat unit of a microsatellite are also important factors in predicting heterozygosity, with tetra-nucleotide repeat units high in G/C content leading to higher heterozygosity. Finally, we find that microsatellites containing more separate sets of repeated motifs generally have higher heterozygosity. Conclusions These results suggest that sequence properties of microsatellites have a significant impact in determining the features of human microsatellite variability.

  11. Using QR codes to enable quick access to information in acute cancer care.

    Science.gov (United States)

    Upton, Joanne; Olsson-Brown, Anna; Marshall, Ernie; Sacco, Joseph

    2017-05-25

    Quick access to toxicity management information ensures timely access to steroids/immunosuppressive treatment for cancer patients experiencing immune-related adverse events, thus reducing length of hospital stays or avoiding hospital admission entirely. This article discusses a project to add a QR (quick response) code to a patient-held immunotherapy alert card. As QR code generation is free and the immunotherapy clinical management algorithms were already publicly available through the trust's clinical network website, the costs of integrating a QR code into the alert card, after printing, were low, while the potential benefits are numerous. Patient-held alert cards are widely used for patients receiving anti-cancer treatment, and this established standard of care has been modified to enable rapid access of information through the incorporation of a QR code.
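
    Generating such a code is a one-liner with any QR library; as a hypothetical example using the third-party Python `qrcode` package (the URL is a placeholder, not the trust's actual address):

```python
# pip install qrcode[pil]
import qrcode

# Placeholder URL standing in for the publicly available management algorithms.
url = "https://example.org/immunotherapy-toxicity-algorithms"
img = qrcode.make(url)                      # encode the link as a QR symbol
img.save("immunotherapy_alert_card_qr.png")
```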

  12. A novel construction method of QC-LDPC codes based on CRT for optical communications

    Science.gov (United States)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10⁻⁷, the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on the CRT, and the irregular QC-LDPC(3 843, 3 603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed construction method has excellent error-correction performance and is well suited to optical transmission systems.

  13. Applications guide to the MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1985-08-01

    A practical guide for the implementation of the MORSE-CG Monte Carlo radiation transport computer code system is presented. The various versions of the MORSE code are compared and contrasted, and the many references dealing explicitly with the MORSE-CG code are reviewed. The treatment of angular scattering is discussed, and procedures for obtaining increased differentiality of results in terms of reaction types and nuclides from a multigroup Monte Carlo code are explained in terms of cross-section and geometry data manipulation. Examples of standard cross-section data input and output are shown. Many other features of the code system are also reviewed, including (1) the concept of primary and secondary particles, (2) fission neutron generation, (3) albedo data capability, (4) DOMINO coupling, (5) history file use for post-processing of results, (6) adjoint mode operation, (7) variance reduction, and (8) input/output. In addition, examples of the combinatorial geometry are given, and the new array of arrays geometry feature (MARS) and its three-dimensional plotting code (JUNEBUG) are presented. Realistic examples of user routines for source, estimation, path-length stretching, and cross-section data manipulation are given. A detailed explanation of the coupling between the random walk and estimation procedure is given in terms of both code parameters and physical analogies. The operation of the code in the adjoint mode is covered extensively. The basic concepts of adjoint theory and dimensionality are discussed and examples of adjoint source and estimator user routines are given for all common situations. Adjoint source normalization is explained, a few sample problems are given, and the concept of obtaining forward differential results from adjoint calculations is covered. Finally, the documentation of the standard MORSE-CG sample problem package is reviewed and ongoing and future work is discussed.

  14. Distinct timescales of population coding across cortex.

    Science.gov (United States)

    Runyan, Caroline A; Piasini, Eugenio; Panzeri, Stefano; Harvey, Christopher D

    2017-08-03

    …and that coupling is a variable property of cortical populations that affects the timescale of information coding and the accuracy of behaviour.

  15. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane (CSS) construction of stabilizer codes from linear codes containing their dual codes.

  16. Deep Learning Methods for Improved Decoding of Linear Codes

    Science.gov (United States)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
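
    The decoder modification described here can be sketched compactly. Below is a minimal NumPy illustration of weighted ("neural") min-sum on a toy parity-check matrix: one weight per check-to-variable message is the quantity the paper learns by gradient descent; here the weights are fixed at 1, which recovers plain min-sum. The matrix and LLR values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of weighted min-sum decoding on a toy Tanner graph.
# Per-edge weights w multiply the check-to-variable messages; training
# them is what the "neural" decoder adds (here they stay at 1).
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],      # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],      # rows = checks, cols = bits
              [1, 0, 0, 0, 1, 1]])

def weighted_min_sum(llr, H, n_iter=5, w=None):
    m, n = H.shape
    w = np.ones((m, n)) if w is None else w      # learnable in the paper
    V = np.tile(llr, (m, 1)) * H                 # variable-to-check messages
    C = np.zeros((m, n))                         # check-to-variable messages
    for _ in range(n_iter):
        for i, j in zip(*np.nonzero(H)):         # min-sum check update
            others = [k for k in np.nonzero(H[i])[0] if k != j]
            C[i, j] = np.prod(np.sign(V[i, others])) * np.min(np.abs(V[i, others]))
        C = C * w                                # per-edge weighting
        for i, j in zip(*np.nonzero(H)):         # variable update
            rows = [r for r in np.nonzero(H[:, j])[0] if r != i]
            V[i, j] = llr[j] + C[rows, j].sum()
    post = llr + np.array([C[np.nonzero(H[:, j])[0], j].sum() for j in range(n)])
    return (post < 0).astype(int)                # hard decision

llr = np.array([2.1, -0.9, 1.3, 0.4, 1.7, -1.2])  # channel LLRs (illustrative)
print(weighted_min_sum(llr, H))
```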

  17. Simulation realization of 2-D wavelength/time system utilizing MDW code for OCDMA system

    Science.gov (United States)

    Azura, M. S. A.; Rashidi, C. B. M.; Aljunid, S. A.; Endut, R.; Ali, N.

    2017-11-01

    This paper presents a realization of a Wavelength/Time (W/T) Two-Dimensional Modified Double Weight (2-D MDW) code for an Optical Code Division Multiple Access (OCDMA) system based on the Spectral Amplitude Coding (SAC) approach. The MDW code has the capability to suppress Phase-Induced Intensity Noise (PIIN) and to minimize Multiple Access Interference (MAI) noise. At the permissible BER of 10^-9, the 2-D MDW system with an APD receiver achieved a minimum effective received power (Psr) of -71 dBm at the receiver side, compared with only -61 dBm for the PIN receiver. The results show that 2-D MDW (APD) performs better, achieving the same BER over a longer optical fiber length and with less received power (Psr). The BER results also confirm the capability of the MDW code to suppress PIIN and MAI.
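
    The property that lets SAC codes such as MDW suppress PIIN and MAI is their fixed in-phase cross-correlation of at most one. A minimal numeric check of that property is sketched below; the 2-user, weight-2 matrix is a placeholder in the double-weight family, not the exact 2-D MDW code of this paper.

```python
# Sketch: checking the in-phase cross-correlation property that underlies
# PIIN/MAI suppression in SAC-OCDMA codes. The matrix is a weight-2
# double-weight-style placeholder, not the paper's 2-D MDW code.
import numpy as np
from itertools import combinations

C = np.array([[0, 1, 1],
              [1, 1, 0]])            # rows = users, cols = spectral chips

for u, v in combinations(range(len(C)), 2):
    xc = int(np.dot(C[u], C[v]))     # in-phase cross-correlation
    print(f"users {u},{v}: cross-correlation = {xc}")  # ideally <= 1
```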

  19. The length-weight and length-length relationships of bluefish, Pomatomus saltatrix (Linnaeus, 1766) from Samsun, middle Black Sea region

    Directory of Open Access Journals (Sweden)

    Melek Özpiçak

    2017-10-01

    Full Text Available In this study, the length-weight relationship (LWR) and length-length relationship (LLR) of bluefish, Pomatomus saltatrix, were determined. A total of 125 specimens were sampled from Samsun, the middle Black Sea, in the 2014 fishing season. Bluefish specimens were collected monthly from commercial fishing boats from October to December 2014. All captured individuals (N=125) were measured to the nearest 0.1 cm for total, fork and standard lengths. The weight of each fish (W) was recorded to the nearest 0.01 g. According to the results of the analyses, there were no statistically significant differences between sexes in terms of length and weight (P>0.05). The total, fork and standard lengths of bluefish ranged between 13.5-23.6 cm, 12.50-21.80 cm and 10.60-20.10 cm, respectively. The length-weight relationship was calculated as W=0.008TL^3.12 (r2>0.962). Positive allometric growth was observed for bluefish (b>3). The length-length relationships were also highly significant (P<0.001), with coefficients of determination (r2) ranging from 0.916 to 0.988.
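
    The reported relationship W = 0.008TL^3.12 is the standard power-law fit obtained by log-log least squares. A minimal sketch of that fit, using made-up measurements that roughly follow the reported curve rather than the study's data:

```python
# Sketch: fitting the length-weight relationship W = a * TL**b by ordinary
# log-log least squares. The measurements below are synthetic stand-ins.
import numpy as np

TL = np.array([14.2, 16.8, 18.1, 20.4, 22.9])   # total length, cm (synthetic)
W  = np.array([31.0, 52.0, 66.0, 97.0, 140.0])  # weight, g (synthetic)

b, log_a = np.polyfit(np.log(TL), np.log(W), 1)
a = np.exp(log_a)
print(f"W = {a:.4f} * TL^{b:.2f}")              # b near 3 => near-isometric growth
```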

  20. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one, which were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by this same program, and incorporation of this program into another data-processing program was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, this program can be used for automation of routine work in the department of radiology.

  1. Development of Ultrasonic Pulse Compression Using Golay Codes

    International Nuclear Information System (INIS)

    Kim, Young H.; Kim, Young Gil; Jeong, Peter

    1994-01-01

    Conventional ultrasonic flaw detection systems use a large-amplitude narrow pulse to excite a transducer. However, these systems are limited in pulse energy. An excessively large amplitude causes dielectric breakage of the transducer, and an excessively long pulse causes a decrease in resolution. Using pulse compression, a long pseudorandom signal can be used without sacrificing resolution, by means of signal correlation. In the present work, the pulse compression technique was implemented in an ultrasonic system. A Golay code was used as the pseudorandom signal, since the pair sum of its autocorrelations has no sidelobes. The equivalent input pulse of the Golay code was derived to analyze the pulse compression system. Throughout the experiments, the pulse compression technique demonstrated improved SNR (signal-to-noise ratio) by reducing the system's white noise. The experimental data also indicated that the SNR enhancement was proportional to the square root of the code length used. The technique seems to perform particularly well with highly energy-absorbent materials such as polymers, plastics and rubbers.
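
    The zero-sidelobe property exploited here is easy to verify numerically: for a Golay complementary pair, the sum of the two aperiodic autocorrelations is a delta function. A minimal sketch using the standard length-doubling recursion (the code length chosen is illustrative):

```python
# Sketch: Golay complementary pair and its zero-sidelobe property --
# the sum of the two aperiodic autocorrelations is a delta.
import numpy as np

def golay_pair(n_doublings):
    """Build a +/-1 Golay pair of length 2**n_doublings by recursion."""
    a, b = np.array([1]), np.array([1])
    for _ in range(n_doublings):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acorr(x):
    """Aperiodic autocorrelation at non-negative lags."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) for k in range(n)])

a, b = golay_pair(4)                 # length-16 pair
print(acorr(a) + acorr(b))           # [32, 0, 0, ..., 0] -- no sidelobes
```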

  2. SNR in ultrasonic pulse compression using Golay codes

    International Nuclear Information System (INIS)

    Kim, Young Hwan; Kim, Young Gil; Jeong, Peter

    1994-01-01

    The conventional ultrasonic flaw detection system uses a large-amplitude narrow pulse to excite a transducer; however, these systems are limited in average transmit power. An excessively large amplitude causes dielectric breakage of the transducer, and an excessively long pulse causes a decrease in resolution. Using pulse compression, a long pseudorandom signal can be used without sacrificing resolution, by means of signal correlation. In the present work, the pulse compression technique was applied to an ultrasonic system. A Golay code was used as the pseudorandom signal, since the pair sum of its autocorrelations has no sidelobes. The equivalent input pulse of the Golay code was proposed to analyze the pulse compression system. In the experiments, the material type, material thickness and code length were considered. As a result, the pulse compression system considerably reduced the system's white noise, and approximately 30 dB improvement in SNR was obtained over the conventional ultrasonic system. The technique seems to perform particularly well with highly energy-absorbent materials such as polymers, plastics and rubbers.

  3. A Novel Technique to Detect Code for SAC-OCDMA System

    Science.gov (United States)

    Bharti, Manisha; Kumar, Manoj; Sharma, Ajay K.

    2018-04-01

    The main task of an optical code division multiple access (OCDMA) system is the detection of the code used by a user in the presence of multiple access interference (MAI). In this paper, a new method of detection, known as XOR subtraction detection, for spectral amplitude coding OCDMA (SAC-OCDMA) based on double weight codes is proposed and presented. As MAI is the main source of performance deterioration in OCDMA systems, the SAC technique is used here to eliminate the effect of MAI to a large extent. A comparative analysis is then made between the proposed scheme and conventional detection schemes such as complementary subtraction detection, AND subtraction detection and NAND subtraction detection. The system performance is characterized by Q-factor, BER and received optical power (ROP) with respect to input laser power and fiber length. The theoretical and simulation investigations reveal that the proposed detection technique provides a better quality factor, security and received power in comparison to the conventional techniques. The wide opening of the eye diagram in the case of the proposed technique also demonstrates its robustness.
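
    As a toy illustration of the subtraction-detection principle this comparison rests on, the sketch below uses the AND variant named in the abstract (not the proposed XOR scheme): correlating the received chips with the AND of the desired and interfering codes isolates the overlapping chips, and subtracting that term cancels the MAI contribution. Codes and amplitudes are made-up placeholders.

```python
# Toy illustration of AND subtraction detection in SAC-OCDMA:
# subtracting the correlation over the overlapping (AND) chips
# cancels the interferer's contribution. Values are illustrative.
import numpy as np

x1 = np.array([1, 1, 0, 0])          # desired user's code (weight w = 2)
x2 = np.array([0, 1, 1, 0])          # interferer, one overlapping chip
a1, a2 = 1.0, 1.0                    # transmitted data amplitudes

r = a1 * x1 + a2 * x2                # received power per spectral chip
direct = np.dot(r, x1)               # = a1*w + a2*lambda  (MAI present)
anded  = np.dot(r, x1 & x2)          # = lambda*(a1 + a2)
decision = direct - anded            # = a1*(w - lambda)   (MAI cancelled)
print(direct, anded, decision)       # 3.0 2.0 1.0
```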

  4. Backprojection filtering for variable orbit fan-beam tomography

    International Nuclear Information System (INIS)

    Gullberg, G.T.; Zeng, G.L.

    1995-01-01

    Backprojection filtering algorithms are presented for three variable orbit fan-beam geometries. Expressions for the fan-beam projection and backprojection operators are given for a flat-detector fan-beam geometry with fixed focal length, with variable focal length, and with fixed focal length and off-center focusing. Backprojection operators are derived for each geometry using a transformation of coordinates to transform a parallel-geometry backprojector into a fan-beam backprojector for the appropriate geometry. The backprojection operator includes a factor which is a function of the coordinates of the projection ray and the coordinates of the pixel in the backprojected image. The backprojection filtering algorithm first backprojects the variable orbit fan-beam projection data, using the appropriately derived backprojector, to obtain a 1/r blurring of the original image; it then takes the two-dimensional (2D) Fast Fourier Transform (FFT) of the backprojected image, multiplies the transformed image by the 2D ramp filter function, and finally takes the inverse 2D FFT to obtain the reconstructed image. Computer simulations verify that backprojectors with appropriate weighting give artifact-free reconstructions of simulated line-integral projections. It is also shown that it is not necessary to assume a projection model of line integrals; the projector and backprojector can instead be defined to model the physics of the imaging detection process. A backprojector for variable orbit fan-beam tomography with fixed focal length is derived which includes an additional factor that is a function of the flux density along the flat detector. It is shown that the impulse response of the composite of the projection and backprojection operations is equal to 1/r.
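
    The order of operations described (backproject first, then apply the 2-D ramp filter in the Fourier domain) can be sketched as follows. For brevity the sketch assumes plain parallel-beam geometry with an unweighted backprojector rather than the paper's variable orbit fan-beam operators, and the normalization is approximate.

```python
# Sketch of backprojection filtering: unweighted backprojection produces a
# 1/r-blurred image, and a 2-D ramp filter |k| removes the blur.
# Parallel-beam geometry; normalization approximate.
import numpy as np
from numpy.fft import fft2, ifft2, fftfreq

def backproject_then_filter(sinogram, thetas, n):
    xs = np.arange(n) - n / 2
    X, Y = np.meshgrid(xs, xs)
    b = np.zeros((n, n))
    for p, th in zip(sinogram, thetas):          # backproject each view
        t = X * np.cos(th) + Y * np.sin(th)      # detector coordinate
        idx = np.clip(np.round(t + n / 2).astype(int), 0, n - 1)
        b += p[idx]
    KX, KY = np.meshgrid(fftfreq(n), fftfreq(n))
    ramp = np.sqrt(KX**2 + KY**2)                # 2-D ramp filter |k|
    return np.real(ifft2(fft2(b) * ramp)) * np.pi / len(thetas)

thetas = np.linspace(0, np.pi, 90, endpoint=False)
sino = np.zeros((90, 64)); sino[:, 32] = 1.0     # projections of a point source
img = backproject_then_filter(sino, thetas, 64)  # peak near the image center
```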

  5. Implementation of generalized quantum measurements: Superadditive quantum coding, accessible information extraction, and classical capacity limit

    International Nuclear Information System (INIS)

    Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun; Sasaki, Masahide

    2004-01-01

    Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding technique, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. Particularly, a design strategy of quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing the quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even in a small code length, can boost the communication performance of conventional coding techniques

  6. Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes

    Science.gov (United States)

    Hamilton, Kathleen; Pryadko, Leonid

    Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multivariate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various sizes can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can always be corrected below percolation on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-14-1-0272.

  7. Meiotic gene-conversion rate and tract length variation in the human genome.

    Science.gov (United States)

    Padhukasahasram, Badri; Rannala, Bruce

    2013-02-27

    Meiotic recombination occurs in the form of two different mechanisms called crossing-over and gene-conversion and both processes have an important role in shaping genetic variation in populations. Although variation in crossing-over rates has been studied extensively using sperm-typing experiments, pedigree studies and population genetic approaches, our knowledge of variation in gene-conversion parameters (i.e., rates and mean tract lengths) remains far from complete. To explore variability in population gene-conversion rates and its relationship to crossing-over rate variation patterns, we have developed and validated using coalescent simulations a comprehensive Bayesian full-likelihood method that can jointly infer crossing-over and gene-conversion rates as well as tract lengths from population genomic data under general variable rate models with recombination hotspots. Here, we apply this new method to SNP data from multiple human populations and attempt to characterize for the first time the fine-scale variation in gene-conversion parameters along the human genome. We find that the estimated ratio of gene-conversion to crossing-over rates varies considerably across genomic regions as well as between populations. However, there is a great degree of uncertainty associated with such estimates. We also find substantial evidence for variation in the mean conversion tract length. The estimated tract lengths did not show any negative relationship with the local heterozygosity levels in our analysis. European Journal of Human Genetics advance online publication, 27 February 2013; doi:10.1038/ejhg.2013.30.

  8. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  9. Describing the interannual variability of precipitation with the derived distribution approach: effects of record length and resolution

    Directory of Open Access Journals (Sweden)

    C. I. Meier

    2016-10-01

    Full Text Available Interannual variability of precipitation is traditionally described by fitting a probability model to yearly precipitation totals. There are three potential problems with this approach: a long record (at least 25–30 years) is required in order to fit the model, years with missing rainfall data cannot be used, and the data need to be homogeneous, i.e., one has to assume stationarity. To overcome some of these limitations, we test an alternative methodology proposed by Eagleson (1978), based on the derived distribution (DD) approach. It allows estimation of the probability density function (pdf) of annual rainfall without requiring long records, provided that continuously gauged precipitation data are available to derive external storm properties. The DD approach combines marginal pdfs for storm depths and inter-arrival times to obtain an analytical formulation of the distribution of annual precipitation, under the simplifying assumptions of independence between events and independence between storm depth and time to the next storm. Because it is based on information about storms and not on annual totals, the DD can make use of information from years with incomplete data; more importantly, only a few years of rainfall measurements should suffice to estimate the parameters of the marginal pdfs, at least at locations where it rains with some regularity. For two temperate locations in different climates (Concepción, Chile, and Lugano, Switzerland), we randomly resample shortened time series to evaluate in detail the effects of record length on the DD, comparing the results with the traditional approach of fitting a normal (or lognormal) distribution. Then, at the same two stations, we assess the biases introduced in the DD when using daily totalized rainfall instead of continuously gauged data. Finally, for randomly selected periods between 3 and 15 years in length, we conduct full blind tests at 52 high-quality gauging stations in Switzerland.
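
    Under the stated independence assumptions, the derived distribution can also be approximated by direct simulation from the two marginals, which makes the idea easy to see. A minimal sketch, assuming exponential marginals and illustrative parameter values (the DD approach itself derives the annual pdf analytically rather than by Monte Carlo):

```python
# Sketch of the derived-distribution idea: storm depths and inter-arrival
# times drawn from fitted marginals, summed over a year, approximate the
# annual-total distribution. Exponential marginals and parameters assumed.
import numpy as np

rng = np.random.default_rng(1)
mean_depth_mm, mean_gap_d = 12.0, 4.0    # fitted from a few years of storm data

def annual_total():
    t, total = 0.0, 0.0
    while True:
        t += rng.exponential(mean_gap_d)          # time to next storm (days)
        if t > 365.0:
            return total
        total += rng.exponential(mean_depth_mm)   # storm depth (mm)

totals = np.array([annual_total() for _ in range(10_000)])
print(totals.mean(), totals.std())       # moments of the derived annual pdf
```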

  10. Estimating variability in placido-based topographic systems.

    Science.gov (United States)

    Kounis, George A; Tsilimbaris, Miltiadis K; Kymionis, George D; Ginis, Harilaos S; Pallikaris, Ioannis G

    2007-10-01

    To describe a new software tool for the detailed presentation of corneal topography measurement variability by means of color-coded maps. Software was developed in Visual Basic to analyze and process a series of 10 consecutive measurements obtained by a topographic system on calibration spheres and on individuals with emmetropic, low, high, and irregular astigmatic corneas. The corneal surface was segmented into 1200 segments, and the coefficient of variation of each segment's keratometric dioptric power was used as the measure of variability. The results were presented graphically in color-coded maps (variability maps). Two topographic systems, the TechnoMed C-Scan and the TOMEY Topographic Modeling System (TMS-2N), were examined to demonstrate our method. Graphic representation of the coefficient of variation offered a detailed representation of examination variability both on calibration surfaces and on human corneas. It was easy to recognize an increase in variability as the irregularity of the examination surface increased. In individuals with high and irregular astigmatism, the variability pattern correlated with the pattern of corneal topography: steeper corneal areas possessed higher variability values than flatter areas of the same cornea. Numerical data permitted direct comparisons and statistical analysis. We propose a method that permits a detailed evaluation of the variability of corneal topography measurements. Representing the results both graphically and quantitatively improves interpretability and facilitates spatial correlation of variability maps with the original topography maps. Given the popularity of topography-based custom refractive ablations of the cornea, variability maps may assist clinicians in the evaluation of corneal topography maps of patients with very irregular corneas before custom ablation procedures.
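
    The map computation itself is straightforward. A minimal sketch with synthetic data: 10 repeated measurements over roughly 1200 segments, a per-segment coefficient of variation, displayed as a color-coded map (the segment layout and dioptric values below are assumptions, not the tool's internals).

```python
# Sketch of a variability map: coefficient of variation of keratometric
# power per corneal segment across repeated measurements. Synthetic data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_meas, n_rings, n_spokes = 10, 30, 40           # 30*40 = 1200 segments
power = rng.normal(43.0, 0.3, (n_meas, n_rings, n_spokes))  # synthetic diopters

cv = power.std(axis=0) / power.mean(axis=0) * 100    # % CV per segment
plt.imshow(cv, cmap="jet"); plt.colorbar(label="CV of power (%)")
plt.title("Variability map (synthetic)"); plt.show()
```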

  11. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…

  12. A new 3-D integral code for computation of accelerator magnets

    International Nuclear Information System (INIS)

    Turner, L.R.; Kettunen, L.

    1991-01-01

    For computing accelerator magnets, integral codes have several advantages over finite element codes; far-field boundaries are treated automatically, and the computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice it can realize much of the potential CPU time saving that parallel processing makes possible. 8 refs., 4 figs., 1 tab

  13. Optimization of path length stretching in Monte Carlo calculations for non-leakage problems

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E. [Delft Univ. of Technology (Netherlands)

    2005-07-01

    Path length stretching (or exponential biasing) is a well-known variance reduction technique in Monte Carlo calculations. It can be especially useful in shielding problems where particles have to penetrate a lot of material before being tallied. Several authors have sought optimum path length stretching parameters for detection of the leakage of neutrons from a slab. There the adjoint function behaves as a single exponential function and can readily be used to determine the stretching parameter. In this paper optimization is sought for a detector embedded in the system, which changes the adjoint function in the detector drastically. From the literature it is known that the combination of path length stretching and angular biasing can result in appreciable variance reduction. However, angular biasing is not generally available in general-purpose Monte Carlo codes, and therefore we restrict ourselves to the application of pure path length stretching and finding optimum parameters for it. Nonetheless, the starting point for our research is the zero-variance scheme. In order to study the solution in detail, the simplified monoenergetic two-direction model is adopted, which allows analytical solutions and can still be used in a Monte Carlo simulation. Knowing the zero-variance solution analytically, it is shown how optimum path length stretching parameters can be derived from it. This results in path length shrinking in the detector. Results for the variance in the detector response are shown in comparison with other patterns for the stretching parameter. The effect of anisotropic scattering on the path length stretching parameter is taken into account. (author)
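
    For reference, the basic mechanics of path length stretching are as follows: free flights are sampled from an exponential with a reduced (stretched) cross-section, and the exact ratio of true to biased densities is carried as a statistical weight, which keeps the estimator unbiased. A minimal sketch with illustrative parameters (not the optimized values derived in the paper):

```python
# Sketch of path-length stretching (exponential biasing): sample flights
# from a stretched exponential and carry the exact weight ratio f/f*.
import numpy as np

rng = np.random.default_rng(2)
sigma_t = 1.0                        # true total cross-section (1/mfp)
p = 0.5                              # stretching parameter, 0 <= p < 1
sigma_b = sigma_t * (1.0 - p)        # biased (stretched) cross-section

def stretched_flight():
    s = rng.exponential(1.0 / sigma_b)                          # biased length
    w = (sigma_t / sigma_b) * np.exp(-(sigma_t - sigma_b) * s)  # weight f/f*
    return s, w

for s, w in (stretched_flight() for _ in range(5)):
    print(f"path length {s:.3f}, weight {w:.3f}")
```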

  14. MARS CODE MANUAL VOLUME III - Programmer's Manual

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Hwang, Moon Kyu; Jeong, Jae Jun; Kim, Kyung Doo; Bae, Sung Won; Lee, Young Jin; Lee, Won Jae

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the equation of state (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This programmer's manual provides a complete overview of the code structure and the input/output functions of MARS. In addition, brief descriptions of each subroutine and of the major variables used in MARS are also included in this report, so this report should be very useful for code maintenance. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such the layout is very similar to that of RELAP5. This similarity to the RELAP5 input is intentional, as this input scheme allows minimal modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  15. "ON ALGEBRAIC DECODING OF Q-ARY REED-MULLER AND PRODUCT REED-SOLOMON CODES"

    Energy Technology Data Exchange (ETDEWEB)

    SANTHI, NANDAKISHORE [Los Alamos National Laboratory

    2007-01-22

    We consider a list decoding algorithm recently proposed by Pellikaan-Wu for q-ary Reed-Muller codes RM_q(ℓ, m, n) of length n ≤ q^m when ℓ ≤ q. A simple and easily accessible correctness proof is given which shows that this algorithm achieves a relative error-correction radius of τ ≤ 1 − √(ℓq^(m−1)/n). This is an improvement over the proof using the one-point Algebraic-Geometric decoding method given in earlier work. The described algorithm can be adapted to decode product Reed-Solomon codes. We then propose a new low-complexity recursive algebraic decoding algorithm for product Reed-Solomon codes and Reed-Muller codes. This algorithm achieves a relative error-correction radius of τ ≤ ∏_{i=1}^{m} (1 − √(k_i/q)). This algorithm is then proved to outperform the Pellikaan-Wu algorithm in both complexity and error-correction radius over a wide range of code rates.

  16. Modeling RERTR experimental fuel plates using the PLATE code

    International Nuclear Information System (INIS)

    Hayes, S.L.; Meyer, M.K.; Hofman, G.L.; Snelgrove, J.L.; Brazener, R.A.

    2003-01-01

    Modeling results using the PLATE dispersion fuel performance code are presented for the U-Mo/Al experimental fuel plates from the RERTR-1, -2, -3 and -5 irradiation tests. Agreement of the calculations with experimental data obtained in post-irradiation examinations of these fuels, where available, is shown to be good. Use of the code to perform a series of parametric evaluations highlights the sensitivity of U-Mo dispersion fuel performance to fabrication variables, especially fuel particle shape and size distributions. (author)

  17. A computer code simulating multistage chemical exchange column under wide range of operating conditions

    International Nuclear Information System (INIS)

    Yamanishi, Toshihiko; Okuno, Kenji

    1996-09-01

    A computer code has been developed to simulate a multistage CECE (Combined Electrolysis Chemical Exchange) column. The solution of the basic equations is found by the Newton-Raphson method. The independent variables are the atom fractions of D and T in each stage for the case where H is dominant within the column; these variables are replaced by the atom fractions of H and T when D is dominant. Several effective techniques have also been developed to obtain a solution of the basic equations: a procedure for setting the initial values of the independent variables, and a procedure for ensuring convergence of the Newton-Raphson method. The computer code allows us to simulate the column behavior under a wide range of operating conditions. Even for a severe case, where the dominant species changes along the column height, the code can produce a solution of the basic equations. (author)
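
    A generic sketch of the Newton-Raphson iteration used for such stage equations, with a finite-difference Jacobian; the two-equation system below is a stand-in, not the CECE column model:

```python
# Generic multivariate Newton-Raphson with a numerical Jacobian.
# The example system F is illustrative, not the stage equations.
import numpy as np

def newton_raphson(F, x0, tol=1e-10, max_iter=50, h=1e-7):
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((len(f), len(x)))
        for j in range(len(x)):              # finite-difference column j
            xp = x.copy(); xp[j] += h
            J[:, j] = (F(xp) - f) / h
        x = x - np.linalg.solve(J, f)        # Newton step
    return x

F = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]])
print(newton_raphson(F, [0.5, 0.5]))         # root near (0.618, 0.618)
```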

  18. Fast algorithm for two-dimensional data table use in hydrodynamic and radiative-transfer codes

    International Nuclear Information System (INIS)

    Slattery, W.L.; Spangenberg, W.H.

    1982-01-01

    A fast algorithm for finding interpolated atomic data in irregular two-dimensional tables with differing materials is described. The algorithm is tested in a hydrodynamic/radiative transfer code and shown to be of comparable speed to interpolation in regularly spaced tables, which require no table search. The concepts presented are expected to have application in any situation with irregular vector lengths. Also, the procedures that were rejected either because they were too slow or because they involved too much assembly coding are described
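
    The essential difficulty is that each row of such a table has its own grid, so a lookup must search twice before blending. A minimal sketch of that pattern (the table values are synthetic, and the real code's search and interpolation details may differ):

```python
# Sketch of interpolation in an irregular 2-D table: each temperature row
# carries its own density grid of a different length, so the lookup bisects
# the row list first, then each row. Data are synthetic placeholders.
from bisect import bisect_right

# rows: (T, densities, values), with per-row grids of differing lengths
table = [
    (100.0, [0.1, 1.0, 10.0],      [1.0, 2.0, 3.0]),
    (200.0, [0.1, 0.5, 2.0, 10.0], [1.5, 1.8, 2.6, 3.4]),
]

def interp_row(ds, vs, d):
    i = min(max(bisect_right(ds, d) - 1, 0), len(ds) - 2)
    f = (d - ds[i]) / (ds[i + 1] - ds[i])
    return vs[i] + f * (vs[i + 1] - vs[i])

def lookup(T, d):
    ts = [row[0] for row in table]
    i = min(max(bisect_right(ts, T) - 1, 0), len(ts) - 2)
    g = (T - ts[i]) / (ts[i + 1] - ts[i])
    v0 = interp_row(table[i][1], table[i][2], d)
    v1 = interp_row(table[i + 1][1], table[i + 1][2], d)
    return v0 + g * (v1 - v0)

print(lookup(150.0, 1.0))
```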

  19. [Renal length measured by ultrasound in adult Mexican population].

    Science.gov (United States)

    Oyuela-Carrasco, J; Rodríguez-Castellanos, F; Kimura, E; Delgado-Hernández, R; Herrera-Félix, J P

    2009-01-01

    Renal length estimation by ultrasound is an important parameter in the clinical evaluation of kidney disease and of healthy donors. Changes in renal volume may be a sign of kidney disease. Correct interpretation of renal length requires knowledge of its normal limits, which have not been described for Latin American populations. To describe normal renal length (RL) by ultrasonography in a group of Mexican adults. Ultrasound measurement of RL in 153 healthy Mexican adults stratified by age; the association of RL with several anthropometric variables is described. A total of 77 males and 76 females were scanned. The average age for the group was 44.12 +/- 15.44 years. The mean weight, body mass index (BMI) and height were 68.87 +/- 11.69 kg, 26.77 +/- 3.82 kg/m2 and 160 +/- 8.62 cm, respectively. Dividing the population by gender showed a height of 166 +/- 6.15 cm for males and 154.7 +/- 5.97 cm for females (p = 0.000). Left renal length (LRL) in the whole group was 105.8 +/- 7.56 mm and right renal length (RRL) was 104.3 +/- 6.45 mm (p = 0.000). The LRL for males was 107.16 +/- 6.97 mm and for females 104.6 +/- 7.96 mm. The average RRL for males was 105.74 +/- 5.74 mm and for females 102.99 +/- 6.85 mm (p = 0.008). We noted that RL decreased with age and that the rate of decline accelerates after 60 years of age. Both lengths correlated significantly and positively with weight, BMI and height. The RL was significantly larger in males than in females for both kidneys (p = 0.036) in this Mexican population. Renal length declines after 60 years of age and especially after 70 years.

  20. Variable & Recode Definitions - SEER Documentation

    Science.gov (United States)

    Resources that define variables and provide documentation for reporting using SEER and related datasets. Choose from SEER coding and staging manuals plus instructions for recoding behavior, site, stage, cause of death, insurance, and several additional topics. Also guidance on months survived, calculating Hispanic mortality, and site-specific surgery.