WorldWideScience

Sample records for variable length coding

  1. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant

  2. Variable-length code construction for incoherent optical CDMA systems

    Science.gov (United States)

    Lin, Jen-Yung; Jhou, Jhih-Syue; Wen, Jyh-Horng

    2007-04-01

The purpose of this study is to investigate multirate transmission in fiber-optic code-division multiple-access (CDMA) networks. In this article, we propose a variable-length code construction for any existing optical orthogonal code to implement a multirate optical CDMA system (called the multirate code system). For comparison, a multirate system where the lower-rate user sends each symbol twice is implemented and is called the repeat code system. The repetition as an error-detection code in an ARQ scheme in the repeat code system is also investigated. Moreover, a parallel approach for optical CDMA systems, proposed by Marić et al., is also compared with the systems proposed in this study. Theoretical analysis shows that the bit error probability of the proposed multirate code system is smaller than that of the other systems, especially when the number of lower-rate users is large. Moreover, if there is at least one lower-rate user in the system, the multirate code system accommodates more users than the other systems when the error probability of the system is set below 10^-9.

  3. Lossless quantum data compression and variable-length coding

    International Nuclear Information System (INIS)

    Bostroem, Kim; Felbinger, Timo

    2002-01-01

In order to compress quantum messages without loss of information it is necessary to allow the length of the encoded messages to vary. We develop a general framework for variable-length quantum messages in close analogy to the classical case and show that lossless compression is only possible if the message to be compressed is known to the sender. The lossless compression of an ensemble of messages is bounded from below by its von Neumann entropy. We show that it is possible to reduce the number of qubits passing through a quantum channel even below the von Neumann entropy by adding a classical side channel. We give an explicit communication protocol that realizes lossless and instantaneous quantum data compression and apply it to a simple example. This protocol can be used for both online quantum communication and storage of quantum data.
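
The von Neumann entropy bound mentioned above is straightforward to evaluate numerically. A minimal sketch using NumPy (the two-state ensemble here is a hypothetical example chosen purely for illustration):

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log2 rho), in qubits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]            # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

# Hypothetical ensemble: |0> with prob 0.5 and |+> with prob 0.5.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)

S = von_neumann_entropy(rho)  # lower bound on average qubits per message
```

For this ensemble S is about 0.60 qubits, strictly below the 1 qubit a fixed-length encoding would need, which is what makes variable-length (plus classical side information) schemes attractive.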

  4. Construction and performance research on variable-length codes for multirate OCDMA multimedia networks

    Science.gov (United States)

    Li, Chuan-qi; Yang, Meng-jie; Luo, De-jun; Lu, Ye; Kong, Yi-pu; Zhang, Dong-chuang

    2014-09-01

A new kind of variable-length code with good correlation properties for multirate asynchronous optical code division multiple access (OCDMA) multimedia networks is proposed, called non-repetition interval (NRI) codes. The NRI codes can be constructed by structuring interval-sets with no repetition, and the code length depends on the number of users and the code weight. According to the structural characteristics of NRI codes, the formula of the bit error rate (BER) is derived. Compared with other variable-length codes, the NRI codes have a lower BER. A multirate OCDMA multimedia simulation system is designed and built, in which the longer codes are assigned to lower-rate users, while the shorter codes are assigned to higher-rate users. Analysis of the eye diagrams shows that lower-rate users have a lower BER, which matches the actual demand in multimedia data transport.

  5. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)

  6. Broadcasting a Common Message with Variable-Length Stop-Feedback codes

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe

    2015-01-01

We investigate the maximum coding rate achievable over a two-user broadcast channel for the scenario where a common message is transmitted using variable-length stop-feedback codes. Specifically, upon decoding the common message, each decoder sends a stop signal to the encoder, which transmits continuously until it receives both stop signals. For the point-to-point case, Polyanskiy, Poor, and Verdú (2011) recently demonstrated that variable-length coding combined with stop feedback significantly increases the speed at which the maximum coding rate converges to capacity. This speed-up manifests itself in the absence of a square-root penalty in the asymptotic expansion of the maximum coding rate for large blocklengths, a result also known as zero dispersion. In this paper, we show that this speed-up does not necessarily occur for the broadcast channel with common message. Specifically...

  7. Variable-Length Coding with Stop-Feedback for the Common-Message Broadcast Channel

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe

    2016-01-01

This paper investigates the maximum coding rate over a K-user discrete memoryless broadcast channel for the scenario where a common message is transmitted using variable-length stop-feedback codes. Specifically, upon decoding the common message, each decoder sends a stop signal to the encoder, which transmits continuously until it receives all K stop signals. We present nonasymptotic achievability and converse bounds for the maximum coding rate, which strengthen and generalize the bounds previously reported in Trillingsgaard et al. (2015) for the two-user case. An asymptotic analysis of these bounds reveals that, contrary to the point-to-point case, the second-order term in the asymptotic expansion of the maximum coding rate decays inversely proportional to the square root of the average blocklength. This holds for certain nontrivial common-message broadcast channels, such as the binary...

  8. Adaptive variable-length coding for efficient compression of spacecraft television data.

    Science.gov (United States)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
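
The record above describes sample-to-sample prediction followed by variable-length coding of the residuals without stored code words. The Golomb-Rice coding idea underlying such coders can be sketched as follows (a minimal illustration, not the actual Rice-Plaunt Basic Compressor; the parameter k and the zigzag mapping are standard but illustrative choices):

```python
def zigzag(d):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,... -> 0,1,2,3,..."""
    return (d << 1) if d >= 0 else ((-d << 1) - 1)

def rice_encode(residuals, k):
    """Rice code with parameter k: unary quotient, '0' separator, k-bit remainder."""
    out = []
    for d in residuals:
        n = zigzag(d)
        q, r = n >> k, n & ((1 << k) - 1)
        out.append("1" * q + "0" + (format(r, f"0{k}b") if k else ""))
    return "".join(out)

def rice_decode(bits, k, count):
    """Invert rice_encode: read unary quotient, then k remainder bits."""
    vals, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == "1":
            q += 1; i += 1
        i += 1                                     # skip the terminating '0'
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        n = (q << k) | r
        vals.append(n >> 1 if n % 2 == 0 else -((n + 1) >> 1))
    return vals
```

Small residuals (the common case after good prediction) get short code words, and no table is stored; adaptivity then amounts to choosing k per block from the data statistics.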

  9. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Directory of Open Access Journals (Sweden)

    Pierre Siohan

    2005-05-01

Full Text Available Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  10. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Science.gov (United States)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  11. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    Science.gov (United States)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming a significant number of memory accesses. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding by using a program instead of all the VLCTs. The decoded codeword from the VLCTs can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves a 100% memory access saving and a 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance than conventional CAVLC decoding methods, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
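
The general idea of replacing a VLC table with computation can be sketched with canonical Huffman codes, which are decodable arithmetically from per-length code counts alone (an illustration of table-free VLC decoding in general, not of the actual H.264/AVC CAVLC tables):

```python
def canonical_decode(bits, counts, symbols):
    """Decode a canonical-Huffman bitstring with arithmetic only.
    counts[l] = number of code words of length l+1;
    symbols   = symbols listed in canonical (length, then lexical) order."""
    out, i = [], 0
    while i < len(bits):
        code, first, index = 0, 0, 0
        for cnt in counts:
            code = (code << 1) | int(bits[i]); i += 1
            if code - first < cnt:              # code word of this length found
                out.append(symbols[index + code - first])
                break
            index += cnt
            first = (first + cnt) << 1
    return out

# Hypothetical code: a=0, b=10, c=110, d=111 (one length-1, one length-2, two length-3).
decoded = canonical_decode("010110111", [1, 1, 2], ["a", "b", "c", "d"])
```

Only the small `counts`/`symbols` arrays are kept; the per-code-word table is replaced by a few shifts and compares, which is the spirit of the memory-access saving claimed above.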

  12. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is symmetric and positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, contrary to the normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. The LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
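
The flavor of an LZW-based similarity can be sketched by comparing the phrase sets that the LZW dictionary builds for two sequences. This toy Jaccard-style measure is an assumption-laden illustration of the idea only; the actual LZW-Kernel uses a different normalization:

```python
def lzw_phrases(s):
    """Phrases added to the LZW dictionary during a single compression pass over s."""
    singles = set(s)                  # seed dictionary with the observed characters
    phrases, w = set(), ""
    for c in s:
        if w + c in singles or w + c in phrases:
            w += c                    # extend the current match
        else:
            phrases.add(w + c)        # emit a new dictionary phrase
            w = c
    return phrases

def phrase_similarity(a, b):
    """Toy Jaccard similarity over LZW phrase sets (1.0 for self-similarity)."""
    pa, pb = lzw_phrases(a), lzw_phrases(b)
    return len(pa & pb) / len(pa | pb) if pa | pb else 1.0
```

Like the kernel in the record, this is alignment-free, one-pass, symmetric, and yields exactly 1.0 for self-similarity.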

  13. Continuously variable focal length lens

    Science.gov (United States)

    Adams, Bernhard W; Chollet, Matthieu C

    2013-12-17

A material, preferably in crystal form, having a low atomic number such as beryllium (Z=4) provides for the focusing of x-rays in a continuously variable manner. The material is provided with plural spaced curvilinear, optically matched slots and/or recesses through which an x-ray beam is directed. The focal length of the material may be decreased or increased by increasing or decreasing, respectively, the number of slots (or recesses) through which the x-ray beam is directed, while fine tuning of the focal length is accomplished by rotation of the material so as to change the path length of the x-ray beam through the aligned cylindrical slots. X-ray analysis of a fixed point in a solid material may be performed by scanning the energy of the x-ray beam while rotating the material to maintain the beam's focal point at a fixed point in the specimen undergoing analysis.

  14. Context quantization by minimum adaptive code length

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, Xiaolin

    2007-01-01

    Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....
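
The adaptive code length criterion can be made concrete with the Krichevsky-Trofimov sequential estimator, a standard choice for adaptive coding of binary symbols (shown here as an illustrative sketch; the paper's exact coder may differ):

```python
import math

def adaptive_code_length(bits):
    """Adaptive (sequential) code length in bits of a 0/1 sequence under the
    Krichevsky-Trofimov estimator: each symbol b costs
    -log2((n_b + 1/2) / (n_0 + n_1 + 1)) given the counts so far."""
    n = [0, 0]
    length = 0.0
    for b in bits:
        p = (n[b] + 0.5) / (n[0] + n[1] + 1)
        length -= math.log2(p)
        n[b] += 1
    return length
```

A context quantizer built on this criterion merges two contexts exactly when the adaptive code length of their pooled symbol stream is no larger than the sum of their separate code lengths; skewed contexts (e.g. all zeros) cost far fewer bits than balanced ones.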

  15. Critical lengths of error events in convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    1994-01-01

If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes...

  16. Critical Lengths of Error Events in Convolutional Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Andersen, Jakob Dahl

    1998-01-01

If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes...

  17. Design LDPC Codes without Cycles of Length 4 and 6

    Directory of Open Access Journals (Sweden)

    Kiseon Kim

    2008-04-01

Full Text Available We present an approach for constructing LDPC codes without cycles of length 4 and 6. First, we design three submatrices with different shifting functions given by the proposed schemes, then combine them into the matrix specified by the proposed approach, and finally expand the matrix into a desired parity-check matrix using identity matrices and cyclic shift matrices of the identity matrices. Simulation results in the AWGN channel verify that the BER of the proposed code is close to those of MacKay's random codes and Tanner's QC codes, and the good BER performance of the proposed codes remains at high code rates.
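
Whether a candidate parity-check matrix is free of length-4 cycles is easy to test: a 4-cycle in the Tanner graph is exactly a pair of rows whose supports overlap in two or more columns. A minimal checker (the example matrices below are hypothetical, not from the paper):

```python
from itertools import combinations

def has_length4_cycle(H):
    """True iff the Tanner graph of H (list of 0/1 rows) contains a 4-cycle,
    i.e. two rows share a 1 in at least two columns."""
    supports = [{j for j, v in enumerate(row) if v} for row in H]
    return any(len(a & b) >= 2 for a, b in combinations(supports, 2))

# Hypothetical examples:
H_bad = [[1, 1, 0, 0],
         [1, 1, 0, 1]]          # rows share columns 0 and 1 -> 4-cycle
H_ok  = [[1, 1, 0, 0],
         [1, 0, 1, 0],
         [0, 1, 0, 1]]          # every row pair overlaps in at most one column
```

Ruling out 6-cycles requires a similar but longer walk-based check; construction methods like the one above choose shifts so that these tests pass by design rather than by search.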

  18. Variable weight spectral amplitude coding for multiservice OCDMA networks

    Science.gov (United States)

    Seyedzadeh, Saleh; Rahimian, Farzad Pour; Glesk, Ivan; Kakaee, Majid H.

    2017-09-01

    The emergence of heterogeneous data traffic such as voice over IP, video streaming and online gaming have demanded networks with capability of supporting quality of service (QoS) at the physical layer with traffic prioritisation. This paper proposes a new variable-weight code based on spectral amplitude coding for optical code-division multiple-access (OCDMA) networks to support QoS differentiation. The proposed variable-weight multi-service (VW-MS) code relies on basic matrix construction. A mathematical model is developed for performance evaluation of VW-MS OCDMA networks. It is shown that the proposed code provides an optimal code length with minimum cross-correlation value when compared to other codes. Numerical results for a VW-MS OCDMA network designed for triple-play services operating at 0.622 Gb/s, 1.25 Gb/s and 2.5 Gb/s are considered.

  19. Continuous-variable quantum erasure correcting code

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Sabuncu, Metin; Huck, Alexander

    2010-01-01

    We experimentally demonstrate a continuous variable quantum erasure-correcting code, which protects coherent states of light against complete erasure. The scheme encodes two coherent states into a bi-party entangled state, and the resulting 4-mode code is conveyed through 4 independent channels...

  20. String matching with variable length gaps

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Vildhøj, Hjalte Wedel

    2012-01-01

...primitive in computational biology applications. Let m and n be the lengths of P and T, respectively, and let k be the number of strings in P. We present a new algorithm achieving time O(n log k + m + α) and space O(m + A), where A is the sum of the lower bounds of the lengths of the gaps in P and α is the total number of occurrences of the strings in P within T. Compared to the previous results this bound essentially achieves the best known time and space complexities simultaneously. Consequently, our algorithm obtains the best known bounds for almost all combinations of m, n, k, A, and α. Our algorithm...
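
The problem itself is easy to state operationally: report positions where the k strings of P occur in order, with each gap length inside its bounds. A naive regex sketch of that semantics (not the O(n log k + m + α) algorithm of the paper; function and variable names are illustrative):

```python
import re

def vlg_find(strings, gaps, text):
    """Match a variable-length-gap pattern s_0 .{a_1,b_1} s_1 ... in text,
    returning start positions of (non-overlapping) matches."""
    pattern = re.escape(strings[0])
    for (a, b), s in zip(gaps, strings[1:]):
        pattern += ".{%d,%d}" % (a, b) + re.escape(s)
    return [m.start() for m in re.finditer(pattern, text)]

# e.g. "AT", then a gap of 1..3 characters, then "GC":
hits = vlg_find(["AT", "GC"], [(1, 3)], "ATxGCyyATxxxxGC")
```

The second "AT" in the example text is followed by a 4-character gap, outside the bound (1, 3), so only the first occurrence matches; the paper's algorithm reports all α occurrences efficiently rather than via backtracking.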

  1. Roughness Length Variability over Heterogeneous Surfaces

    Science.gov (United States)

    2010-03-01

... the influence of variable roughness reaches its maximum at the height of the local z0 and vanishes at the so-called blending height (Wieringa ...). ... the distribution of visibility restrictors such as low clouds, fog, haze, dust, and pollutants. An improved understanding of ABL structure ...

  2. Convolutional Encoder and Viterbi Decoder Using SOPC For Variable Constraint Length

    DEFF Research Database (Denmark)

    Kulkarni, Anuradha; Dnyaneshwar, Mantri; Prasad, Neeli R.

    2013-01-01

Convolutional encoders and Viterbi decoders are basic and important blocks in any code division multiple access (CDMA) system. They are widely used in communication systems due to their error-correcting capability, but the performance degrades with variable constraint length. In this context, to have a detailed analysis, this paper deals with the implementation of a convolutional encoder and Viterbi decoder using a system on a programmable chip (SOPC). It uses variable constraint lengths of 7, 8 and 9 bits for 1/2 and 1/3 code rates. By analyzing the Viterbi algorithm it is seen that our algorithm has a better...

  3. New extremal binary self-dual codes of lengths 64 and 66 from bicubic planar graphs

    OpenAIRE

    Kaya, Abidin

    2016-01-01

In this work, connected cubic planar bipartite graphs and related binary self-dual codes are studied. Binary self-dual codes of length 16 are obtained by face-vertex incidence matrices of these graphs. By considering their lifts to the ring R_2, new extremal binary self-dual codes of length 64 are constructed as Gray images. More precisely, we construct 15 new codes of length 64. Moreover, 10 new codes of length 66 were obtained by applying a building-up construction to the binary codes. Code...

  4. An Assessment of the Length and Variability of Mercury's Magnetotail

    Science.gov (United States)

    Milan, S. E.; Slavin, J. A.

    2011-01-01

We employ Mariner 10 measurements of the interplanetary magnetic field in the vicinity of Mercury to estimate the rate of magnetic reconnection between the interplanetary magnetic field and the Hermean magnetosphere. We derive a time series of the open magnetic flux in Mercury's magnetosphere, from which we can deduce the length of the magnetotail. The length of the magnetotail is shown to be highly variable, with open field lines stretching between 15 R_H and 850 R_H downstream of the planet (median 150 R_H). Scaling laws allow the tail length at perihelion to be deduced from the aphelion Mariner 10 observations.

  5. Chaotic behaviour of a pendulum with variable length

    Energy Technology Data Exchange (ETDEWEB)

    Bartuccelli, M; Christiansen, P L; Muto, V; Soerensen, M P; Pedersen, N F

    1987-08-01

The Melnikov function for the prediction of Smale horseshoe chaos is applied to a driven damped pendulum with variable length. Depending on the parameters, it is shown that this dynamical system undergoes heteroclinic bifurcations which are the source of the unstable chaotic motion. The analytical results are illustrated by new numerical simulations. Furthermore, using the averaging theorem, the stability of the subharmonics is studied.

  6. Analysis of the Length of Braille Texts in English Braille American Edition, the Nemeth Code, and Computer Braille Code versus the Unified English Braille Code

    Science.gov (United States)

    Knowlton, Marie; Wetzel, Robin

    2006-01-01

    This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…

  7. Components of genetic variability of ear length of silage maize

    Directory of Open Access Journals (Sweden)

    Sečanski Mile

    2006-01-01

Full Text Available The objective of this study was to evaluate the following parameters of the ear length of silage maize: variability of inbred lines and their diallel hybrids, superior-parent heterosis, and genetic components of variability and heritability on the basis of a diallel set. The analysis of genetic variance shows that the additive component (D) was lower than the dominant (H1 and H2) genetic variances, while the frequency of dominant genes (u) for this trait was greater than the frequency of recessive genes (v). Furthermore, this is also confirmed by the dominant-to-recessive genes ratio in parental inbreds for the ear length (Kd/Kr > 1), which is greater than unity during both investigation years. The calculated value of the average degree of dominance √(H1/D) is greater than unity, pointing to superdominance in inheritance of this trait in both years of investigation, which is also confirmed by the results of Vr/Wr regression analysis of inheritance of the ear length. As the presence of non-allelic interaction was established, it is necessary to study the effects of epistasis, as it can have greater significance in certain hybrids. A greater value of dominant than additive variance resulted in high broad-sense heritability for ear length in both investigation years.

  8. Detecting Scareware by Mining Variable Length Instruction Sequences

    OpenAIRE

    Shahzad, Raja Khurram; Lavesson, Niklas

    2011-01-01

    Scareware is a recent type of malicious software that may pose financial and privacy-related threats to novice users. Traditional countermeasures, such as anti-virus software, require regular updates and often lack the capability of detecting novel (unseen) instances. This paper presents a scareware detection method that is based on the application of machine learning algorithms to learn patterns in extracted variable length opcode sequences derived from instruction sequences of binary files....

  9. Construction of Short-length High-rates Ldpc Codes Using Difference Families

    OpenAIRE

    Deny Hamdani; Ery Safrianti

    2007-01-01

Low-density parity-check (LDPC) code is a linear-block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm, and in many cases, capable of outperforming turbo codes. This paper presents a class of low-density parity-check (LDPC) codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code r...

  10. The Classification of Complementary Information Set Codes of Lengths 14 and 16

    OpenAIRE

    Freibert, Finley

    2012-01-01

In the paper "A new class of codes for Boolean masking of cryptographic computations," Carlet, Gaborit, Kim, and Solé defined a new class of rate one-half binary codes called complementary information set (or CIS) codes. The authors then classified all CIS codes of length less than or equal to 12. CIS codes have relations to classical coding theory as they are a generalization of self-dual codes. As stated in the paper, CIS codes also have important practical applications as they m...

  11. Variable Frame Rate and Length Analysis for Data Compression in Distributed Speech Recognition

    DEFF Research Database (Denmark)

    Kraljevski, Ivan; Tan, Zheng-Hua

    2014-01-01

    This paper addresses the issue of data compression in distributed speech recognition on the basis of a variable frame rate and length analysis method. The method first conducts frame selection by using a posteriori signal-to-noise ratio weighted energy distance to find the right time resolution...... length for steady regions. The method is applied to scalable source coding in distributed speech recognition where the target bitrate is met by adjusting the frame rate. Speech recognition results show that the proposed approach outperforms other compression methods in terms of recognition accuracy...... for noisy speech while achieving higher compression rates....
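
The frame-selection step can be sketched as thresholding an SNR-weighted distance between each frame and the last retained frame. This is a deliberately simplified illustration; the paper's a posteriori SNR weighting and energy distance are more elaborate:

```python
def select_frames(energies, snrs, threshold):
    """Variable-frame-rate selection sketch: keep frame t when its SNR-weighted
    energy distance from the last kept frame exceeds `threshold`."""
    kept = [0]                                   # always keep the first frame
    for t in range(1, len(energies)):
        weight = max(snrs[t], 0.0)               # trust low-SNR frames less
        if weight * abs(energies[t] - energies[kept[-1]]) > threshold:
            kept.append(t)
    return kept

# Hypothetical frame energies: two steady regions and two rapid changes.
kept = select_frames([1.0, 1.05, 3.0, 3.02, 6.0], [1.0] * 5, threshold=1.0)
```

Steady regions contribute few frames (long effective frame length) and rapid regions many (short frame length), and the threshold becomes the knob that meets a target bitrate.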

  12. On the Effects of Heterogeneous Packet Lengths on Network Coding

    DEFF Research Database (Denmark)

    Compta, Pol Torres; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2014-01-01

    Random linear network coding (RLNC) has been shown to provide increased throughput, security and robustness for the transmission of data through the network. Most of the analysis and the demonstrators have focused on the study of data packets with the same size (number of bytes). This constitutes...

  13. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...

  14. Variable code gamma ray imaging system

    International Nuclear Information System (INIS)

    Macovski, A.; Rosenfeld, D.

    1979-01-01

    A gamma-ray source distribution in the body is imaged onto a detector using an array of apertures. The transmission of each aperture is modulated using a code such that the individual views of the source through each aperture can be decoded and separated. The codes are chosen to maximize the signal to noise ratio for each source distribution. These codes determine the photon collection efficiency of the aperture array. Planar arrays are used for volumetric reconstructions and circular arrays for cross-sectional reconstructions. 14 claims

  15. Synthesizer for decoding a coded short wave length irradiation

    International Nuclear Information System (INIS)

    1976-01-01

The system uses a point irradiation source, typically an X-ray emitter, which illuminates a three-dimensional object consisting of a set of parallel planes, each of which acts as a source of coded information. The secondary source images are superimposed on a common flat screen. The decoding system comprises an input light-screen detector, a picture screen amplifier, a beam deflector, an output picture screen, an optical focussing unit including three lenses, a masking unit, an output light-screen detector and a video signal reproduction unit of cathode ray tube form, or similar, to create a three-dimensional image of the object. (G.C.)

  16. Construction of Short-Length High-Rates LDPC Codes Using Difference Families

    Directory of Open Access Journals (Sweden)

    Deny Hamdani

    2010-10-01

Full Text Available Low-density parity-check (LDPC) code is a linear block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm and is, in many cases, capable of outperforming turbo code. This paper presents a class of LDPC codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code rate, can be encoded with low complexity due to its quasi-cyclic structure, and performs well when it is iteratively decoded with the sum-product algorithm. These properties make the code quite suitable for applications in future wireless local area networks.
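
As a toy illustration of the difference-family construction described above (this is not the authors' code; the (7, 3, 1) planar difference set and the matrix size are chosen purely for illustration), the sketch below stacks the cyclic shifts of a base block's incidence vector into a small quasi-cyclic parity-check matrix:

```python
# Sketch: build a sparse quasi-cyclic parity-check matrix H from a
# difference family.  The single base block {0, 1, 3} mod 7 is a
# (7, 3, 1) planar difference set: every nonzero residue mod 7 occurs
# exactly once as a difference of two block elements, which keeps any
# two columns of H from overlapping in more than one position.

def circulant_rows(block, v):
    """All v cyclic shifts of the incidence vector of `block` mod v."""
    base = [1 if j in block else 0 for j in range(v)]
    return [[base[(j - s) % v] for j in range(v)] for s in range(v)]

def ldpc_from_difference_family(blocks, v):
    """Stack one v x v circulant per base block into a quasi-cyclic H."""
    H = []
    for b in blocks:
        H.extend(circulant_rows(b, v))
    return H

H = ldpc_from_difference_family([{0, 1, 3}], 7)
# Every row and column of this 7x7 circulant has weight 3, and any two
# distinct columns share exactly one nonzero position.
```

The quasi-cyclic structure is what gives the low encoding complexity: the whole matrix is determined by one shift register per base block.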

  17. Statistical screening of input variables in a complex computer code

    International Nuclear Information System (INIS)

    Krieger, T.J.

    1982-01-01

A method is presented for "statistical screening" of input variables in a complex computer code. The object is to determine the "effective" or important input variables by estimating the relative magnitudes of their associated sensitivity coefficients. This is accomplished by performing a numerical experiment consisting of a relatively small number of computer runs with the code, followed by a statistical analysis of the results. A formula for estimating the sensitivity coefficients is derived. Reference is made to an earlier work in which the method was applied to a complex reactor code with good results
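
A minimal sketch of this kind of screening experiment, with a hypothetical linear `model` standing in for the complex code (the covariance-based estimator below illustrates the general idea, not the paper's exact formula):

```python
import random
import statistics

def screen_inputs(model, n_inputs, n_runs=200, seed=1):
    """Estimate each input's sensitivity coefficient from random runs:
    for independently sampled inputs, cov(x_i, y) / var(x_i)
    approximates the linear sensitivity dy/dx_i."""
    rng = random.Random(seed)
    runs = [[rng.uniform(0, 1) for _ in range(n_inputs)]
            for _ in range(n_runs)]
    ys = [model(x) for x in runs]
    ybar = statistics.fmean(ys)
    coeffs = []
    for i in range(n_inputs):
        xs = [x[i] for x in runs]
        xbar = statistics.fmean(xs)
        cov = sum((a - xbar) * (b - ybar)
                  for a, b in zip(xs, ys)) / (len(xs) - 1)
        coeffs.append(cov / statistics.variance(xs))
    return coeffs

# Hypothetical "complex code": output dominated by inputs 0 and 2.
model = lambda x: 5.0 * x[0] + 0.1 * x[1] + 3.0 * x[2]
s = screen_inputs(model, 3)
# s[0] and s[2] come out near 5 and 3, flagging inputs 0 and 2 as the
# "effective" variables; s[1] stays near 0.1.
```

Only a couple of hundred code runs are needed because all coefficients are estimated from the same sample, which is the point of screening.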

  18. Variable Dimension Trellis-Coded Quantization of Sinusoidal Parameters

    DEFF Research Database (Denmark)

    Larsen, Morten Holm; Christensen, Mads G.; Jensen, Søren Holdt

    2008-01-01

    In this letter, we propose joint quantization of the parameters of a set of sinusoids based on the theory of trellis-coded quantization. A particular advantage of this approach is that it allows for joint quantization of a variable number of sinusoids, which is particularly relevant in variable...

  19. Fixed capacity and variable member grouping assignment of orthogonal variable spreading factor code tree for code division multiple access networks

    Directory of Open Access Journals (Sweden)

    Vipin Balyan

    2014-08-01

Full Text Available Orthogonal variable spreading factor codes are used in the downlink to maintain the orthogonality between different channels and to handle new calls arriving in the system. A period of operation leads to fragmentation of vacant codes, which causes the code blocking problem. The assignment scheme proposed in this paper is not affected by fragmentation, as the fragmentation is generated by the scheme itself. In this scheme, the code tree is divided into groups whose capacity is fixed and whose number of members (codes) is variable. The group with the maximum number of busy members is used for assignment; this leads to fragmentation of busy groups around the code tree and compactness within each group. The proposed scheme is evaluated and compared with other schemes using parameters such as code blocking probability and call establishment delay. Simulations demonstrate that the proposed scheme not only adequately reduces the code blocking probability, but also requires significantly less time to locate a vacant code for assignment, which makes it suitable for real-time calls.
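
The orthogonality constraint behind code blocking can be sketched as follows (a simplified illustration; the paper's fixed-capacity grouping policy is not reproduced). In the OVSF binary code tree, a code is assignable only if neither an ancestor nor a descendant is already in use:

```python
# Codes are named by their path in the binary OVSF tree: "" is the
# root (spreading factor 1), "0"/"1" its children, and so on.  A code
# is blocked if any busy code lies on the path above it or in the
# subtree below it, because such pairs are not mutually orthogonal.

def blocked(code, busy):
    """True if `code` cannot be assigned given the set of busy codes."""
    ancestors = {code[:i] for i in range(len(code))}
    if ancestors & busy:
        return True                       # an ancestor is busy
    return any(b.startswith(code) for b in busy)  # a descendant is busy

busy = {"00", "1"}               # two calls already hold codes
assert blocked("0", busy)        # parent of busy "00"
assert blocked("10", busy)       # descendant of busy "1"
assert not blocked("01", busy)   # free branch: assignable
```

Fragmentation arises when the assignable codes (like "01" here) are scattered across the tree, which is exactly what grouping schemes try to control.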

  20. An Amplitude Spectral Capon Estimator with a Variable Filter Length

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Smaragdis, Paris; Christensen, Mads Græsbøll

    2012-01-01

    The filter bank methods have been a popular non-parametric way of computing the complex amplitude spectrum. So far, the length of the filters in these filter banks has been set to some constant value independently of the data. In this paper, we take the first step towards considering the filter...

  1. An algorithm for the design and tuning of RF accelerating structures with variable cell lengths

    Science.gov (United States)

    Lal, Shankar; Pant, K. K.

    2018-05-01

An algorithm is proposed for the design of a π mode standing wave buncher structure with variable cell lengths. It employs a two-parameter, multi-step approach for the design of the structure with the desired resonant frequency and field flatness. The algorithm, along with analytical scaling laws for the design of the RF power coupling slot, makes it possible to accurately design the structure employing a freely available electromagnetic code like SUPERFISH. To compensate for machining errors, a tuning method has been devised to achieve the desired RF parameters for the structure, which has been qualified by the successful tuning of a 7-cell buncher to a π mode frequency of 2856 MHz with the desired field flatness. The algorithm and tuning method have demonstrated the feasibility of developing an S-band accelerating structure with the desired RF parameters at a relatively relaxed machining tolerance of ∼25 μm. This paper discusses the algorithm for the design and tuning of an RF accelerating structure with variable cell lengths.

  2. Increased length of inpatient stay and poor clinical coding: audit of patients with diabetes.

    Science.gov (United States)

    Daultrey, Harriet; Gooday, Catherine; Dhatariya, Ketan

    2011-11-01

People with diabetes stay in hospital longer than those without diabetes for similar conditions. Clinical coding is poor across all specialties. Inpatients with diabetes often have unrecognized foot problems. We wanted to examine the relationships between these factors. A single-day audit of the prevalence of diabetes in all adult inpatients, which also examined their feet to find out how many were high-risk or had existing problems. A 998-bed university teaching hospital. All adult inpatients. (a) To see if patients with diabetes and foot problems were in hospital for longer than the national average length of stay, compared with national data; (b) to see if there were people in hospital with acute foot problems who were not known to the specialist diabetic foot team; and (c) to assess the accuracy of clinical coding. We identified 110 people with diabetes; however, discharge coding data for inpatients on that day showed 119 people with diabetes. Length of stay (LOS) was substantially higher for those with diabetes compared to those without (±SD), at 22.39 (22.26) days vs. 11.68 (6.46) days. Clinical coding was poor, with some people who had been identified as having diabetes in the audit not coded as such on discharge. Clinical coding, which is dependent on discharge summaries, poorly reflects diagnoses. Additionally, length of stay is significantly longer than previous estimates. The discrepancy between coding and diagnosis needs addressing by increasing the levels of awareness and education of coders and physicians. We suggest that our data be used by healthcare planners when deciding on future tariffs.

  3. Variability of interconnected wind plants: correlation length and its dependence on variability time scale

    Science.gov (United States)

    St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.

    2015-04-01

The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. But how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity for grid integration. To answer 'how far is far enough,' we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations and time resolutions. First, one year of five-minute wind power generation data from 29 wind farms spans 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate the dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25-2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. For high
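
The filter-then-correlate procedure described above can be sketched on synthetic data (a toy stand-in for the wind datasets; the moving-average high-pass filter and the window length are illustrative simplifications of the study's digital filtering):

```python
import math
import random

def high_pass(x, window):
    """Crude high-pass: subtract a centered moving average."""
    half = window // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(x[i] - sum(x[lo:hi]) / (hi - lo))
    return out

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

rng = random.Random(0)
common = [math.sin(t / 50) for t in range(1000)]   # slow shared weather
site_a = [c + rng.gauss(0, 0.3) for c in common]   # fast local noise
site_b = [c + rng.gauss(0, 0.3) for c in common]

r_raw = pearson(site_a, site_b)                    # dominated by slow signal
r_hp = pearson(high_pass(site_a, 25), high_pass(site_b, 25))
# r_raw is high; r_hp is much lower -- keeping only the fast
# fluctuations de-correlates the two sites, mirroring the finding
# that higher cut-off frequencies shorten the correlation length.
```

The same comparison run over many site pairs at different separations is what yields a correlation-versus-distance curve per filter cut-off.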

  4. FLASH: A finite element computer code for variably saturated flow

    International Nuclear Information System (INIS)

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: a description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A

  5. Joint variable frame rate and length analysis for speech recognition under adverse conditions

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Kraljevski, Ivan

    2014-01-01

This paper presents a method that combines variable frame length and rate analysis for speech recognition in noisy environments, together with an investigation of the effect of different frame lengths on speech recognition performance. The method adopts frame selection using an a posteriori signal-to-noise ratio (SNR) weighted energy distance and increases the length of the selected frames according to the number of non-selected preceding frames. It assigns a higher frame rate and a normal frame length to a rapidly changing and high-SNR region of a speech signal, and a lower frame rate and an increased frame length to a steady or low-SNR region. The speech recognition results show that the proposed variable frame rate and length method outperforms fixed frame rate and length analysis, as well as standalone variable frame rate analysis, in terms of noise-robustness.
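
The frame-selection idea can be illustrated with a deliberately minimal sketch (hypothetical frame energies and threshold; the a posteriori SNR weighting of the actual method is omitted):

```python
# Minimal sketch: a new frame starts whenever the energy distance to
# the last selected frame exceeds a threshold, so steady regions
# collapse into long frames while rapidly changing regions keep a
# high frame rate.

def select_frames(energies, threshold):
    """Return (start, length) pairs covering the energy sequence."""
    frames, start = [], 0
    for i in range(1, len(energies)):
        if abs(energies[i] - energies[start]) > threshold:
            frames.append((start, i - start))
            start = i
    frames.append((start, len(energies) - start))
    return frames

# Frames 0-4 are steady and merge into one long frame; frames 5-9
# change rapidly and each become a short frame of their own.
e = [1.0, 1.1, 0.9, 1.0, 1.1, 5.0, 9.0, 2.0, 7.0, 1.0]
frames = select_frames(e, 1.5)
```

In the actual method the distance is SNR-weighted, so low-SNR fluctuations are less likely to trigger a new frame than the raw energy distance used here.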

  6. HLA-E regulatory and coding region variability and haplotypes in a Brazilian population sample.

    Science.gov (United States)

    Ramalho, Jaqueline; Veiga-Castelli, Luciana C; Donadi, Eduardo A; Mendes-Junior, Celso T; Castelli, Erick C

    2017-11-01

The HLA-E gene is characterized by low but wide expression in different tissues. HLA-E is considered a conserved gene, being one of the least polymorphic class I HLA genes. The HLA-E molecule interacts with Natural Killer cell receptors and T lymphocyte receptors, and might activate or inhibit immune responses depending on the peptide associated with HLA-E and on which receptors HLA-E interacts with. Variable sites within the HLA-E regulatory and coding segments may influence the gene function by modifying its expression pattern or encoded molecule, thus influencing its interaction with receptors and the peptide. Here we propose an approach to evaluate the gene structure, haplotype pattern and the complete HLA-E variability, including regulatory (promoter and 3'UTR) and coding segments (with introns), by using massively parallel sequencing. We investigated the variability of 420 samples from a very admixed population, Brazilians, by using this approach. Considering a segment of about 7 kb, 63 variable sites were detected, arranged into 75 extended haplotypes. We detected 37 different promoter sequences (but few frequent ones), 27 different coding sequences (15 representing new HLA-E alleles) and 12 haplotypes at the 3'UTR segment, two of them presenting a summed frequency of 90%. Despite the number of coding alleles, they encode mainly two different full-length molecules, known as E*01:01 and E*01:03, which together correspond to about 90% of all. In addition, differently from what has been previously observed for other non-classical HLA genes, the relationship among the HLA-E promoter, coding and 3'UTR haplotypes is not straightforward, because the same promoter and 3'UTR haplotypes were many times associated with different HLA-E coding haplotypes. These data reinforce the presence of only two main full-length HLA-E molecules encoded by the many HLA-E alleles detected in our population sample.
In addition, this data does indicate that the distal HLA-E promoter is by

  7. Variable Rate, Adaptive Transform Tree Coding Of Images

    Science.gov (United States)

    Pearlman, William A.

    1988-10-01

    A tree code, asymptotically optimal for stationary Gaussian sources and squared error distortion [2], is used to encode transforms of image sub-blocks. The variance spectrum of each sub-block is estimated and specified uniquely by a set of one-dimensional auto-regressive parameters. The expected distortion is set to a constant for each block and the rate is allowed to vary to meet the given level of distortion. Since the spectrum and rate are different for every block, the code tree differs for every block. Coding simulations for target block distortion of 15 and average block rate of 0.99 bits per pel (bpp) show that very good results can be obtained at high search intensities at the expense of high computational complexity. The results at the higher search intensities outperform a parallel simulation with quantization replacing tree coding. Comparative coding simulations also show that the reproduced image with variable block rate and average rate of 0.99 bpp has 2.5 dB less distortion than a similarly reproduced image with a constant block rate equal to 1.0 bpp.

  8. Design of variable-weight quadratic congruence code for optical CDMA

    Science.gov (United States)

    Feng, Gang; Cheng, Wen-Qing; Chen, Fu-Jun

    2015-09-01

A variable-weight code family referred to as variable-weight quadratic congruence code (VWQCC) is constructed by algebraic transformation for incoherent synchronous optical code division multiple access (OCDMA) systems. Compared with the quadratic congruence code (QCC), VWQCC doubles the code cardinality and provides multiple code-sets with variable code-weight. Moreover, the bit-error rate (BER) performance of VWQCC is superior to that of conventional variable-weight codes obtained by removing or padding pulses, under the same chip power assumption. The experimental results show that VWQCC can be well applied to OCDMA systems with quality of service (QoS) requirements.

  9. VACOSS - variable coding seal system for nuclear material control

    International Nuclear Information System (INIS)

    Kennepohl, K.; Stein, G.

    1977-12-01

VACOSS (Variable Coding Seal System) is intended to seal rooms and containers with nuclear material, nuclear instrumentation and equipment of the operator, and instrumentation and equipment of the supervisory authority. It is easy to handle, reusable and transportable, and consists of three components: 1. The seal. A fibre-optic light guide with an infrared light emitter and receiver serves as the sealing lead. The statistical treatment of coded data entered into the seal via the adapter box guarantees an extremely high degree of access reliability. It is possible to store the data of two undue seal openings together with the time and duration of each opening. 2. The adapter box, which can be used for input, or input and output, of data indicating the seal integrity. 3. The simulation programme, which is located in the computing center of the supervisory authority and makes it possible to determine the date and time of an opening by decoding the seal memory data. (orig./WB) [de

  10. Predictive coding of dynamical variables in balanced spiking networks.

    Science.gov (United States)

    Boerlin, Martin; Machens, Christian K; Denève, Sophie

    2013-01-01

    Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitudes more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
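
The core firing rule described above, a neuron spikes only if that improves the representation, can be illustrated with a deliberately minimal one-neuron sketch (the paper derives full recurrent networks of leaky integrate-and-fire neurons; the parameters and the single-neuron reduction here are illustrative only):

```python
# Greedy spiking for a 1-D signal: the linear readout xhat decays
# leakily, a spike adds a fixed kernel, and a spike is fired exactly
# when doing so reduces the squared error between the signal x and
# the readout -- the "spike only if it improves the representation"
# rule, stripped down to one neuron and one dimension.

def run(signal, dt=0.01, tau=0.1, kernel=0.1):
    xhat, out, spikes = 0.0, [], 0
    for x in signal:
        xhat *= (1.0 - dt / tau)                     # leaky decay
        if (x - (xhat + kernel)) ** 2 < (x - xhat) ** 2:
            xhat += kernel                           # spike helps: fire
            spikes += 1
        out.append(xhat)
    return out, spikes

signal = [1.0] * 500
out, spikes = run(signal)
err = sum((x - y) ** 2 for x, y in zip(signal, out)) / len(signal)
# The readout tracks the constant signal closely; spiking is dense
# because the decay must be continually compensated.
```

In the full model many neurons share this error signal through recurrent connections, which is what balances excitation and inhibition and makes the spike trains look Poisson-variable even though the population readout is precise.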

  11. The effect of word length and other sublexical, lexical, and semantic variables on developmental reading deficits.

    Science.gov (United States)

    De Luca, Maria; Barca, Laura; Burani, Cristina; Zoccolotti, Pierluigi

    2008-12-01

To examine the effect of word length and several sublexical and lexico-semantic variables on the reading of Italian children with a developmental reading deficit. Previous studies indicated the role of word length in transparent orthographies; however, several factors that may interact with word length were not controlled for. Seventeen impaired and 34 skilled sixth-grade readers were presented words of different lengths, matched for initial phoneme, bigram frequency, word frequency, age of acquisition, and imageability. Participants were asked to read aloud, as quickly and as accurately as possible. Reaction times at the onset of pronunciation and mispronunciations were recorded. Impaired readers' reaction times indicated a marked effect of word length; in skilled readers, there was no length effect for short words but, rather, a monotonic increase from 6-letter words on. Regression analyses confirmed the role of word length and indicated the influence of word frequency (similar in impaired and skilled readers). No other variables predicted reading latencies. Word length differentially influenced word recognition in impaired versus skilled readers, irrespective of the action of (potentially interfering) sublexical, lexical, and semantic variables. It is proposed that the locus of the length effect is at a perceptual level of analysis. The independent influence of word frequency on the reading performance of both groups of participants indicates the sparing of lexical activation in impaired readers.

  12. P-Link: A method for generating multicomponent cytochrome P450 fusions with variable linker length

    DEFF Research Database (Denmark)

    Belsare, Ketaki D.; Ruff, Anna Joelle; Martinez, Ronny

    2014-01-01

    Fusion protein construction is a widely employed biochemical technique, especially when it comes to multi-component enzymes such as cytochrome P450s. Here we describe a novel method for generating fusion proteins with variable linker lengths, protein fusion with variable linker insertion (P...

  13. Select injury-related variables are affected by stride length and foot strike style during running.

    Science.gov (United States)

    Boyer, Elizabeth R; Derrick, Timothy R

    2015-09-01

    Some frontal plane and transverse plane variables have been associated with running injury, but it is not known if they differ with foot strike style or as stride length is shortened. To identify if step width, iliotibial band strain and strain rate, positive and negative free moment, pelvic drop, hip adduction, knee internal rotation, and rearfoot eversion differ between habitual rearfoot and habitual mid-/forefoot strikers when running with both a rearfoot strike (RFS) and a mid-/forefoot strike (FFS) at 3 stride lengths. Controlled laboratory study. A total of 42 healthy runners (21 habitual rearfoot, 21 habitual mid-/forefoot) ran overground at 3.35 m/s with both a RFS and a FFS at their preferred stride lengths and 5% and 10% shorter. Variables did not differ between habitual groups. Step width was 1.5 cm narrower for FFS, widening to 0.8 cm as stride length shortened. Iliotibial band strain and strain rate did not differ between foot strikes but decreased as stride length shortened (0.3% and 1.8%/s, respectively). Pelvic drop was reduced 0.7° for FFS compared with RFS, and both pelvic drop and hip adduction decreased as stride length shortened (0.8° and 1.5°, respectively). Peak knee internal rotation was not affected by foot strike or stride length. Peak rearfoot eversion was not different between foot strikes but decreased 0.6° as stride length shortened. Peak positive free moment (normalized to body weight [BW] and height [h]) was not affected by foot strike or stride length. Peak negative free moment was -0.0038 BW·m/h greater for FFS and decreased -0.0004 BW·m/h as stride length shortened. The small decreases in most variables as stride length shortened were likely associated with the concomitant wider step width. RFS had slightly greater pelvic drop, while FFS had slightly narrower step width and greater negative free moment. Shortening one's stride length may decrease or at least not increase propensity for running injuries based on the variables

  14. Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal

    International Nuclear Information System (INIS)

    Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.; Razalli, Mohammad Shahrazel; Kadhim, Rasim A.

    2015-01-01

Peak-to-average power ratio (PAPR) is a major drawback in OFDM communication. It drives the power amplifier into its nonlinear operating region, resulting in loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial Transmit Sequence (PTS) is an attractive scheme for this purpose. Judicious partitioning of the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Of the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With the aim of determining the effects of length variability of adjacent partitions, we investigated the performance of variable-length adjacent partitioning (VL-AP) and fixed-length adjacent partitioning in comparison with other partitioning schemes such as pseudorandom partitioning. Simulation results with different modulation and partitioning scenarios showed that fixed-length adjacent partitioning performed better than variable-length adjacent partitioning. As expected, simulation results showed slightly better performance for the pseudorandom partitioning technique compared to fixed- and variable-length adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes are still seen as favorable candidates for PAPR reduction.
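
The PTS scheme with adjacent partitioning can be sketched on a toy OFDM symbol (illustrative sizes and a binary ±1 phase set; a naive O(N²) IDFT is used to keep the sketch dependency-free):

```python
# Sketch of PTS with adjacent partitioning: the N subcarriers are
# split into V contiguous blocks, each block's time-domain signal is
# rotated by a phase factor of +/-1, and the phase combination with
# the lowest PAPR is kept and signaled to the receiver.

import cmath
import itertools

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N
            for n in range(N)]

def papr(x):
    powers = [abs(v) ** 2 for v in x]
    return max(powers) / (sum(powers) / len(powers))

def pts_adjacent(X, V=4):
    N = len(X)
    # Adjacent partitioning: V contiguous blocks of N // V subcarriers,
    # each zero-padded back to length N before its own IDFT.
    blocks = [[X[k] if i * (N // V) <= k < (i + 1) * (N // V) else 0
               for k in range(N)] for i in range(V)]
    parts = [idft(b) for b in blocks]
    best = None
    for phases in itertools.product([1, -1], repeat=V):
        x = [sum(p * part[n] for p, part in zip(phases, parts))
             for n in range(N)]
        pr = papr(x)
        if best is None or pr < best[0]:
            best = (pr, phases)
    return best

X = [1, -1, 1, 1, -1, 1, -1, -1, 1, 1, 1, -1, 1, -1, -1, 1]  # BPSK symbol
best_papr, best_phases = pts_adjacent(X)
# best_papr is never worse than the unmodified symbol's PAPR, since
# the all-ones phase combination is included in the search.
```

Variable-length adjacent partitioning would change only how the block boundaries are placed; the phase search and PAPR metric stay the same.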

  15. Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal

    Energy Technology Data Exchange (ETDEWEB)

    Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.; Razalli, Mohammad Shahrazel; Kadhim, Rasim A. [School of Computer and Communication Engineering, Universiti Malaysia Perlis, 02600 Arau, Perlis (Malaysia)

    2015-05-15

Peak-to-average power ratio (PAPR) is a major drawback in OFDM communication. It drives the power amplifier into its nonlinear operating region, resulting in loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial Transmit Sequence (PTS) is an attractive scheme for this purpose. Judicious partitioning of the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Of the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With the aim of determining the effects of length variability of adjacent partitions, we investigated the performance of variable-length adjacent partitioning (VL-AP) and fixed-length adjacent partitioning in comparison with other partitioning schemes such as pseudorandom partitioning. Simulation results with different modulation and partitioning scenarios showed that fixed-length adjacent partitioning performed better than variable-length adjacent partitioning. As expected, simulation results showed slightly better performance for the pseudorandom partitioning technique compared to fixed- and variable-length adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes are still seen as favorable candidates for PAPR reduction.

  16. Variability and trends in dry day frequency and dry event length in the southwestern United States

    Science.gov (United States)

    McCabe, Gregory J.; Legates, David R.; Lins, Harry F.

    2010-01-01

Daily precipitation from 22 National Weather Service first-order weather stations in the southwestern United States for water years 1951 through 2006 is used to examine variability and trends in the frequency of dry days and in dry event length. Dry events, defined as runs of at least 10 or 20 consecutive days with daily precipitation below 2.54 mm, are analyzed. For water years and cool seasons (October through March), most sites indicate negative trends in dry event length (i.e., dry event durations are becoming shorter). For the warm season (April through September), most sites also indicate negative trends; however, more sites indicate positive trends in dry event length for the warm season than for water years or cool seasons. The larger number of sites indicating positive trends in dry event length during the warm season is due to a series of dry warm seasons near the end of the 20th century and the beginning of the 21st century. Overall, a large portion of the variability in dry event length is attributable to variability of the El Niño–Southern Oscillation, especially for water years and cool seasons. Our results are consistent with analyses of trends in discharge for sites in the southwestern United States, an increased frequency of El Niño events, and positive trends in precipitation in the southwestern United States.
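
The dry-event definition used in the study translates directly into a run-length computation over the daily series; a sketch (with a synthetic series for illustration):

```python
# A dry day has precipitation below 2.54 mm (0.1 inch); a dry event
# is a run of at least `min_days` consecutive dry days.

def dry_events(precip_mm, min_days=10, dry_threshold=2.54):
    """Return the lengths of all qualifying dry events in a daily series."""
    events, run = [], 0
    for p in precip_mm:
        if p < dry_threshold:
            run += 1
        else:
            if run >= min_days:
                events.append(run)
            run = 0
    if run >= min_days:          # series may end inside a dry event
        events.append(run)
    return events

# 12 dry days, one wet day, then 5 dry days: one qualifying event.
series = [0.0] * 12 + [10.0] + [1.0] * 5
lengths = dry_events(series, min_days=10)   # -> [12]
```

Trend analysis then reduces to computing these event lengths per water year (or season) at each station and regressing against time.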

  17. Analysis of visual coding variables on CRT generated displays

    International Nuclear Information System (INIS)

    Blackman, H.S.; Gilmore, W.E.

    1985-01-01

Cathode-ray-tube-generated safety parameter display systems in nuclear power plant control rooms have been found to be more effective when color coding is employed. Research has indicated strong support for graphic coding techniques, particularly in redundant coding schemes. In addition, findings on pictographs, as applied in coding schemes, indicate the need for careful application and for further research in the development of a standardized set of symbols

  18. Research On Variable-Length Transfer Delay and Delayed Signal Cancellation Based PLLs

    DEFF Research Database (Denmark)

    Golestan, Saeed; Guerrero, Josep M.; Quintero, Juan Carlos Vasquez

    2018-01-01

    large frequency drifts are anticipated and a high accuracy is required. To the best of authors' knowledge, the small-signal modeling of a variable-length delay-based PLL has not yet been conducted. The main aim of this paper is to cover this gap. The tuning procedure and analysis of these PLLs...

  19. Improved theory of time domain reflectometry with variable coaxial cable length for electrical conductivity measurements

    Science.gov (United States)

    Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...

  20. Design of a new SI engine intake manifold with variable length plenum

    International Nuclear Information System (INIS)

    Ceviz, M.A.; Akin, M.

    2010-01-01

This paper investigates the effects of intake plenum length/volume on the performance characteristics of a spark-ignited engine with electronically controlled fuel injectors. Previous work was carried out mainly on engines with a carburetor, which produces a mixture desirable for combustion and dispatches it to the intake manifold. More stringent emission legislation has driven engine development towards concepts based on electronically controlled fuel injection rather than the use of carburetors. An engine with a multipoint fuel injection system using electronically controlled fuel injectors has an intake manifold in which only air flows; the fuel is injected onto the intake valve. Since the intake manifold transports mainly air, the supercharging effects of a variable length intake plenum will differ from those in a carbureted engine. Engine tests have been carried out with the aim of constituting a base study for the design of a new variable length intake manifold plenum. Engine performance characteristics such as brake torque, brake power, thermal efficiency and specific fuel consumption were taken into consideration to evaluate the effects of the variation in the length of the intake plenum. The results showed that varying the plenum length improves the engine performance characteristics, especially the fuel consumption at high load and low engine speeds, which makes the system attractive for urban driving. According to the test results, the plenum length must be extended for low engine speeds and shortened as the engine speed increases. A system taking the results of the study into account was developed to adjust the intake plenum length.

  1. A 7MeV S-Band 2998MHz Variable Pulse Length Linear Accelerator System

    CERN Document Server

    Hernandez, Michael; Mishin, Andrey V; Saverskiy, Aleksandr J; Skowbo, Dave; Smith, Richard

    2005-01-01

    American Science and Engineering High Energy Systems Division (AS&E HESD) has designed and commissioned a variable pulse length 7 MeV electron accelerator system. The system is capable of delivering a 7 MeV electron beam with a pulse length of 10 ns FWHM and a peak current of 1 ampere. The system can also produce electron pulses with lengths of 20, 50, 100, 200 and 400 ns and 3 μs FWHM with correspondingly lower peak currents. The accelerator system consists of a gridded electron gun, focusing coil, an electrostatic deflector system, Helmholtz coils, a standing wave side-coupled S-band linac, a 2.6 MW peak power magnetron, an RF circulator, a fast toroid, vacuum system and a PLC/PC control system. The system has been operated at repetition rates up to 250 pps. The design, simulations and experimental results from the accelerator system are presented in this paper.

  2. Variability in word reading performance of dyslexic readers: effects of letter length, phoneme length and digraph presence

    NARCIS (Netherlands)

    Marinus, E.; de Jong, P.F.

    2010-01-01

    The marked word-length effect in dyslexic children suggests the use of a letter-by-letter reading strategy. Such a strategy should make it more difficult to infer the sound of digraphs. Our main aim was to disentangle length and digraph-presence effects in word and pseudoword reading. In addition,

  3. Comparisons between Arabidopsis thaliana and Drosophila melanogaster in relation to Coding and Noncoding Sequence Length and Gene Expression

    Directory of Open Access Journals (Sweden)

    Rachel Caldwell

    2015-01-01

    There is a continuing interest in the analysis of gene architecture and gene expression to determine what relationship may exist between them. Advances in high-quality sequencing technologies and large-scale resource datasets have improved the understanding of these relationships and the cross-referencing of expression data against large genome datasets. Although a negative correlation between expression level and gene (especially transcript) length has been generally accepted, there have been some conflicting results in the literature concerning the impacts of different gene regions, and the underlying reason is not well understood. This research applies quantile regression techniques to the statistical analysis of coding and noncoding sequence length and gene expression data in the plant Arabidopsis thaliana and the fruit fly Drosophila melanogaster, to determine whether a relationship exists and whether there is any variation or similarity between these species. The quantile regression analysis found that the correlations between coding sequence length and gene expression varied, while similarities emerged for noncoding sequence length (5′ and 3′ UTRs) between the animal and plant species. In conclusion, the information described in this study provides the basis for further exploration of gene regulation with regard to coding and noncoding sequence length.

  4. Phenotypic and genotypic variability of disc flower corolla length and nectar content in sunflower

    Directory of Open Access Journals (Sweden)

    Joksimović Jovan

    2003-01-01

    The nectar content and disc flower corolla length are the two most important determinants of attractiveness to pollinators in sunflower. The phenotypic and genotypic variability of these two traits was studied in four commercially important hybrids and their parental components in a trial with three fertilizer doses over two years. Looking at individual genotypes, the variability of disc flower corolla length was affected the most by year (85.38-97.46%). As the study years were extremely different, the phenotypic variance of the hybrids and parental components was calculated for each year separately. Under these conditions, across all crossing combinations, the largest contribution to the phenotypic variance of corolla length was that of genotype: 57.27-61.11% (NS-H-45); 64.51-84.84% (Velja); 96.74-97.20% (NS-H-702); and 13.92-73.17% (NS-H-111). A similar situation was observed for the phenotypic variability of nectar content, where genotype also had the largest influence: 39.77-48.25% in NS-H-45; 39.06-42.51% in Velja; 31.97-72.36% in NS-H-702; and 62.13-94.96% in NS-H-111.

  5. An RNA-Seq strategy to detect the complete coding and non-coding transcriptome including full-length imprinted macro ncRNAs.

    Directory of Open Access Journals (Sweden)

    Ru Huang

    Imprinted macro non-protein-coding (nc) RNAs are cis-repressor transcripts that silence multiple genes in at least three imprinted gene clusters in the mouse genome. Similar macro or long ncRNAs are abundant in the mammalian genome. Here we present the full coding and non-coding transcriptome of two mouse tissues, differentiated ES cells and fetal head, using an optimized RNA-Seq strategy. The data produced are highly reproducible across sequencing locations and detect the full length of imprinted macro ncRNAs such as Airn and Kcnq1ot1, whose lengths range between 80 and 118 kb. Transcripts show more uniform read coverage when RNA is fragmented by hydrolysis than when cDNA is fragmented by shearing. Irrespective of the fragmentation method, all coding and non-coding transcripts longer than 8 kb show a gradual loss of sequencing tags towards the 3' end. Comparisons to published RNA-Seq datasets show that the strategy presented here is more efficient in detecting known functional imprinted macro ncRNAs, and also indicate that standardization of RNA preparation protocols would increase the comparability of transcriptomes between different RNA-Seq datasets.

  6. Paraxial design of an optical element with variable focal length and fixed position of principal planes.

    Science.gov (United States)

    Mikš, Antonín; Novák, Pavel

    2018-05-10

    In this article, we analyze the problem of the paraxial design of an active optical element with variable focal length, which maintains the positions of its principal planes fixed during the change of its optical power. Such optical elements are important in the process of design of complex optical systems (e.g., zoom systems), where the fixed position of principal planes during the change of optical power is essential for the design process. The proposed solution is based on the generalized membrane tunable-focus fluidic lens with several membrane surfaces.
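The paraxial bookkeeping in such a design can be followed with 2×2 ray-transfer matrices: for a system matrix [[A, B], [C, D]] the optical power is P = -C, and the principal planes sit at (D-1)/C from the input plane and (1-A)/C from the output plane. The sketch below uses illustrative surface powers, not the membrane-lens design of the paper:

```python
import numpy as np

def thin_lens(power):
    """Ray-transfer matrix of a thin surface with optical power P = 1/f."""
    return np.array([[1.0, 0.0], [-power, 1.0]])

def gap(d):
    """Free propagation over a distance d (in air)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def power_and_principal_planes(M):
    """System power and principal-plane offsets from M = [[A, B], [C, D]]."""
    (A, _), (C, D) = M
    return -C, (D - 1.0) / C, (1.0 - A) / C

# Two thin surfaces of 5 dpt each, 10 mm apart (illustrative values):
M = thin_lens(5.0) @ gap(0.010) @ thin_lens(5.0)   # right-to-left composition
P, dH, dHp = power_and_principal_planes(M)
# P = P1 + P2 - P1*P2*t = 9.75 dpt here. Changing the surface powers moves
# dH and dHp, which is why holding the principal planes fixed while varying
# the total power is a non-trivial design constraint.
```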

  7. Validation of favor code linear elastic fracture solutions for finite-length flaw geometries

    International Nuclear Information System (INIS)

    Dickson, T.L.; Keeney, J.A.; Bryson, J.W.

    1995-01-01

    One of the current tasks within the US Nuclear Regulatory Commission (NRC)-funded Heavy Section Steel Technology Program (HSST) at Oak Ridge National Laboratory (ORNL) is the continuing development of the FAVOR (Fracture Analysis of Vessels: Oak Ridge) computer code. FAVOR performs structural integrity analyses of embrittled nuclear reactor pressure vessels (RPVs) with stainless steel cladding, to evaluate compliance with the applicable regulatory criteria. Since the initial release of FAVOR, the HSST program has continued to enhance the capabilities of the FAVOR code. ABAQUS, a nuclear quality assurance certified (NQA-1) general multidimensional finite element code with fracture mechanics capabilities, was used to generate a database of stress-intensity-factor influence coefficients (SIFICs) for a range of axially and circumferentially oriented semielliptical inner-surface flaw geometries applicable to RPVs with an internal radius (Ri) to wall thickness (w) ratio of 10. This database of SIFICs has been incorporated into a development version of FAVOR, providing it with the capability to perform deterministic and probabilistic fracture analyses of RPVs subjected to transients, such as pressurized thermal shock (PTS), for various flaw geometries. This paper discusses the SIFIC database, comparisons with other investigators, and some of the benchmark verification problem specifications and solutions.
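The influence-coefficient method the abstract refers to works by superposition: the through-wall stress is fitted with a low-order polynomial and each polynomial coefficient is multiplied by a precomputed SIFIC for the flaw geometry. The sketch below is schematic, with hypothetical coefficient values (units folded into the coefficients), not FAVOR's actual database:

```python
def stress_intensity(influence_coeffs, stress_coeffs):
    """Influence-coefficient superposition: the through-wall stress is expanded
    as sigma(x/a) = sum_j C_j * (x/a)**j and K_I = sum_j K*_j * C_j, where the
    K*_j are precomputed SIFICs for a given flaw geometry (schematic)."""
    return sum(k * c for k, c in zip(influence_coeffs, stress_coeffs))

# Hypothetical SIFICs for one semielliptical flaw and a cubic stress fit:
k_star = [1.10, 0.68, 0.52, 0.43]    # MPa*sqrt(m) per unit stress coefficient
c_fit = [150.0, -40.0, 12.0, -3.0]   # MPa, polynomial stress coefficients
k_i = stress_intensity(k_star, c_fit)
```

The benefit is that the expensive finite element runs (ABAQUS, in the paper) are done once per geometry; every transient evaluation afterwards is just this cheap sum.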

  8. Variable RF capacitor based on a-Si:H (P-doped) multi-length cantilevers

    International Nuclear Information System (INIS)

    Fu, Y Q; Milne, S B; Luo, J K; Flewitt, A J; Wang, L; Miao, J M; Milne, W I

    2006-01-01

    A variable RF capacitor with a-Si:H (phosphorus-doped) cantilevers as the top electrode was designed and fabricated. Because the top multi-cantilever electrodes have different lengths, increasing the applied voltage pulls down the cantilever beams sequentially, realizing a gradual increase of capacitance with applied voltage. A high-k material, HfO2, was used as the insulating layer to increase the capacitance tuning range. The measured capacitance of the fabricated device was much lower, and the pull-in voltage much higher, than predicted by theoretical analysis, because of incomplete contact between the two electrodes, differential film stresses and a charge injection effect. Increasing the voltage sweep rate shifted the pull-in voltage significantly to higher values due to the charge injection mechanism.

  9. Construction and performance analysis of variable-weight optical orthogonal codes for asynchronous OCDMA systems

    Science.gov (United States)

    Li, Chuan-qi; Yang, Meng-jie; Zhang, Xiu-rong; Chen, Mei-juan; He, Dong-dong; Fan, Qing-bin

    2014-07-01

    A construction scheme of variable-weight optical orthogonal codes (VW-OOCs) for asynchronous optical code division multiple access (OCDMA) systems is proposed. The code family can be obtained by programming in Matlab, given the code weights and corresponding capacities. The bit error rate (BER) formula is derived by taking into account the effects of shot noise, avalanche photodiode (APD) bulk and surface leakage currents, and thermal noise. An OCDMA system using the VW-OOCs is designed and improved. The study shows that the VW-OOCs have excellent BER performance. Whether or not they come from the same code family, codes with larger weight have lower BER than the other codes under the same conditions. Simulation results are consistent with the theoretical BER analysis, and clear eye diagrams are obtained with an optical hard limiter.
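The defining property of any such family is its correlation constraints: every codeword's periodic autocorrelation sidelobes must stay at or below λa, and every pairwise cross-correlation at or below λc. A small checker is easy to write; the codewords below are a toy variable-weight example of length 19 (weights 4 and 2), not the construction of the paper:

```python
from itertools import combinations

def correlation_ok(codes, n, lam_a=1, lam_c=1):
    """Verify the periodic auto/cross-correlation constraints of a candidate
    optical orthogonal code family; each codeword is the set of its mark
    (chip '1') positions in [0, n)."""
    for c in codes:                          # autocorrelation sidelobes
        for s in range(1, n):
            if len(c & {(p + s) % n for p in c}) > lam_a:
                return False
    for c1, c2 in combinations(codes, 2):    # pairwise cross-correlations
        for s in range(n):
            if len(c1 & {(p + s) % n for p in c2}) > lam_c:
                return False
    return True

# A toy variable-weight family of length 19: one weight-4 and one weight-2 code.
family = [{0, 1, 3, 9}, {0, 4}]
ok = correlation_ok(family, 19)
```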

  10. Best Hiding Capacity Scheme for Variable Length Messages Using Particle Swarm Optimization

    Science.gov (United States)

    Bajaj, Ruchika; Bedi, Punam; Pal, S. K.

    Steganography is the art of hiding information in a way that prevents the detection of hidden messages. Besides security, the quantity of data that can be hidden in a single cover medium is also very important. We present a secure data hiding scheme with high embedding capacity for messages of variable length, based on Particle Swarm Optimization (PSO). The technique finds the best pixel positions in the cover image for hiding the secret data. In the proposed scheme, k bits of the secret message are substituted into the k least significant bits of an image pixel, where k varies from 1 to 4 depending on the message length. The proposed scheme was tested and the results compared with simple LSB substitution and uniform 4-bit LSB hiding (with PSO) for the test images Nature, Baboon, Lena and Kitty. The experimental study confirms that the proposed method achieves high data hiding capacity, maintains imperceptibility and minimizes the distortion between the cover image and the resulting stego image.
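The substitution step itself is easy to sketch. The helpers below are my own illustration, with the PSO-selected pixel positions replaced by sequential indices for brevity; they embed k message bits into the k least significant bits of each carrier pixel:

```python
def embed_lsb(pixels, bits, k):
    """Write the bit string `bits`, k bits per pixel, into the k least
    significant bits of the carrier pixels (k in 1..4, as in the scheme)."""
    out = list(pixels)
    mask = (1 << k) - 1
    for i in range(0, len(bits), k):
        chunk = bits[i:i + k].ljust(k, "0")      # pad the final partial chunk
        out[i // k] = (out[i // k] & ~mask) | int(chunk, 2)
    return out

def extract_lsb(pixels, n_bits, k):
    """Read back the first n_bits hidden by embed_lsb."""
    mask = (1 << k) - 1
    return "".join(format(p & mask, f"0{k}b") for p in pixels)[:n_bits]

cover = [200, 131, 57, 88]
stego = embed_lsb(cover, "101101", k=2)   # three pixels carry two bits each
```

The per-pixel distortion is bounded by 2^k - 1, which is why the scheme caps k at 4.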

  11. A Study of Nonlinear Variable Viscosity in Finite-Length Tube with Peristalsis

    Directory of Open Access Journals (Sweden)

    Y. Abd Elmaboud

    2014-01-01

    Peristaltic motion of an incompressible Newtonian fluid with variable viscosity, induced by a periodic sinusoidal traveling wave propagating along the walls of a finite-length tube, has been investigated. A perturbation method of solution is sought. The viscosity parameter α (α << 1) is chosen as the perturbation parameter, and the governing equations are developed up to first order in α. The analytical solution has been derived for the radial velocity at the tube wall, the axial pressure gradient across the length of the tube, and the wall shear stress under the assumptions of low Reynolds number and long wavelength. The impacts of physical parameters such as the viscosity and the parameter determining the shape of the constriction on the pressure distribution and on the wall shear stress, for integral and non-integral numbers of waves, are illustrated. The main conclusion that can be drawn from this study is that the peaks of pressure fluctuate with time and attain different values with non-integral numbers of peristaltic waves. The considered problem is highly applicable to studies of biological and industrial flows.

  12. Highly variable aerodynamic roughness length (z0) for a hummocky debris-covered glacier

    Science.gov (United States)

    Miles, Evan S.; Steiner, Jakob F.; Brun, Fanny

    2017-08-01

    The aerodynamic roughness length (z0) is an essential parameter in surface energy balance studies, but few literature values exist for debris-covered glaciers. We use microtopographic and aerodynamic methods to assess the spatial variability of z0 for Lirung Glacier, Nepal. We apply structure from motion to produce digital elevation models for three nested domains: five 1 m2 plots, a 21,300 m2 surface depression, and the lower 550,000 m2 of the debris-mantled tongue. Wind and temperature sensor towers were installed in the vicinity of the plots within the surface depression in October 2014. We calculate z0 according to a variety of transect-based microtopographic parameterizations for each plot, then develop a grid version of the algorithms by aggregating data from all transects. This grid approach is applied to the surface-depression digital elevation model to characterize the spatial variability of z0. The algorithms reproduce the same variability among transects and plots, but z0 estimates vary by an order of magnitude between algorithms. Across the study depression, results from different algorithms are strongly correlated. Using Monin-Obukhov similarity theory, we derive z0 values from the meteorological data. Using different stability criteria, we derive median values of z0 between 0.03 m and 0.05 m, but with considerable uncertainty due to the glacier's complex topography. Considering the estimates from these algorithms, the results suggest that z0 varies across Lirung Glacier from ˜0.005 m (gravels) to ˜0.5 m (boulders). Future efforts should assess the importance of such variable z0 values in a distributed energy balance model.
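One classic member of the microtopographic family of parameterizations is Lettau's (1969) z0 = 0.5 h* s/S, with h* the effective obstacle height, s the average silhouette (frontal) area facing the wind, and S the ground area per obstacle. The paper evaluates several such algorithms; this sketch implements only the Lettau form, with made-up debris-surface numbers:

```python
def lettau_z0(h_eff, silhouette_area, lot_area):
    """Lettau (1969) aerodynamic roughness length: z0 = 0.5 * h* * s / S.
    Lengths in metres, areas in square metres."""
    return 0.5 * h_eff * silhouette_area / lot_area

# Gravel-like surface: 0.4 m obstacles, 0.12 m^2 frontal area, one per 4 m^2.
z0_gravel = lettau_z0(0.4, 0.12, 4.0)    # ~0.006 m

# Boulder-like surface: 1.5 m obstacles, 1.0 m^2 frontal area, one per 2 m^2.
z0_boulder = lettau_z0(1.5, 1.0, 2.0)    # ~0.375 m
```

The two invented surfaces bracket the ~0.005-0.5 m range reported for Lirung Glacier.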

  13. Optimizing x-ray mirror thermal performance using variable length cooling for second generation FELs

    Science.gov (United States)

    Hardin, Corey L.; Srinivasan, Venkat N.; Amores, Lope; Kelez, Nicholas M.; Morton, Daniel S.; Stefan, Peter M.; Nicolas, Josep; Zhang, Lin; Cocco, Daniele

    2016-09-01

    The success of the LCLS has generated interest across many scientific disciplines, including physics, chemistry, biology, and materials science. Fueled by this success, SLAC National Accelerator Laboratory is developing a new high repetition rate free electron laser, LCLS-II, based on a superconducting linear accelerator capable of repetition rates up to 1 MHz. Undulators will be optimized for 200 to 1300 eV soft X-rays and for 1000 to 5000 eV hard X-rays. To absorb spontaneous radiation and higher harmonic energies, and to deflect the X-ray beam to various end stations, the transport and diagnostics system includes grazing-incidence plane mirrors on both the soft and hard X-ray beamlines. To deliver the FEL beam with minimal power loss and wavefront distortion, mirrors with height errors below 1 nm rms in operational conditions are needed, and the thermal load effects of the high repetition rate must be mitigated. The absorbed thermal profile depends strongly on the beam divergence, which is a function of the photon energy. To address this complexity, we developed a mirror cradle with variable length cooling and first-order curve correction. Mirror figure error is minimized using variable length water cooling through a gallium-indium eutectic bath. Curve correction is achieved with an off-axis bender that is described in detail. We present the design features, mechanical analysis and results from optical and mechanical tests of a prototype assembly, with particular regard to the figure sensitivity to bender corrections.

  14. Analysis of the land surface heterogeneity and its impact on atmospheric variables and the aerodynamic and thermodynamic roughness lengths

    NARCIS (Netherlands)

    Ma, Y.M.; Menenti, M.; Feddes, R.A.; Wang, J.M.

    2008-01-01

    The land surface heterogeneity has a very significant impact on atmospheric variables (air temperature T-a, wind speed u, and humidity q), the aerodynamic roughness length z(0m), thermodynamic roughness length z(0h), and the excess resistance to heat transfer kB(-1). First, in this study the land

  15. In vitro cytotoxicity of Manville Code 100 glass fibers: Effect of fiber length on human alveolar macrophages

    Directory of Open Access Journals (Sweden)

    Jones William

    2006-03-01

    Background: Synthetic vitreous fibers (SVFs) are inorganic noncrystalline materials widely used in residential and industrial settings for insulation, filtration, and reinforcement purposes. SVFs conventionally include three major categories: fibrous glass, rock/slag/stone (mineral) wool, and ceramic fibers. Previous in vitro studies from our laboratory demonstrated length-dependent cytotoxic effects of glass fibers on rat alveolar macrophages, possibly associated with incomplete phagocytosis of fibers ≥ 17 μm in length. The purpose of this study was to examine the influence of fiber length on primary human alveolar macrophages, which are larger in diameter than rat macrophages, using length-classified Manville Code 100 glass fibers (8, 10, 16, and 20 μm). It was hypothesized that complete engulfment of fibers by human alveolar macrophages could decrease fiber cytotoxicity; i.e., shorter fibers that can be completely engulfed might not be as cytotoxic as longer fibers. Human alveolar macrophages, obtained by segmental bronchoalveolar lavage of healthy, non-smoking volunteers, were treated with three different concentrations (determined by fiber number) of the sized fibers in vitro. Cytotoxicity was assessed by monitoring cytosolic lactate dehydrogenase release and loss of function as indicated by a decrease in zymosan-stimulated chemiluminescence. Results: Microscopic analysis indicated that human alveolar macrophages completely engulfed glass fibers of the 20 μm length. All fiber length fractions tested exhibited equal cytotoxicity on a per fiber basis, i.e., increasing lactate dehydrogenase and decreasing chemiluminescence in the same concentration-dependent fashion. Conclusion: The data suggest that due to the larger diameter of human alveolar macrophages, compared to rat alveolar macrophages, complete phagocytosis of longer fibers can occur with the human cells. Neither incomplete phagocytosis nor length-dependent toxicity was

  16. An integrated PCR colony hybridization approach to screen cDNA libraries for full-length coding sequences.

    Science.gov (United States)

    Pollier, Jacob; González-Guzmán, Miguel; Ardiles-Diaz, Wilson; Geelen, Danny; Goossens, Alain

    2011-01-01

    cDNA-Amplified Fragment Length Polymorphism (cDNA-AFLP) is a commonly used technique for genome-wide expression analysis that does not require prior sequence knowledge. Typically, quantitative expression data and sequence information are obtained for a large number of differentially expressed gene tags. However, most of the gene tags do not correspond to full-length (FL) coding sequences, which is a prerequisite for subsequent functional analysis. A medium-throughput screening strategy, based on integration of polymerase chain reaction (PCR) and colony hybridization, was developed that allows in parallel screening of a cDNA library for FL clones corresponding to incomplete cDNAs. The method was applied to screen for the FL open reading frames of a selection of 163 cDNA-AFLP tags from three different medicinal plants, leading to the identification of 109 (67%) FL clones. Furthermore, the protocol allows for the use of multiple probes in a single hybridization event, thus significantly increasing the throughput when screening for rare transcripts. The presented strategy offers an efficient method for the conversion of incomplete expressed sequence tags (ESTs), such as cDNA-AFLP tags, to FL-coding sequences.

  17. DNA fingerprinting of Mycobacterium leprae strains using variable number tandem repeat (VNTR) - fragment length analysis (FLA).

    Science.gov (United States)

    Jensen, Ronald W; Rivest, Jason; Li, Wei; Vissa, Varalakshmi

    2011-07-15

    The study of the transmission of leprosy is particularly difficult since the causative agent, Mycobacterium leprae, cannot be cultured in the laboratory. The only sources of the bacteria are leprosy patients, and experimentally infected armadillos and nude mice. Thus, many of the methods used in modern epidemiology are not available for the study of leprosy. Despite an extensive global drug treatment program for leprosy implemented by the WHO, leprosy remains endemic in many countries with approximately 250,000 new cases each year. The entire M. leprae genome has been mapped and many loci have been identified that have repeated segments of 2 or more base pairs (called micro- and minisatellites). Clinical strains of M. leprae may vary in the number of tandem repeated segments (short tandem repeats, STR) at many of these loci. Variable number tandem repeat (VNTR) analysis has been used to distinguish different strains of the leprosy bacilli. Some of the loci appear to be more stable than others, showing less variation in repeat numbers, while others seem to change more rapidly, sometimes in the same patient. While the variability of certain VNTRs has brought up questions regarding their suitability for strain typing, the emerging data suggest that analyzing multiple loci, which are diverse in their stability, can be used as a valuable epidemiological tool. Multiple locus VNTR analysis (MLVA) has been used to study leprosy evolution and transmission in several countries including China, Malawi, the Philippines, and Brazil. MLVA involves multiple steps. First, bacterial DNA is extracted along with host tissue DNA from clinical biopsies or slit skin smears (SSS). The desired loci are then amplified from the extracted DNA via polymerase chain reaction (PCR). Fluorescently-labeled primers for 4-5 different loci are used per reaction, with 18 loci being amplified in a total of four reactions. The PCR products may be subjected to agarose gel electrophoresis to verify the

  18. Alignment-free Transcriptomic and Metatranscriptomic Comparison Using Sequencing Signatures with Variable Length Markov Chains.

    Science.gov (United States)

    Liao, Weinan; Ren, Jie; Wang, Kun; Wang, Shun; Zeng, Feng; Wang, Ying; Sun, Fengzhu

    2016-11-23

    The comparison of microbial sequencing data is critical to understanding the dynamics of microbial communities. Alignment-based tools for analyzing metagenomic datasets require reference sequences and read alignments. The available alignment-free dissimilarity approaches model the background sequences with a fixed-order Markov chain (FOMC), yielding promising results for the comparison of microbial communities. However, in an FOMC the number of parameters grows exponentially with the order of the Markov chain (MC), and under a fixed high order the parameters might not be accurately estimated owing to limited sequencing depth. In our study, we investigate an alternative to the FOMC that models background sequences with a data-driven variable length Markov chain (VLMC) in metatranscriptomic data. The VLMC, originally designed for long sequences, was extended to high-throughput sequencing reads, and strategies to estimate the corresponding parameters were developed. The flexible number of parameters in the VLMC avoids estimating the vast number of parameters of a high-order MC under limited sequencing depth, and, unlike the manual order selection in an FOMC, the VLMC determines the MC order adaptively. Several beta diversity measures based on the VLMC were applied to compare bacterial RNA-Seq and metatranscriptomic datasets. Experiments show that the VLMC outperforms the FOMC in modeling the background sequences of transcriptomic and metatranscriptomic samples. A software pipeline is available at https://d2vlmc.codeplex.com.
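The core idea, contexts whose length adapts to the data, can be sketched in a few lines: count (context, next symbol) pairs up to a maximum order, then back off to the longest context that is well supported. This toy version is not the authors' pipeline, but it illustrates the contrast with a fixed-order chain:

```python
from collections import defaultdict

def vlmc_counts(seq, max_order):
    """Count next-symbol occurrences after every context up to max_order."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq)):
        for k in range(max_order + 1):
            if i - k < 0:
                break
            counts[seq[i - k:i]][seq[i]] += 1
    return counts

def next_prob(counts, context, symbol, min_count=2):
    """Conditional probability of `symbol`, using the longest suffix of
    `context` observed at least min_count times -- the data-driven order
    selection that distinguishes a VLMC from a fixed-order chain."""
    for k in range(len(context), -1, -1):
        ctx = context[len(context) - k:]
        total = sum(counts[ctx].values())
        if total >= min_count:
            return counts[ctx][symbol] / total
    return 0.0

counts = vlmc_counts("ACGACGACGT", max_order=2)
p_strong = next_prob(counts, "AC", "G")    # well-supported order-2 context
p_backoff = next_prob(counts, "GT", "A")   # rare context backs off to order 0
```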

  19. Stochastic methods for uncertainty treatment of functional variables in computer codes: application to safety studies

    International Nuclear Information System (INIS)

    Nanty, Simon

    2015-01-01

    This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications share several features. First, the computer code inputs are functional and scalar variables, the functional ones being dependent. Second, the probability distribution of the functional variables is known only through a sample of their realizations. Third, in one of the two applications, the high computational cost of the code limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we propose a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology makes it possible both to model the dependency between variables and to model their link to another variable, called a covariate, which could be, for instance, the output of the considered code. We then develop an adaptation of a visualization tool for functional data, which enables the uncertainties and features of dependent functional variables to be visualized simultaneously. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases is proposed. For a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model or metamodel, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables was developed to build a learning basis for the metamodel. Finally, a new approximation approach for expensive codes with functional outputs has been
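The metamodel idea is simple to illustrate: replace the expensive simulator with a cheap approximation fitted on a small learning basis, then run the uncertainty propagation on the approximation. The sketch below uses an ordinary least-squares polynomial as the surrogate and an invented one-dimensional "simulator", far simpler than the methods of the thesis:

```python
import numpy as np

def expensive_code(x):
    """Stand-in for a costly simulator that can only be run a few times."""
    return np.sin(3.0 * x) + 0.5 * x ** 2

# Small learning basis of simulator runs:
x_train = np.linspace(-1.0, 1.0, 12)
y_train = expensive_code(x_train)

# Cheap surrogate fitted to the basis (a degree-5 least-squares polynomial):
coeffs = np.polynomial.polynomial.polyfit(x_train, y_train, deg=5)
def surrogate(x):
    return np.polynomial.polynomial.polyval(x, coeffs)

# Uncertainty propagation now runs on the surrogate, not the simulator:
rng = np.random.default_rng(0)
x_mc = rng.uniform(-1.0, 1.0, 100_000)
mean_estimate = float(surrogate(x_mc).mean())
```

A hundred thousand surrogate evaluations cost almost nothing, whereas the same Monte Carlo study on the real code would be intractable; this is the trade the thesis optimizes with richer metamodels and sampling strategies.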

  20. 2D hydrodynamic simulations of a variable length gas target for density down-ramp injection of electrons into a laser wakefield accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Kononenko, O., E-mail: olena.kononenko@desy.de [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Lopes, N.C.; Cole, J.M.; Kamperidis, C.; Mangles, S.P.D.; Najmudin, Z. [The John Adams Institute for Accelerator Science, The Blackett Laboratory, Imperial College London, SW7 2BZ UK (United Kingdom); Osterhoff, J. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Poder, K. [The John Adams Institute for Accelerator Science, The Blackett Laboratory, Imperial College London, SW7 2BZ UK (United Kingdom); Rusby, D.; Symes, D.R. [Central Laser Facility, STFC Rutherford Appleton Laboratory, Chilton, Didcot OX11 0QX (United Kingdom); Warwick, J. [Queens University Belfast, Northern Ireland (United Kingdom); Wood, J.C. [The John Adams Institute for Accelerator Science, The Blackett Laboratory, Imperial College London, SW7 2BZ UK (United Kingdom); Palmer, C.A.J. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany)

    2016-09-01

    In this work, two-dimensional (2D) hydrodynamic simulations of a variable length gas cell were performed using the open-source fluid code OpenFOAM. The gas cell was designed to study controlled injection of electrons into a laser-driven wakefield at the Astra Gemini laser facility. The target consists of two compartments, an accelerator and an injector section, connected via an aperture. A sharp transition between the peak and plateau density regions in the injector and accelerator compartments, respectively, was observed in simulations with various inlet pressures. The fluid simulations indicate that the length of the down-ramp connecting the sections depends on the aperture diameter, as does the density drop outside the entrance and exit cones. Further studies showed that increasing the inlet pressure leads to turbulence and strong fluctuations in density along the axial profile during target filling and, consequently, is expected to negatively impact the accelerator stability.

  1. Performance and emission characteristics of LPG powered four stroke SI engine under variable stroke length and compression ratio

    International Nuclear Information System (INIS)

    Ozcan, Hakan; Yamin, Jehad A.A.

    2008-01-01

    A computer simulation of a variable stroke length, LPG-fuelled, four-stroke, single-cylinder, water-cooled spark ignition engine was performed. The engine capacity was varied by varying the stroke length, which also changed the compression ratio. The simulation model was verified against experimental results from the literature for both constant and variable stroke engines, and the engine's performance was simulated at each stroke length/compression ratio combination. The results clearly indicate the advantages and utility of variable stroke engines for fuel economy and power. Using the variable stroke technique significantly improved the engine's performance and emission characteristics within the range studied. The brake torque and power increased by about 7-54% at low speed and 7-57% at high speed relative to the original engine design, across all stroke lengths and engine speeds studied. The brake specific fuel consumption varied from a reduction of about 6% to an increase of about 3% at low speed, and from a reduction of about 6% to an increase of about 8% at high speed, relative to the original engine design. On the other hand, an increase in pollutants of about 0.65-2% occurred at low speed. Larger stroke lengths reduced the pollutant level by about 1.5% at higher speeds, while at lower stroke lengths an increase of about 2% occurred. Larger stroke lengths also increased the exhaust temperature and, hence, made the exhaust valve operate at higher temperatures.
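The geometric coupling the abstract mentions, stroke length setting both displacement and compression ratio, follows from CR = (Vd + Vc)/Vc with Vd = (π/4)·B²·S. A sketch with invented bore and clearance volume (the record does not give the engine's dimensions):

```python
import math

def compression_ratio(bore, stroke, clearance_volume):
    """CR = (Vd + Vc) / Vc with displacement Vd = pi/4 * bore^2 * stroke.
    With a fixed clearance volume, lengthening the stroke raises both the
    displacement and the compression ratio."""
    vd = math.pi / 4.0 * bore ** 2 * stroke
    return (vd + clearance_volume) / clearance_volume

BORE = 0.080   # m (illustrative)
VC = 5.0e-5    # m^3 clearance volume (illustrative)

cr_short = compression_ratio(BORE, 0.070, VC)   # ~8.0
cr_long = compression_ratio(BORE, 0.090, VC)    # ~10.0
```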

  2. Variable disparity-motion estimation based fast three-view video coding

    Science.gov (United States)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based three-view video coding is proposed. In the encoder, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are performed for effective and fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of the accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the PSNRs of the proposed algorithm are 37.66 and 40.55 dB, and the processing times are 0.139 and 0.124 sec/frame, respectively.
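The PSNR figures quoted in such coding results are the standard fidelity metric, computed from the mean squared error against the original frame. A minimal version over flat pixel lists, for reference:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

value = psnr([52, 55, 61, 59], [52, 57, 60, 59])   # MSE = 1.25
```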

  3. Topology optimization of a flexible multibody system with variable-length bodies described by ALE–ANCF

    DEFF Research Database (Denmark)

    Sun, Jialiang; Tian, Qiang; Hu, Haiyan

    2018-01-01

    Recent years have witnessed the application of topology optimization to flexible multibody systems (FMBS) so as to enhance their dynamic performance. In this study, an explicit topology optimization approach is proposed for an FMBS with variable-length bodies via the moving morphable components (MMC) method. Using the arbitrary Lagrangian–Eulerian (ALE) formulation, the thin plate elements of the absolute nodal coordinate formulation (ANCF) are used to describe the platelike bodies with variable length. For the thin plate element of ALE–ANCF, the elastic force and additional inertial force, as well...

  4. Isolation and characterization of full-length cDNA clones coding for cholinesterase from fetal human tissues

    International Nuclear Information System (INIS)

    Prody, C.A.; Zevin-Sonkin, D.; Gnatt, A.; Goldberg, O.; Soreq, H.

    1987-01-01

    To study the primary structure and regulation of human cholinesterases, oligodeoxynucleotide probes were prepared according to a consensus peptide sequence present in the active site of both human serum pseudocholinesterase and Torpedo electric organ true acetylcholinesterase. Using these probes, the authors isolated several cDNA clones from λgt10 libraries of fetal brain and liver origins. These include 2.4-kilobase cDNA clones that code for a polypeptide containing a putative signal peptide and the N-terminal, active site, and C-terminal peptides of human BtChoEase, suggesting that they code either for BtChoEase itself or for a very similar but distinct fetal form of cholinesterase. In RNA blots of poly(A)+ RNA from the cholinesterase-producing fetal brain and liver, these cDNAs hybridized with a single 2.5-kilobase band. Blot hybridization to human genomic DNA revealed that these fetal BtChoEase cDNA clones hybridize with DNA fragments of a total length of 17.5 kilobases, and signal intensities indicated that these sequences are not present in many copies. Both the cDNA-encoded protein and its nucleotide sequence display striking homology to parallel sequences published for Torpedo AcChoEase. These findings demonstrate extensive homologies between the fetal BtChoEase encoded by these clones and other cholinesterases of various forms and species

  5. Importance of Viral Sequence Length and Number of Variable and Informative Sites in Analysis of HIV Clustering.

    Science.gov (United States)

    Novitsky, Vlad; Moyo, Sikhulile; Lei, Quanhong; DeGruttola, Victor; Essex, M

    2015-05-01

    To improve the methodology of HIV cluster analysis, we addressed how analysis of HIV clustering is associated with parameters that can affect the outcome of viral clustering. The extent of HIV clustering and tree certainty was compared between 401 HIV-1C near full-length genome sequences and subgenomic regions retrieved from the LANL HIV Database. Sliding window analysis was based on 99 windows of 1,000 bp and 45 windows of 2,000 bp. Potential associations between the extent of HIV clustering and sequence length and the number of variable and informative sites were evaluated. The near full-length genome HIV sequences showed the highest extent of HIV clustering and the highest tree certainty. At the bootstrap threshold of 0.80 in maximum likelihood (ML) analysis, 58.9% of near full-length HIV-1C sequences but only 15.5% of partial pol sequences (ViroSeq) were found in clusters. Among HIV-1 structural genes, pol showed the highest extent of clustering (38.9% at a bootstrap threshold of 0.80), although it was significantly lower than in the near full-length genome sequences. The extent of HIV clustering was significantly higher for sliding windows of 2,000 bp than 1,000 bp. We found a strong association between the sequence length and proportion of HIV sequences in clusters, and a moderate association between the number of variable and informative sites and the proportion of HIV sequences in clusters. In HIV cluster analysis, the extent of detectable HIV clustering is directly associated with the length of viral sequences used, as well as the number of variable and informative sites. Near full-length genome sequences could provide the most informative HIV cluster analysis. Selected subgenomic regions with a high extent of HIV clustering and high tree certainty could also be considered as a second choice.
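
    The variable and parsimony-informative site counts used as predictors above can be computed directly from an alignment; a minimal sketch, assuming the usual definitions (a variable column contains more than one base; an informative column contains at least two bases each occurring at least twice):

```python
from collections import Counter

def site_counts(alignment):
    """Count variable and parsimony-informative columns in an aligned set of sequences."""
    variable = informative = 0
    for column in zip(*alignment):
        counts = Counter(b for b in column if b in "ACGT")  # ignore gaps/ambiguities
        if len(counts) > 1:
            variable += 1
            if sum(1 for n in counts.values() if n >= 2) >= 2:
                informative += 1
    return variable, informative

aln = ["ACGTACGT",
       "ACGTACGA",
       "ACCTACGA",
       "ACCTACGT"]
print(site_counts(aln))  # columns 3 (G/C twice each) and 8 (T/A twice each) -> (2, 2)
```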

  6. Reliability and short-term intra-individual variability of telomere length measurement using monochrome multiplexing quantitative PCR.

    Directory of Open Access Journals (Sweden)

    Sangmi Kim

    Full Text Available Studies examining the association between telomere length and cancer risk have often relied on measurement of telomere length from a single blood draw using a real-time PCR technique. We examined the reliability of telomere length measurement using sequential samples collected over a 9-month period. Relative telomere length in peripheral blood was estimated using a single-tube monochrome multiplex quantitative PCR assay in blood DNA samples from 27 non-pregnant adult women (aged 35 to 74 years) collected in 7 visits over a 9-month period. A linear mixed model was used to estimate the components of variance for telomere length measurements attributed to variation among women and variation between time points within women. Mean telomere length measurement at any single visit was not significantly different from the average of 7 visits. Plates had a significant systematic influence on telomere length measurements, although measurements between different plates were highly correlated. After controlling for plate effects, 64% of the remaining variance was estimated to be accounted for by variance due to subject. Variance explained by time of visit within a subject was minor, contributing 5% of the remaining variance. Our data demonstrate good short-term reliability of telomere length measurement using blood from a single draw. However, the existence of technical variability, particularly plate effects, reinforces the need for technical replicates and balancing of case and control samples across plates.
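
    The subject/visit variance partition reported above can be illustrated with a one-way random-effects estimator on synthetic data; this is a simplified sketch, not the full linear mixed model with plate effects used in the study, and all numbers below are simulated rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic relative telomere lengths: 27 subjects x 7 visits, with
# large between-subject variance and small within-subject (visit) variance.
n_subj, n_visit = 27, 7
subject_effect = rng.normal(0.0, 0.30, size=(n_subj, 1))        # sd 0.30 between subjects
visit_noise = rng.normal(0.0, 0.10, size=(n_subj, n_visit))     # sd 0.10 within subject
data = 1.0 + subject_effect + visit_noise

# One-way random-effects ANOVA estimators of the variance components.
grand_mean = data.mean()
subj_means = data.mean(axis=1)
ms_between = n_visit * np.sum((subj_means - grand_mean) ** 2) / (n_subj - 1)
ms_within = np.sum((data - subj_means[:, None]) ** 2) / (n_subj * (n_visit - 1))
var_within = ms_within
var_between = max((ms_between - ms_within) / n_visit, 0.0)

# Intraclass correlation: share of total variance attributable to subject.
icc = var_between / (var_between + var_within)
print(f"ICC = {icc:.2f}")  # close to 0.09 / (0.09 + 0.01) = 0.90 for these parameters
```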

  7. How mechanical context and feedback jointly determine the use of mechanical variables in length perception by dynamic touch

    NARCIS (Netherlands)

    Menger, Rudmer; Withagen, Rob

    Earlier studies have revealed that both mechanical context and feedback determine what mechanical invariant is used to perceive length by dynamic touch. In the present article, the authors examined how these two factors jointly constrain the informational variable that is relied upon. Participants

  8. How mechanical context and feedback jointly determine the use of mechanical variables in length perception by dynamic touch

    NARCIS (Netherlands)

    Menger, Rudmer; Withagen, Rob

    2009-01-01

    Earlier studies have revealed that both mechanical context and feedback determine what mechanical invariant is used to perceive length by dynamic touch. In the present article, the authors examined how these two factors jointly constrain the informational variable that is relied upon. Participants

  9. A Golay complementary TS-based symbol synchronization scheme in variable rate LDPC-coded MB-OFDM UWBoF system

    Science.gov (United States)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin

    2015-09-01

    In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency of the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10^-3, the experimental results show that the short-block-length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
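
    The synchronization scheme exploits the defining property of a Golay complementary pair: the aperiodic autocorrelations of the two sequences sum to zero at every nonzero lag, giving a sharp correlation peak for locating the start point. A minimal sketch using the standard recursive construction (the length-8 pair here is illustrative, not the length used in the experiment):

```python
import numpy as np

def golay_pair(n_iter):
    """Build a Golay complementary pair of length 2**n_iter by the recursion
    (a, b) -> (a|b, a|-b), starting from a = b = [1]."""
    a, b = np.array([1]), np.array([1])
    for _ in range(n_iter):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acorr(x):
    """Aperiodic autocorrelation for lags 0..len(x)-1."""
    n = len(x)
    return np.array([np.dot(x[: n - k], x[k:]) for k in range(n)])

a, b = golay_pair(3)          # length-8 pair
s = acorr(a) + acorr(b)       # complementary property: 2N at lag 0, zero elsewhere
print(s)                      # [16  0  0  0  0  0  0  0]
```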

  10. Variable Coding and Modulation Experiment Using NASA's Space Communication and Navigation Testbed

    Science.gov (United States)

    Downey, Joseph A.; Mortensen, Dale J.; Evans, Michael A.; Tollis, Nicholas S.

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed on the International Space Station provides a unique opportunity to evaluate advanced communication techniques in an operational system. The experimental nature of the Testbed allows for rapid demonstrations while using flight hardware in a deployed system within NASA's networks. One example is variable coding and modulation, which is a method to increase data-throughput in a communication link. This paper describes recent flight testing with variable coding and modulation over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Performance of the variable coding and modulation system is evaluated and compared to the capacity of the link, as well as standard NASA waveforms.

  11. FLAME: A finite element computer code for contaminant transport in variably-saturated media

    International Nuclear Information System (INIS)

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably-saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous media with discrete fractures. This report presents the following: description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A

  12. FLAME: A finite element computer code for contaminant transport in variably-saturated media

    Energy Technology Data Exchange (ETDEWEB)

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably-saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous media with discrete fractures. This report presents the following: description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.

  13. Do climate variables and human density affect Achatina fulica (Bowditch) (Gastropoda: Pulmonata) shell length, total weight and condition factor?

    Science.gov (United States)

    Albuquerque, F S; Peso-Aguiar, M C; Assunção-Albuquerque, M J T; Gálvez, L

    2009-08-01

    The length-weight relationship and condition factor have been broadly investigated in snails to obtain an index of the physical condition of populations and to evaluate habitat quality. Herein, our goal was to describe the best predictors that explain Achatina fulica biometrical parameters and well-being in a recently introduced population. From November 2001 to November 2002, monthly snail samples were collected in Lauro de Freitas City, Bahia, Brazil. Shell length and total weight were measured in the laboratory, and the potential curve and condition factor were calculated. Five environmental variables were considered: temperature range, mean temperature, humidity, precipitation and human density. Multiple regressions were used to generate models including multiple predictors, via a model selection approach, and then ranked with AIC criteria. Partial regressions were used to obtain the separate coefficients of determination of the climate and human density models. A total of 1,460 individuals were collected, with shell lengths ranging from 4.8 to 102.5 mm (mean: 42.18 mm). The relationship between total length and total weight revealed that Achatina fulica presented negative allometric growth. Simple regression indicated that humidity has a significant influence on A. fulica total length and weight. Temperature range was the main variable that influenced the condition factor. Multiple regressions showed that climatic and human variables explain a small proportion of the variance in shell length and total weight, but may explain up to 55.7% of the condition factor variance. Consequently, we believe that the well-being and biometric parameters of A. fulica can be influenced by climatic and human density factors.
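
    The length-weight analysis above fits a power curve W = aL^b (b < 3 indicating negative allometric growth) and a condition factor; a minimal sketch on synthetic data (the coefficient values are illustrative assumptions, not the study's estimates):

```python
import numpy as np

# Synthetic shell lengths (mm) and weights (g) following W = a * L**b
rng = np.random.default_rng(42)
a_true, b_true = 0.0004, 2.7            # b < 3: negative allometry (assumed values)
L = rng.uniform(5.0, 100.0, size=200)
W = a_true * L ** b_true * rng.lognormal(0.0, 0.05, size=200)  # multiplicative noise

# Fit the power curve by linear regression on the log-log scale:
# log W = log a + b * log L
b_hat, log_a_hat = np.polyfit(np.log(L), np.log(W), 1)
a_hat = np.exp(log_a_hat)
print(f"a = {a_hat:.5f}, b = {b_hat:.2f}")   # b close to 2.7 -> negative allometric

# Fulton-type condition factor K = 100 * W / L**3 for each individual
K = 100.0 * W / L ** 3
print(f"mean K = {K.mean():.4f}")
```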

  14. Do climate variables and human density affect Achatina fulica (Bowditch) (Gastropoda: Pulmonata) shell length, total weight and condition factor?

    Directory of Open Access Journals (Sweden)

    FS. Albuquerque

    Full Text Available The length-weight relationship and condition factor have been broadly investigated in snails to obtain an index of the physical condition of populations and to evaluate habitat quality. Herein, our goal was to describe the best predictors that explain Achatina fulica biometrical parameters and well-being in a recently introduced population. From November 2001 to November 2002, monthly snail samples were collected in Lauro de Freitas City, Bahia, Brazil. Shell length and total weight were measured in the laboratory, and the potential curve and condition factor were calculated. Five environmental variables were considered: temperature range, mean temperature, humidity, precipitation and human density. Multiple regressions were used to generate models including multiple predictors, via a model selection approach, and then ranked with AIC criteria. Partial regressions were used to obtain the separate coefficients of determination of the climate and human density models. A total of 1,460 individuals were collected, with shell lengths ranging from 4.8 to 102.5 mm (mean: 42.18 mm). The relationship between total length and total weight revealed that Achatina fulica presented negative allometric growth. Simple regression indicated that humidity has a significant influence on A. fulica total length and weight. Temperature range was the main variable that influenced the condition factor. Multiple regressions showed that climatic and human variables explain a small proportion of the variance in shell length and total weight, but may explain up to 55.7% of the condition factor variance. Consequently, we believe that the well-being and biometric parameters of A. fulica can be influenced by climatic and human density factors.

  15. An audit of the nature and impact of clinical coding subjectivity, variability and error in otolaryngology.

    Science.gov (United States)

    Nouraei, S A R; Hudovsky, A; Virk, J S; Chatrath, P; Sandhu, G S

    2013-12-01

    groupings changed from 16% during the first audit cycle to 9% in the current audit cycle. Clinical coding is complex and susceptible to subjectivity, variability and error. Coding variability can be improved, but not eliminated, through regular education supported by an audit programme. © 2013 John Wiley & Sons Ltd.

  16. Torsion of a bar of round transverse section with porosity variable along the length and over the transverse section

    Directory of Open Access Journals (Sweden)

    Shlyakhov S.M.

    2017-06-01

    Full Text Available The present article is devoted to the problem of determining the level of the secondary tangential (shear) stresses that arise in cross-sections because of porosity that varies along the length. Solving this problem makes it possible to account for secondary shear stresses when determining the load-bearing capacity of a porous bar. The porosity distribution over the transverse section is prescribed rationally, based on previously solved problems of porosity selection for the torsion of a bar of round transverse section; along the length of the bar, the porosity follows a linear law. The objective of the research is to determine the level of the secondary shear stresses and to evaluate their magnitude.

  17. Photoluminescence Enhancement of Poly(3-methylthiophene) Nanowires upon Length-Variable DNA Hybridization

    Directory of Open Access Journals (Sweden)

    Jingyuan Huang

    2018-01-01

    Full Text Available The use of low-dimensional inorganic or organic nanomaterials has advantages for DNA and protein recognition due to their sensitivity, accuracy, and physical size matching. In this research, poly(3-methylthiophene) (P3MT) nanowires (NWs) are electrochemically prepared with dopant, followed by functionalization with a probe DNA (pDNA) sequence through electrostatic interaction. pDNA sequences of various lengths (10-, 20- and 30-mer) are conjugated to the P3MT NWs, followed by hybridization with their complementary target DNA (tDNA) sequences. The nanoscale photoluminescence (PL) properties of the P3MT NWs are studied throughout the whole process in the solid state. In addition, the correlation between the PL enhancement and the double-helix DNA of various lengths is demonstrated.

  18. Improvement of Secret Image Invisibility in Circulation Image with Dyadic Wavelet Based Data Hiding with Run-Length Coded Secret Images of Which Location of Codes are Determined with Random Number

    OpenAIRE

    Kohei Arai; Yuji Yamada

    2011-01-01

    An attempt is made to improve secret image invisibility in circulation images with dyadic wavelet based data hiding using run-length coded secret images whose code locations are determined by random numbers. Through experiments, it is confirmed that secret images are almost invisible in circulation images. The robustness of the proposed data hiding method against data compression of circulation images is also discussed. Data hiding performance in terms of invisibility of secret images...
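
    The run-length coding step applied to the binary secret images can be sketched as follows (the random-number placement of the codes in the wavelet coefficients is omitted; this shows only the encode/decode pair):

```python
from itertools import groupby

def rle_encode(bits):
    """Run-length encode a binary sequence as (first_bit, [run lengths])."""
    runs = [len(list(g)) for _, g in groupby(bits)]
    return (bits[0] if bits else 0, runs)

def rle_decode(first_bit, runs):
    """Invert rle_encode: alternate bit values for each run."""
    out, bit = [], first_bit
    for r in runs:
        out.extend([bit] * r)
        bit ^= 1
    return out

row = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
first, runs = rle_encode(row)
print(first, runs)                    # 0 [3, 2, 1, 4]
assert rle_decode(first, runs) == row
```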

  19. Electric Arc Furnace Modeling with Artificial Neural Networks and Arc Length with Variable Voltage Gradient

    Directory of Open Access Journals (Sweden)

    Raul Garcia-Segura

    2017-09-01

    Full Text Available Electric arc furnaces (EAFs) contribute almost one third of global steel production. Arc furnaces use a large amount of electrical energy to process scrap or reduced iron, and they are relevant to study because small improvements in their efficiency account for significant energy savings. Optimal controllers need to be designed and proposed to enhance both process performance and energy consumption. Due to the random and chaotic nature of electric arcs, neural networks and other soft computing techniques have been used for modeling EAFs. This study proposes a methodology for modeling EAFs that considers the time-varying arc length as a relevant input parameter to the arc furnace model. Based on actual voltage and current measurements taken from an arc furnace, it was possible to estimate an arc length suitable for modeling the arc furnace using neural networks. The obtained results show that the model reproduces not only stable arc conditions but also unstable arc conditions, which are difficult to identify in a real heat process. The presented model can be applied to the development and testing of control systems to improve furnace energy efficiency and productivity.

  20. The relationship between length of vocational disability, psychiatric illness, life stressors and sociodemographic variables.

    Science.gov (United States)

    Chandarana, P; Jackson, T; Kohr, R; Iezzi, T

    1997-01-01

    The primary objective of this study was to examine the relationship between vocational disability, psychiatric illness, life stressors and sociodemographic factors. Information on a variety of variables was obtained from insurance files of 147 subjects who had submitted claims for monetary compensation on grounds of psychiatric symptoms. The majority of subjects received a diagnosis of mood disorder or anxiety disorder. Extended vocational disability was associated with longer duration of psychiatric illness, rating of poorer prognosis by the treating physician, and lower income and occupational levels. Individuals with recent onset of disability reported more stressors than those experiencing extended disability. Although longer duration of psychiatric illness was associated with vocational disability, other variables play an important role in accounting for extended vocational disability.

  1. Clustering and artificial neural networks: classification of variable lengths of Helminth antigens in set of domains

    Directory of Open Access Journals (Sweden)

    Thiago de Souza Rodrigues

    2004-01-01

    Full Text Available A new scheme is described for representing proteins of different lengths (in number of amino acids) so that they can be presented to Artificial Neural Networks (ANNs) with a fixed number of inputs. K-means clustering of the new vectors, with subsequent classification, was made possible by first applying the dimension-reduction technique Principal Component Analysis. The new representation scheme was applied to a set of 112 antigen sequences from several parasitic helminths, selected from the National Center for Biotechnology Information and classified into four different groups. This bioinformatic tool permitted the establishment of a good correlation with domains that are already well characterized, regardless of the differences between the sequences, as confirmed by the PFAM database. Additionally, sequences were grouped according to their similarity, confirmed by hierarchical clustering using ClustalW.

  2. Length and GC content variability of introns among teleostean genomes in the light of the metabolic rate hypothesis.

    Directory of Open Access Journals (Sweden)

    Ankita Chaurasia

    Full Text Available A comparative analysis of five teleostean genomes, namely zebrafish, medaka, three-spine stickleback, fugu and pufferfish, was performed with the aim to highlight the nature of the forces driving both length and base composition of introns (i.e., bpi and GCi). An inter-genome approach using orthologous intronic sequences was carried out, analyzing independently both variables in pairwise comparisons. An average length shortening of introns was observed at increasing average GCi values. The result was not affected by masking transposable and repetitive elements harbored in the intronic sequences. The routine metabolic rate (mass specific, temperature-corrected using the Boltzmann factor) was measured for each species. A significant correlation held between average differences of metabolic rate, length and GC content, while environmental temperature of fish habitat was not correlated with bpi and GCi. Analyzing the concomitant effect of both variables, i.e., bpi and GCi, at increasing genomic GC content, a decrease of bpi and an increase of GCi was observed for the significant majority of the intronic sequences (from ∼ 40% to ∼ 90% in each pairwise comparison). The opposite event, concomitant increase of bpi and decrease of GCi, was counter-selected (from <1% to ∼ 10% in each pairwise comparison). The results further support the hypothesis that the metabolic rate plays a key role in shaping genome architecture and evolution of vertebrate genomes.
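
    The "mass specific, temperature-corrected" rate mentioned above follows the standard metabolic-theory normalization: divide by body mass, then remove the Boltzmann-factor temperature dependence e^(-E/kT). A minimal sketch (the activation energy E ≈ 0.65 eV is the conventional value from metabolic theory, an assumption here rather than a figure from the paper):

```python
import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV per kelvin
E_ACTIVATION = 0.65         # eV, conventional value from metabolic theory (assumed)

def mass_specific_corrected_rate(rate, mass_g, temp_c):
    """Mass-specific, temperature-corrected metabolic rate:
    divide by body mass, then remove the Boltzmann temperature dependence."""
    temp_k = temp_c + 273.15
    return (rate / mass_g) * math.exp(E_ACTIVATION / (K_BOLTZMANN_EV * temp_k))

# Two fish with the same intrinsic rate measured at different temperatures
# should give (numerically) identical corrected values.
r15 = mass_specific_corrected_rate(rate=10.0, mass_g=100.0, temp_c=15.0)
rate_at_25 = 10.0 * math.exp(-E_ACTIVATION / K_BOLTZMANN_EV * (1 / 298.15 - 1 / 288.15))
r25 = mass_specific_corrected_rate(rate=rate_at_25, mass_g=100.0, temp_c=25.0)
print(r15, r25)
```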

  3. Effects of Unpredictable Variable Prenatal Stress (UVPS) on Bdnf DNA Methylation and Telomere Length in the Adult Rat Brain

    Science.gov (United States)

    Blaze, Jennifer; Asok, A.; Moyer, E. L.; Roth, T. L.; Ronca, A. E.

    2015-01-01

    In utero exposure to stress can shape neurobiological and behavioral outcomes in offspring, producing vulnerability to psychopathology later in life. Animal models of prenatal stress likewise have demonstrated long-term alterations in brain function and behavioral deficits in offspring. For example, using a rodent model of unpredictable variable prenatal stress (UVPS), in which dams are exposed to unpredictable, variable stress across pregnancy, we have found increased body weight and anxiety-like behavior in adult male, but not female, offspring. DNA methylation (addition of methyl groups to cytosines, which normally represses gene transcription) and changes in telomere length (TTAGGG repeats on the ends of chromosomes) are two molecular modifications that result from stress and could be responsible for the long-term effects of UVPS. Here, we measured methylation of brain-derived neurotrophic factor (bdnf), a gene important in development and plasticity, and telomere length in the brains of adult offspring from the UVPS model. Results indicate that prenatally stressed adult males have greater methylation in the medial prefrontal cortex (mPFC) compared to non-stressed controls, while females have greater methylation in the ventral hippocampus compared to controls. Further, prenatally stressed males had shorter telomeres than controls in the mPFC. These findings demonstrate the ability of UVPS to produce epigenetic alterations and changes in telomere length across behaviorally-relevant brain regions, which may have linkages to the phenotypic outcomes.

  4. Length and GC content variability of introns among teleostean genomes in the light of the metabolic rate hypothesis.

    Science.gov (United States)

    Chaurasia, Ankita; Tarallo, Andrea; Bernà, Luisa; Yagi, Mitsuharu; Agnisola, Claudio; D'Onofrio, Giuseppe

    2014-01-01

    A comparative analysis of five teleostean genomes, namely zebrafish, medaka, three-spine stickleback, fugu and pufferfish, was performed with the aim to highlight the nature of the forces driving both length and base composition of introns (i.e., bpi and GCi). An inter-genome approach using orthologous intronic sequences was carried out, analyzing independently both variables in pairwise comparisons. An average length shortening of introns was observed at increasing average GCi values. The result was not affected by masking transposable and repetitive elements harbored in the intronic sequences. The routine metabolic rate (mass specific, temperature-corrected using the Boltzmann factor) was measured for each species. A significant correlation held between average differences of metabolic rate, length and GC content, while environmental temperature of fish habitat was not correlated with bpi and GCi. Analyzing the concomitant effect of both variables, i.e., bpi and GCi, at increasing genomic GC content, a decrease of bpi and an increase of GCi was observed for the significant majority of the intronic sequences (from ∼ 40% to ∼ 90% in each pairwise comparison). The opposite event, concomitant increase of bpi and decrease of GCi, was counter-selected (from <1% to ∼ 10% in each pairwise comparison). The results further support the hypothesis that the metabolic rate plays a key role in shaping genome architecture and evolution of vertebrate genomes.

  5. Influence of Coding Variability in APP-Aβ Metabolism Genes in Sporadic Alzheimer's Disease.

    Directory of Open Access Journals (Sweden)

    Celeste Sassi

    Full Text Available The cerebral deposition of Aβ42, a neurotoxic proteolytic derivative of amyloid precursor protein (APP), is a central event in Alzheimer's disease (AD) (amyloid hypothesis). Given the key role of APP-Aβ metabolism in AD pathogenesis, we selected 29 genes involved in APP processing, Aβ degradation and clearance. We then used exome and genome sequencing to investigate the single independent (single-variant association test) and cumulative (gene-based association test) effects of coding variants in these genes as potential susceptibility factors for AD, in a cohort composed of 332 sporadic and mainly late-onset AD cases and 676 elderly controls from North America and the UK. Our study shows that common coding variability in these genes does not play a major role in disease development. In the single-variant association analysis, none of the main hits was statistically significant after multiple testing correction (significance threshold 1.9e-4). In summary, our data suggest that 1) common coding variability in APP-Aβ genes is not a critical factor for AD development and 2) Aβ degradation and clearance, rather than Aβ production, may play a key role in the etiology of sporadic AD.

  6. Supplemental Dietary Inulin of Variable Chain Lengths Alters Intestinal Bacterial Populations in Young Pigs

    Science.gov (United States)

    Patterson, Jannine K.; Yasuda, Koji; Welch, Ross M.; Miller, Dennis D.; Lei, Xin Gen

    2010-01-01

    Previously, we showed that supplementation of diets with short-chain inulin (P95), long-chain inulin (HP), and a 50:50 mixture of both (Synergy 1) improved body iron status and altered expression of the genes involved in iron homeostasis and inflammation in young pigs. However, the effects of these 3 types of inulin on intestinal bacteria remain unknown. Applying terminal restriction fragment length polymorphism analysis, we determined the abundances of luminal and adherent bacterial populations from 6 segments of the small and large intestines of pigs (n = 4 for each group) fed an iron-deficient basal diet (BD) or the BD supplemented with 4% of P95, Synergy 1, or HP for 5 wk. Compared with BD, all 3 types of inulin significantly enhanced certain bacterial populations, and no differences among the 3 types of inulin on bacterial populations in the lumen contents were found. Meanwhile, all 3 types of inulin suppressed the less desirable bacteria Clostridium spp. and members of the Enterobacteriaceae in the lumen and mucosa of various gut segments. Our findings suggest that the ability of dietary inulin to alter intestinal bacterial populations may partially account for its iron bioavailability-promoting effect and possibly other health benefits. PMID:20980641

  7. Estimating the hemodynamic influence of variable main body-to-iliac limb length ratios in aortic endografts.

    Science.gov (United States)

    Georgakarakos, Efstratios; Xenakis, Antonios; Georgiadis, George S

    2018-02-01

    We conducted a computational study to assess the hemodynamic impact of variant main body-to-iliac limb length (L1/L2) ratios on certain hemodynamic parameters acting on the endograft (EG), in either the normal bifurcated (Bif) or the cross-limb (Cx) fashion. A custom bifurcated 3D model was computationally created and meshed using the commercially available ANSYS ICEM (Ansys Inc., Canonsburg, PA, USA) software. The total length of the EG was kept constant, while the L1/L2 ratio ranged from 0.3 to 1.5 in the Bif and Cx reconstructed EG models. The compliance of the graft was modeled using a fluid-structure interaction method. Important hemodynamic parameters such as the pressure drop along the EG, wall shear stress (WSS) and helicity were calculated. The greatest pressure decrease across the EG was calculated in the peak systolic phase. With increasing L1/L2, the pressure drop was found to increase for the Cx configuration and decrease for the Bif. The greatest helicity (4.1 m/s2) was seen in peak systole of the Cx with a ratio of 1.5, whereas the greatest value for the Bif (2 m/s2) was met in peak systole with the shortest L1/L2 ratio (0.3). Similarly, the maximum WSS value was highest (2.74 Pa) in peak systole for the 1.5 L1/L2 ratio of the Cx configuration, while the maximum WSS value equaled 2 Pa for all length ratios of the Bif modification (with the WSS found for L1/L2 = 0.3 being marginally higher). There was greater discrepancy in the WSS values across L1/L2 ratios for the Cx configuration than for the Bif. Different L1/L2 ratios are shown to have an impact on the pressure distribution along the entire EG, while the length ratio predisposing to the highest helicity or WSS values is also determined by the iliac limb pattern of the EG.
Since current custom-made EG solutions can reproduce variability in main-body/iliac limbs length ratios, further computational as well as clinical research is warranted to delineate and predict the hemodynamic and clinical effect of variable

  8. VARIABILITY OF VALUES OF PHYSICOCHEMICAL WATER QUALITY INDICES ALONG THE LENGTH OF THE IWONICZANKA STREAM

    Directory of Open Access Journals (Sweden)

    Andrzej Bogdał

    2015-11-01

    Full Text Available The paper presents the effect of changes in catchment area management on the values of physicochemical water quality indices along the length of the Iwoniczanka stream, which flows through Iwonicz-Zdrój, one of the oldest health resorts in Poland. Analyses of 14 water quality indices were conducted from November 2013 to May 2014 at five measurement points: two situated in the upper course of the stream, in forest areas; two located in the area of the town of Iwonicz-Zdrój; and one below the rural built-up area. On the basis of the data analysis it was found that the mean values of pH, electrolytic conductivity, sulphates, calcium, total iron and manganese increased along the course of the flowing water, which indicates enrichment of the water in substances originating in the built-up areas. On average, the highest values of biogenic indices and chlorides, and the lowest values of oxygen indices, were registered immediately below the drain collector of the closed sewage treatment plant, indicating pollution of the analysed stream bed with substances previously drained from the plant. Water flowing through the forest areas had the maximum ecological potential; in the built-up areas, due to phosphate concentrations, it was classified to class II, and then, owing to self-purification, it returned to physicochemical parameters appropriate for class I water. The conducted hydro-chemical tests confirmed a significant negative effect of built-up areas on the quality of the flowing waters.

  9. A highly efficient SDRAM controller supporting variable-length burst access and batch process for discrete reads

    Science.gov (United States)

    Li, Nan; Wang, Junzheng

    2016-03-01

    A highly efficient Synchronous Dynamic Random Access Memory (SDRAM) controller supporting variable-length burst access and batch processing of discrete reads is proposed in this paper. Based on the principle of locality, a command First In First Out (FIFO) queue and an address range detector are designed within this controller to accelerate its responses to discrete read requests, which dramatically improves the average Effective Bus Utilization Ratio (EBUR) of the SDRAM. Our controller is finally verified by driving the Micron 256-Mb SDRAM MT48LC16M16A2. Successful simulation and verification results show that our controller exhibits a much higher EBUR than most existing designs in the case of discrete reads.
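    The batching idea can be sketched in software. Below is a minimal, hypothetical Python model (not taken from the paper) of an address-range detector that merges queued discrete read addresses into variable-length bursts; `batch_reads` and `max_burst` are invented names for illustration.

```python
def batch_reads(addresses, max_burst=8):
    """Group queued read addresses into variable-length bursts.
    Consecutive addresses are merged into one burst command of up to
    max_burst words, mimicking how an address-range detector in front
    of a command FIFO can turn discrete reads into fewer, longer
    SDRAM bursts and so raise effective bus utilization.
    """
    bursts = []  # list of (start_address, burst_length)
    # Hypothetical assumption: requests may be reordered within a
    # small batching window, so we sort them first.
    for addr in sorted(addresses):
        if bursts and addr == bursts[-1][0] + bursts[-1][1] \
                and bursts[-1][1] < max_burst:
            bursts[-1] = (bursts[-1][0], bursts[-1][1] + 1)  # extend burst
        else:
            bursts.append((addr, 1))  # start a new burst
    return bursts
```

    Turning seven discrete reads into three burst commands is exactly the kind of reduction in per-command overhead that improves the average EBUR.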

  10. Using Variable-Length Aligned Fragment Pairs and an Improved Transition Function for Flexible Protein Structure Alignment.

    Science.gov (United States)

    Cao, Hu; Lu, Yonggang

    2017-01-01

    With the rapid growth in the number of known protein 3D structures, efficiently comparing protein structures has become an essential and challenging problem in computational structural biology. At present, many protein structure alignment methods have been developed. Among them, flexible structure alignment methods are shown to be superior to rigid structure alignment methods in identifying structural similarities between proteins that have undergone conformational changes. It is also found that methods based on aligned fragment pairs (AFPs) have a special advantage over other approaches in balancing global and local structure similarities. Accordingly, we propose a new flexible protein structure alignment method based on variable-length AFPs. Compared with other methods, the proposed method possesses three main advantages. First, it is based on variable-length AFPs: the length of each AFP is determined separately to maximally represent a locally similar structure fragment, which reduces the number of AFPs. Second, it uses local coordinate systems, which simplify the computation at each step of AFP expansion during AFP identification. Third, it decreases the number of twists by rewarding alignments in which nonconsecutive AFPs share the same transformation, realized by dynamic programming with an improved transition function. The experimental data show that, compared with FlexProt, FATCAT, and FlexSnap, the proposed method achieves comparable results while introducing fewer twists. Meanwhile, it generates results similar to those of the FATCAT method in much less running time, owing to the reduced number of AFPs.

  11. Variable weight Khazani-Syed code using hybrid fixed-dynamic technique for optical code division multiple access system

    Science.gov (United States)

    Anas, Siti Barirah Ahmad; Seyedzadeh, Saleh; Mokhtar, Makhfudzah; Sahbudin, Ratna Kalos Zakiah

    2016-10-01

    Future Internet consists of a wide spectrum of applications with different bit rates and quality of service (QoS) requirements. Prioritizing the services is essential to ensure that the delivery of information is at its best. Existing technologies have demonstrated how service differentiation techniques can be implemented in optical networks using data link and network layer operations. However, a physical layer approach can further improve system performance at a prescribed received signal quality by applying control at the bit level. This paper proposes a coding algorithm to support optical domain service differentiation using spectral amplitude coding techniques within an optical code division multiple access (OCDMA) scenario. A particular user or service has a varying weight applied to obtain the desired signal quality. The properties of the new code are compared with other OCDMA codes proposed for service differentiation. In addition, a mathematical model is developed for performance evaluation of the proposed code using two different detection techniques, namely direct decoding and complementary subtraction.

  12. Deterministic Quantum Secure Direct Communication with Dense Coding and Continuous Variable Operations

    International Nuclear Information System (INIS)

    Han Lianfang; Chen Yueming; Yuan Hao

    2009-01-01

    We propose a deterministic quantum secure direct communication protocol using dense coding. Two check photon sequences are used to check the security of the channels between the message sender and the receiver. Continuous variable operations, instead of the usual discrete unitary operations, are performed on the travel photons, so that the security of the present protocol is enhanced. Therefore some specific attacks, such as the denial-of-service attack, the intercept-measure-resend attack and the invisible photon attack, can be prevented in an ideal quantum channel. In addition, the scheme remains secure in a noisy channel. Furthermore, this protocol has the advantage of high capacity and can be realized in experiment. (general)

  13. High performance reconciliation for continuous-variable quantum key distribution with LDPC code

    Science.gov (United States)

    Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua

    2015-03-01

    Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the correlated strings obtained through the quantum channel between two users. However, the efficiency and speed of previous reconciliation algorithms are low; these problems limit the secure communication distance and the secret key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low-density parity-check (LDPC) codes. The complexity of the proposed algorithm is considerably reduced. By using a graphics processing unit (GPU), our method reaches a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.
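    As a much-simplified illustration of the syndrome-style error correction that underlies such reconciliation, one party can publish the syndrome of her string and the other corrects his noisy copy until the syndromes agree. The toy Python sketch below uses a tiny Hamming-type parity-check matrix and assumes at most one differing bit; real CV-QKD reconciliation uses large LDPC codes with soft-decision decoding, and all names here are hypothetical.

```python
# Toy syndrome-based reconciliation. H is a small parity-check matrix
# with distinct columns, so a single-bit mismatch is identified by
# its syndrome. This is an illustrative stand-in for LDPC decoding.
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]

def syndrome(word):
    """Parity of each check equation, i.e. H * word mod 2."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def reconcile(noisy, syndrome_ref):
    """Flip at most one bit of `noisy` so its syndrome matches
    `syndrome_ref`. The syndrome mismatch equals the H-column of the
    differing position (assuming at most one bit differs)."""
    diff = [a ^ b for a, b in zip(syndrome(noisy), syndrome_ref)]
    if any(diff):
        columns = [[row[i] for row in H] for i in range(len(noisy))]
        pos = columns.index(diff)   # locate the differing bit
        noisy = noisy[:]
        noisy[pos] ^= 1
    return noisy
```

    In a QKD setting only the 3-bit syndrome crosses the public channel, not the 7-bit string itself, which is the essence of reconciliation with linear codes.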

  14. VLSI Design of a Variable-Length FFT/IFFT Processor for OFDM-Based Communication Systems

    Directory of Open Access Journals (Sweden)

    Jen-Chih Kuo

    2003-12-01

    Full Text Available The technique of orthogonal frequency division multiplexing (OFDM) is famous for its robustness against frequency-selective fading channels. This technique has been widely used in many wired and wireless communication systems. In general, the fast Fourier transform (FFT) and inverse FFT (IFFT) operations are used as the modulation/demodulation kernel in OFDM systems, and the sizes of the FFT/IFFT operations vary among different OFDM applications. In this paper, we design and implement a variable-length prototype FFT/IFFT processor to cover different specifications of OFDM applications. The cached-memory FFT architecture is our suggested VLSI system architecture for the prototype FFT/IFFT processor, chosen for its low power consumption. We also implement the twiddle-factor butterfly processing element (PE) based on the coordinate rotation digital computer (CORDIC) algorithm, which avoids the use of a conventional multiplication-and-accumulation unit and instead evaluates the trigonometric functions using only add-and-shift operations. Finally, we implement a variable-length prototype FFT/IFFT processor with TSMC 0.35 μm 1P4M CMOS technology. The simulation results show that the chip can perform 64-2048-point FFT/IFFT operations up to an 80 MHz operating frequency, which meets the speed requirement of most OFDM standards such as WLAN, ADSL, VDSL (256∼2K), DAB, and 2K-mode DVB.
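    The CORDIC idea mentioned above can be sketched in a few lines: rotate the vector (K, 0) through a fixed sequence of angles arctan(2⁻ⁱ), choosing the direction at each step so the residual angle goes to zero. In hardware the multiplications by 2⁻ⁱ are plain bit shifts; this floating-point Python model is an illustrative sketch of the algorithm, not the chip's fixed-point implementation.

```python
import math

def cordic(theta, iterations=32):
    """Rotation-mode CORDIC: approximate (cos(theta), sin(theta)) for
    theta roughly in [-1.74, 1.74] rad using only add/subtract and
    scaling by 2**-i (a bit shift in fixed-point hardware), plus a
    small precomputed arctangent table."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Start from (K, 0), where K pre-compensates the rotation gain.
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0   # steer residual angle toward 0
        x, y, z = (x - d * y * 2.0 ** -i,
                   y + d * x * 2.0 ** -i,
                   z - d * a)
    return x, y
```

    After 32 micro-rotations the residual angle is below arctan(2⁻³¹), so the result matches library sine/cosine to well under 10⁻⁶ without a single multiplier in the datapath (in the fixed-point hardware view).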

  15. Federal Logistics Information System (FLIS) Procedures Manual. Volume 8. Document Identifier Code Input/Output Formats (Fixed Length)

    Science.gov (United States)

    1994-07-01


  16. Variability in interhospital trauma data coding and scoring: A challenge to the accuracy of aggregated trauma registries.

    Science.gov (United States)

    Arabian, Sandra S; Marcus, Michael; Captain, Kevin; Pomphrey, Michelle; Breeze, Janis; Wolfe, Jennefer; Bugaev, Nikolay; Rabinovici, Reuven

    2015-09-01

    Analyses of data aggregated in state and national trauma registries provide the platform for clinical, research, development, and quality improvement efforts in trauma systems. However, interhospital variability and accuracy in data abstraction and coding have not yet been directly evaluated. This multi-institutional, Web-based, anonymous study examines interhospital variability and accuracy in data coding and scoring by registrars. Eighty-two American College of Surgeons (ACS)/state-verified Level I and II trauma centers were invited to determine different data elements, including diagnostic, procedure, and Abbreviated Injury Scale (AIS) coding as well as selected National Trauma Data Bank definitions, for the same fictitious case. Variability and accuracy in data entries were assessed by the maximal percent agreement among the registrars for the tested data elements, and 95% confidence intervals were computed to compare this level of agreement to the ideal value of 100%. Variability and accuracy in all elements were compared (χ2 testing) based on Trauma Quality Improvement Program (TQIP) membership, level of trauma center, ACS verification, and registrar's certifications. Fifty registrars (61%) completed the survey. The overall accuracy for all tested elements was 64%. Variability was noted in all examined parameters except for the place-of-occurrence code in all groups and the lower-extremity AIS code in Level II trauma centers and in the Certified Specialist in Trauma Registry- and Certified Abbreviated Injury Scale Specialist-certified registrar groups. No differences in variability were noted when groups were compared based on TQIP membership, level of center, ACS verification, or registrar's certifications, except for prehospital Glasgow Coma Scale (GCS), where TQIP respondents agreed more than non-TQIP centers (p = 0.004). There is variability and inaccuracy in interhospital data coding and scoring of injury information. This finding casts doubt on the

  17. Interannual variations in length-of-day (LOD) as a tool to assess climate variability and climate change

    Science.gov (United States)

    Lehmann, E.

    2016-12-01

    On interannual time scales the atmosphere significantly affects fluctuations in the geodetic quantity of length-of-day (LOD). This effect is directly proportional to perturbations in the relative angular momentum of the atmosphere (AAM) computed from zonal winds. During El Niño events, tropospheric westerlies increase due to elevated sea surface temperatures (SSTs) in the Pacific, inducing peak anomalies in relative AAM and, correspondingly, in LOD. However, El Niño events affect LOD variations with differing strength, and the causes of this varying effect are not yet clear. Here, we investigate the LOD-El Niño relationship in the 20th and 21st centuries (1982-2100) to assess whether the quantity of LOD can be used as a geophysical tool for studying variability and change in a future climate. In our analysis we applied a windowed discrete Fourier transform to all de-seasonalized data to remove climatic signals outside of the El Niño frequency band. LOD (data: IERS) was related in space and time to relative AAM and SSTs (data: ERA-40 reanalysis, IPCC ECHAM05-OM1 20C, A1B). Results from mapped Pearson correlation coefficients and time-frequency behavior analysis identified a teleconnection pattern that we term the EN≥65%-index. The EN≥65%-index prescribes a significant change in length-of-day variation of +65% or more related to (1) SST anomalies of >2° in the Pacific Niño region (160°E-80°W, 5°S-5°N), (2) corresponding stratospheric warming anomalies of the quasi-biennial oscillation (QBO), and (3) strong westerly winds in the lower equatorial stratosphere. We show that the coupled atmosphere-ocean conditions prescribed in the EN≥65%-index apply to the extreme El Niño events of 1982/83 and 1997/98, and to 75% of all El Niño events in the last third of the 21st century. At that period of time the EN≥65%-index describes a projected altered base state of the equatorial Pacific that shows almost continuous El Niño conditions under climate warming.

  18. Improvement of genome assembly completeness and identification of novel full-length protein-coding genes by RNA-seq in the giant panda genome.

    Science.gov (United States)

    Chen, Meili; Hu, Yibo; Liu, Jingxing; Wu, Qi; Zhang, Chenglin; Yu, Jun; Xiao, Jingfa; Wei, Fuwen; Wu, Jiayan

    2015-12-11

    High-quality and complete gene models are the basis of whole genome analyses. The giant panda (Ailuropoda melanoleuca) genome was the first genome sequenced on the basis of solely short reads, but the genome annotation had lacked the support of transcriptomic evidence. In this study, we applied RNA-seq to globally improve the genome assembly completeness and to detect novel expressed transcripts in 12 tissues from giant pandas, by using a transcriptome reconstruction strategy that combined reference-based and de novo methods. Several aspects of genome assembly completeness in the transcribed regions were effectively improved by the de novo assembled transcripts, including genome scaffolding, the detection of small-size assembly errors, the extension of scaffold/contig boundaries, and gap closure. Through expression and homology validation, we detected three groups of novel full-length protein-coding genes. A total of 12.62% of the novel protein-coding genes were validated by proteomic data. GO annotation analysis showed that some of the novel protein-coding genes were involved in pigmentation, anatomical structure formation and reproduction, which might be related to the development and evolution of the black-white pelage, pseudo-thumb and delayed embryonic implantation of giant pandas. The updated genome annotation will help further giant panda studies from both structural and functional perspectives.

  19. Describing the interannual variability of precipitation with the derived distribution approach: effects of record length and resolution

    Directory of Open Access Journals (Sweden)

    C. I. Meier

    2016-10-01

    Full Text Available Interannual variability of precipitation is traditionally described by fitting a probability model to yearly precipitation totals. There are three potential problems with this approach: a long record (at least 25–30 years) is required in order to fit the model, years with missing rainfall data cannot be used, and the data need to be homogeneous, i.e., one has to assume stationarity. To overcome some of these limitations, we test an alternative methodology proposed by Eagleson (1978), based on the derived distribution (DD) approach. It allows estimation of the probability density function (pdf) of annual rainfall without requiring long records, provided that continuously gauged precipitation data are available to derive external storm properties. The DD approach combines marginal pdfs for storm depths and inter-arrival times to obtain an analytical formulation of the distribution of annual precipitation, under the simplifying assumptions of independence between events and independence between storm depth and time to the next storm. Because it is based on information about storms and not on annual totals, the DD can make use of information from years with incomplete data; more importantly, only a few years of rainfall measurements should suffice to estimate the parameters of the marginal pdfs, at least at locations where it rains with some regularity. For two temperate locations in different climates (Concepción, Chile, and Lugano, Switzerland), we randomly resample shortened time series to evaluate in detail the effects of record length on the DD, comparing the results with the traditional approach of fitting a normal (or lognormal) distribution. Then, at the same two stations, we assess the biases introduced in the DD when using daily totalized rainfall, instead of continuously gauged data. Finally, for randomly selected periods between 3 and 15 years in length, we conduct full blind tests at 52 high-quality gauging stations in Switzerland
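    Under the independence assumptions stated above, the DD's target, the distribution of annual totals, can also be approximated by direct simulation. The sketch below is illustrative only: the exponential marginals and the parameter values are assumptions for the example, not taken from the study.

```python
import random

def annual_total(mean_depth_mm, mean_interarrival_days, season_days, rng):
    """Simulate one year's precipitation total under the DD-style
    assumptions: independent exponential inter-arrival times between
    storms and independent exponential storm depths."""
    t = 0.0
    total = 0.0
    while True:
        t += rng.expovariate(1.0 / mean_interarrival_days)
        if t > season_days:
            return total
        total += rng.expovariate(1.0 / mean_depth_mm)

rng = random.Random(42)
# Hypothetical climate: a storm every 5 days on average, 10 mm each.
totals = [annual_total(10.0, 5.0, 365.0, rng) for _ in range(2000)]
mean_total = sum(totals) / len(totals)  # expected near (365/5) * 10 = 730 mm
```

    The empirical distribution of `totals` is the Monte Carlo counterpart of the analytical pdf that the DD approach derives in closed form from the two marginal pdfs.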

  20. Real-time monitoring of ischemic and contralateral brain pO2 during stroke by variable length multisite resonators.

    Science.gov (United States)

    Hou, Huagang; Li, Hongbin; Dong, Ruhong; Khan, Nadeem; Swartz, Harold

    2014-06-01

    Electron paramagnetic resonance (EPR) oximetry using variable-length multi-probe implantable resonators (IRs) was used to investigate the temporal changes in ischemic and contralateral brain pO2 during stroke in rats. The EPR signal-to-noise ratio (S/N) of the IRs with four sensor loops at depths of up to 11 mm was compared in vitro with direct implantation of lithium phthalocyanine (LiPc, oximetry probe) deposits. These IRs were used to follow the temporal changes in pO2 at two sites in each hemisphere during ischemia induced by left middle cerebral artery occlusion (MCAO) in rats breathing 30% O2 or 100% O2. The S/N ratios of the IRs were significantly greater than those of the LiPc deposits. A similar pO2 at the two sites in each hemisphere prior to the onset of ischemia was observed in rats breathing 30% O2. However, a significant decline in the pO2 of the left cortex and striatum occurred during ischemia, while no change in the pO2 of the contralateral brain was observed. A significant increase in the pO2 of only the contralateral non-ischemic brain was observed in rats breathing 100% O2. No significant difference in infarct volume was evident between animals breathing 30% O2 or 100% O2 during ischemia. EPR oximetry with IRs can repeatedly assess temporal changes in brain pO2 at four sites simultaneously during stroke. This oximetry approach can be used to test and develop interventions to rescue ischemic tissue by modulating cerebral pO2 during stroke. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Genetic variability of the length of postpartum anoestrus in Charolais cows and its relationship with age at puberty

    Directory of Open Access Journals (Sweden)

    Ménissier François

    2000-07-01

    Full Text Available Abstract Fertility records (n = 1802) were collected from 615 Charolais primiparous and multiparous cows managed in an experimental herd over an 11-year period. The objectives of the study were to describe the genetic variability of the re-establishment of postpartum reproductive activity and the relationship with body weight (BW) and body condition score (BCS) at calving and age at puberty. The length of postpartum anoestrus was estimated based on weekly blood progesterone assays and on twice daily detection of oestrus behaviour. The first oestrus behaviour was observed 69 days (± 25 days s.d.) post-calving and the first positive progesterone measurement (≥ 1 ng mL-1) was observed at 66 days (± 22 days s.d.) for the group of easy-calving multiparous suckling cows. Estimates of heritability and repeatability were h2 = 0.12 and r = 0.38 respectively, for the interval from calving to first oestrus (ICO). Corresponding values were h2 = 0.35 and r = 0.60 for the interval from calving to the first positive progesterone test (ICP). The genetic correlation between both criteria was high (rg = 0.98). The genetic relationships between postpartum intervals and BW and BCS of the female at calving were negative: the genetic aptitude to be heavier at calving and to have high body reserves was related to shorter postpartum intervals. A favourable genetic correlation between age at puberty and postpartum intervals was found (rg between 0.45 and 0.70). The heifers which were genetically younger at puberty also had shorter postpartum intervals.

  2. 1×4 Optical packet switching of variable length 640 Gbit/s data packets using in-band optical notch-filter labeling

    DEFF Research Database (Denmark)

    Medhin, Ashenafi Kiros; Kamchevska, Valerija; Galili, Michael

    2014-01-01

    We experimentally perform 1×4 optical packet switching of variable length 640 Gbit/s OTDM data packets using in-band notch-filter labeling with only 2.7-dB penalty. Up to 8 notches are employed to demonstrate scalability of the labeling scheme to 1×256 switching operation.

  3. Cellular and circuit mechanisms maintain low spike co-variability and enhance population coding in somatosensory cortex

    Directory of Open Access Journals (Sweden)

    Cheng eLy

    2012-03-01

    Full Text Available The responses of cortical neurons are highly variable across repeated presentations of a stimulus. Understanding this variability is critical for theories of both sensory and motor processing, since response variance affects the accuracy of neural codes. Despite this influence, the cellular and circuit mechanisms that shape the trial-to-trial variability of population responses remain poorly understood. We used a combination of experimental and computational techniques to uncover the mechanisms underlying response variability of populations of pyramidal (E) cells in layer 2/3 of rat whisker barrel cortex. Spike trains recorded from pairs of E-cells during either spontaneous activity or whisker-deflection responses show similarly low levels of spiking co-variability, despite large differences in network activation between the two states. We developed network models that show how spike-threshold nonlinearities dilute E-cell spiking co-variability during spontaneous activity and low-velocity whisker deflections. In contrast, during high-velocity whisker deflections, cancellation mechanisms mediated by feedforward inhibition maintain low E-cell pairwise co-variability. Thus, the combination of these two mechanisms ensures low E-cell population variability over a wide range of whisker deflection velocities. Finally, we show how this active decorrelation of population variability leads to a drastic increase in the population information about whisker velocity. The canonical cellular and circuit components of our study suggest that low network variability over a broad range of neural states may generalize across the nervous system.

  4. FODA/IBEA satellite access scheme for MIXED traffic at variable bit and coding rates system description

    OpenAIRE

    Celandroni, Nedo; Ferro, Erina; Mihal, Vlado; Potortì, Francesco

    1992-01-01

    This report describes the FODA system working at variable coding and bit rates (FODA/IBEA-TDMA). FODA/IBEA is the natural evolution of the FODA-TDMA satellite access scheme, which worked at a 2 Mbit/s fixed rate with data either rate-1/2 coded or uncoded. FODA-TDMA was used in the European SATINE-II experiment [8]. We remind the reader that the term FODA/IBEA system comprises the FODA/IBEA-TDMA (1) satellite access scheme and the hardware prototype realised by Marconi R.C. (U.K.). Both of them come fro...

  5. Achievable Performance of Zero-Delay Variable-Rate Coding in Rate-Constrained Networked Control Systems with Channel Delay

    DEFF Research Database (Denmark)

    Barforooshan, Mohsen; Østergaard, Jan; Stavrou, Fotios

    2017-01-01

    This paper presents an upper bound on the minimum data rate required to achieve a prescribed closed-loop performance level in networked control systems (NCSs). The considered feedback loop includes a linear time-invariant (LTI) plant with a single measurement output and a single control input. Moreover, in this NCS, a causal but otherwise unconstrained feedback system carries out zero-delay variable-rate coding and control. Between the encoder and decoder, data is exchanged over a rate-limited noiseless digital channel with a known constant time delay. Here we propose a linear source-coding scheme

  6. Buccal telomere length and its associations with cortisol, heart rate variability, heart rate, and blood pressure responses to an acute social evaluative stressor in college students.

    Science.gov (United States)

    Woody, Alex; Hamilton, Katrina; Livitz, Irina E; Figueroa, Wilson S; Zoccola, Peggy M

    2017-05-01

    Understanding the relationship between stress and telomere length (a marker of cellular aging) is of great interest for reducing aging-related disease and death. One important aspect of acute stress exposure that may underlie detrimental effects on health is physiological reactivity to the stressor. This study tested the relationship between buccal telomere length and physiological reactivity (salivary cortisol reactivity and total output, heart rate (HR) variability, blood pressure, and HR) to an acute psychosocial stressor in a sample of 77 (53% male) healthy young adults. Consistent with predictions, greater reductions in HR variability (HRV) in response to a stressor and greater cortisol output during the study session were associated with shorter relative buccal telomere length (i.e. greater cellular aging). However, the relationship between cortisol output and buccal telomere length became non-significant when adjusting for medication use. Contrary to past findings and study hypotheses, associations between cortisol, blood pressure, and HR reactivity and relative buccal telomere length were not significant. Overall, these findings may indicate there are limited and mixed associations between stress reactivity and telomere length across physiological systems.

  7. Clustering of Beijing genotype Mycobacterium tuberculosis isolates from the Mekong delta in Vietnam on the basis of variable number of tandem repeat versus restriction fragment length polymorphism typing.

    NARCIS (Netherlands)

    Huyen, M.N.; Kremer, K.; Lan, N.T.; Buu, T.N.; Cobelens, F.G.; Tiemersma, E.W.; Haas, P. de; Soolingen, D. van

    2013-01-01

    BACKGROUND: In comparison to restriction fragment length polymorphism (RFLP) typing, variable number of tandem repeat (VNTR) typing is easier to perform, faster and yields results in a simple, numerical format. Therefore, this technique has gained recognition as the new international gold standard

  8. Clustering of Beijing genotype Mycobacterium tuberculosis isolates from the Mekong delta in Vietnam on the basis of variable number of tandem repeat versus restriction fragment length polymorphism typing

    NARCIS (Netherlands)

    Huyen, Mai N. T.; Kremer, Kristin; Lan, Nguyen T. N.; Buu, Tran N.; Cobelens, Frank G. J.; Tiemersma, Edine W.; de Haas, Petra; van Soolingen, Dick

    2013-01-01

    In comparison to restriction fragment length polymorphism (RFLP) typing, variable number of tandem repeat (VNTR) typing is easier to perform, faster and yields results in a simple, numerical format. Therefore, this technique has gained recognition as the new international gold standard in typing of

  9. Method of Storing Raster Image in Run Lengths Having Variable Numbers of Bytes and Medium with Raster Image Thus Stored

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The invention implements a run-length file format with improved space-saving qualities. The file starts with a header in ASCII format and includes information such as...
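    A run-length format in this spirit can be sketched as follows. This hypothetical Python example (the actual patented file format is not specified in the record) stores each run as a base-128 varint run length followed by the value byte, so short runs cost a single length byte and long runs use as many length bytes as needed:

```python
def rle_encode(data: bytes) -> bytes:
    """Encode data as (run-length, value) pairs. The run length is a
    little-endian base-128 varint: 7 payload bits per byte, with the
    high bit set on all but the final length byte."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                      # extend the current run
        n = j - i
        while n >= 0x80:                # emit varint continuation bytes
            out.append((n & 0x7F) | 0x80)
            n >>= 7
        out.append(n)                   # final length byte
        out.append(data[i])             # the repeated value
        i = j
    return bytes(out)

def rle_decode(enc: bytes) -> bytes:
    """Invert rle_encode: read a varint run length, then one value byte."""
    out = bytearray()
    i = 0
    while i < len(enc):
        n, shift = 0, 0
        while enc[i] & 0x80:            # accumulate continuation bytes
            n |= (enc[i] & 0x7F) << shift
            shift += 7
            i += 1
        n |= enc[i] << shift
        i += 1
        out.extend(enc[i:i + 1] * n)    # expand the run
        i += 1
    return bytes(out)
```

    A 1000-byte run then compresses to 3 bytes: two varint length bytes plus the value byte, which is where the variable number of bytes per run pays off.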

  10. Rn3D: A finite element code for simulating gas flow and radon transport in variably saturated, nonisothermal porous media

    International Nuclear Information System (INIS)

    Holford, D.J.

    1994-01-01

    This document is a user's manual for the Rn3D finite element code. Rn3D was developed to simulate gas flow and radon transport in variably saturated, nonisothermal porous media. The Rn3D model is applicable to a wide range of problems involving radon transport in soil because it can simulate either steady-state or transient flow and transport in one, two, or three dimensions (including radially symmetric two-dimensional problems). The porous materials may be heterogeneous and anisotropic. This manual describes all pertinent mathematics related to the governing, boundary, and constitutive equations of the model, as well as the development of the finite element equations used in the code. Instructions are given for constructing Rn3D input files and executing the code, along with a description of all output files generated by the code. Five verification problems are given that test various aspects of code operation, complete with example input files, FORTRAN programs for the respective analytical solutions, and plots of model results. An example simulation is presented to illustrate the type of problem Rn3D is designed to solve. Finally, instructions are given on how to convert Rn3D to simulate systems other than radon, air, and water

  11. VARIABILITY OF LENGTH OF STEM OF DETERMINATE AND INDETERMINATE CULTIVARS OF COMMON VETCH (VICIA SATIVA L. SSP. SATIVA) AND ITS IMPACT ON SELECTED CROPPING FEATURES

    Directory of Open Access Journals (Sweden)

    Jadwiga ANDRZEJEWSKA

    2006-12-01

    Full Text Available In 2001 and 2002, a study comprising six experiments was conducted to examine the determinants of stem-length variability and its impact on cropping features of determinate and indeterminate cultivars of common vetch. Rainfall in June and July, as well as during the whole growing season, was positively correlated with stem length but negatively correlated with seed yield, to a larger extent in the group of indeterminate cultivars than in the determinate one. Duration of the blooming stage, stem length, and seed yield showed the largest variability in both groups. An increase in the stem length of indeterminate cultivars led to delayed and less even maturation and to decreases in thousand-seed weight and seed yield. An increase in the stem length of determinate cultivars delayed the phase of technical maturity and decreased the evenness of plant maturation. Determinate growth of common vetch did not reduce lodging.

  12. Controlled dense coding for continuous variables using three-particle entangled states

    CERN Document Server

    Jing Zhang; Kun Chi Peng; 10.1103/PhysRevA.66.032318

    2002-01-01

    A simple scheme to realize quantum controlled dense coding with bright tripartite entangled light generated from nondegenerate optical parametric amplifiers is proposed in this paper. The quantum channel between Alice and Bob is controlled by Claire. Since a local oscillator and balanced homodyne detector are not needed, the proposed protocol is easy to realize experimentally. (15 refs)

  13. Investigating the Effect of Recruitment Variability on Length-Based Recruitment Indices for Antarctic Krill Using an Individual-Based Population Dynamics Model

    Science.gov (United States)

    Thanassekos, Stéphane; Cox, Martin J.; Reid, Keith

    2014-01-01

    Antarctic krill (Euphausia superba; herein krill) is monitored as part of an on-going fisheries observer program that collects length-frequency data. A krill feedback management programme is currently being developed, and as part of this development, the utility of data-derived indices describing population level processes is being assessed. To date, however, little work has been carried out on the selection of optimum recruitment indices and it has not been possible to assess the performance of length-based recruitment indices across a range of recruitment variability. Neither has there been an assessment of uncertainty in the relationship between an index and the actual level of recruitment. Thus, until now, it has not been possible to take into account recruitment index uncertainty in krill stock management or when investigating relationships between recruitment and environmental drivers. Using length-frequency samples from a simulated population – where recruitment is known – the performance of six potential length-based recruitment indices is assessed, by exploring the index-to-recruitment relationship under increasing levels of recruitment variability (from ±10% to ±100% around a mean annual recruitment). The annual minimum of the proportion of individuals smaller than 40 mm (F40 min, %) was selected because it had the most robust index-to-recruitment relationship across differing levels of recruitment variability. The relationship was curvilinear and best described by a power law. Model uncertainty was described using the 95% prediction intervals, which were used to calculate coverage probabilities and assess model performance. Despite being the optimum recruitment index, the performance of F40 min degraded under high (>50%) recruitment variability. Due to the persistence of cohorts in the population over several years, the inclusion of F40 min values from preceding years in the relationship used to estimate recruitment in a given year improved its
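
The F40 min index described above is straightforward to compute from length-frequency data. A minimal sketch, assuming each monthly sample is simply a list of body lengths in mm (the 40 mm threshold comes from the abstract; the function names and data layout are illustrative, not the study's code):

```python
def f40(lengths_mm, threshold_mm=40.0):
    """Percentage of sampled individuals smaller than the threshold length."""
    return 100.0 * sum(l < threshold_mm for l in lengths_mm) / len(lengths_mm)

def f40_min(monthly_samples):
    """Annual length-based recruitment index: the minimum monthly F40 (%)."""
    return min(f40(sample) for sample in monthly_samples)
```

Relating F40 min to the actual level of recruitment would then require fitting the power-law relationship reported in the study.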

  14. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
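
Dynamic Shannon coding builds on static Shannon coding, in which a symbol with probability p receives a codeword of length ceil(log2(1/p)). A minimal sketch of the static construction only (canonical codeword assignment; this is not the paper's dynamic algorithm):

```python
import math

def shannon_code(probs):
    """Build a prefix-free code with codeword lengths ceil(log2(1/p)).

    Codewords are assigned canonically: symbols are sorted by length and
    each gets the next available binary value, left-shifted with zeros
    when moving to a longer length, which keeps the code prefix-free.
    """
    by_length = sorted((math.ceil(-math.log2(p)), s) for s, p in probs.items())
    # These lengths satisfy the Kraft inequality, so the assignment works.
    assert sum(2.0 ** -l for l, _ in by_length) <= 1.0
    code, val, prev_len = {}, 0, by_length[0][0]
    for l, s in by_length:
        val <<= l - prev_len          # pad with zeros up to the new length
        code[s] = format(val, f"0{l}b")
        val += 1
        prev_len = l
    return code
```

For a dyadic source the result coincides with Huffman coding: `shannon_code({'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125})` yields `{'a': '0', 'b': '10', 'c': '110', 'd': '111'}`.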

  15. A modified carrier-to-code leveling method for retrieving ionospheric observables and detecting short-term temporal variability of receiver differential code biases

    Science.gov (United States)

    Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Xiao; Li, Min

    2018-03-01

    Sensing the ionosphere with the global positioning system involves two sequential tasks, namely the ionospheric observable retrieval and the ionospheric parameter estimation. A prominent source of error has long been identified as short-term variability in receiver differential code bias (rDCB). We modify the carrier-to-code leveling (CCL), a method commonly used to accomplish the first task, through assuming rDCB to be unlinked in time. Aside from the ionospheric observables, which are affected by, among others, the rDCB at one reference epoch, the Modified CCL (MCCL) can also provide the rDCB offsets with respect to the reference epoch as by-products. Two consequences arise. First, MCCL is capable of excluding the effects of time-varying rDCB from the ionospheric observables, which, in turn, improves the quality of ionospheric parameters of interest. Second, MCCL has significant potential as a means to detect between-epoch fluctuations experienced by rDCB of a single receiver.

  16. Validation of the Danish 7-day pre-coded food diary among adults: energy intake v. energy expenditure and recording length

    DEFF Research Database (Denmark)

    Biltoft-Jensen, Anja Pia; Matthiessen, Jeppe; Rasmussen, Lone Banke

    2009-01-01

    Under-reporting of energy intake (EI) is a well-known problem when measuring dietary intake in free-living populations. The present study aimed at quantifying misreporting by comparing EI estimated from the Danish pre-coded food diary against energy expenditure (EE) measured with a validated...... position-and-motion instrument (ActiReg®). Further, the influence of recording length on EI:BMR, percentage consumers, the number of meal occasions and recorded food items per meal was examined. A total of 138 Danish volunteers aged 20–59 years wore the ActiReg® and recorded their food intake for 7...... for EI and EE were − 6·29 and 3·09 MJ/d. Of the participants, 73 % were classified as acceptable reporters, 26 % as under-reporters and 1 % as over-reporters. EI:BMR was significantly lower on 1–3 consecutive recording days compared with 4–7 recording days (P food...

  17. Motion of variable-length MreB filaments at the bacterial cell membrane influences cell morphology

    OpenAIRE

    Reimold, Christian; Defeu Soufo, Herve Joel; Dempwolff, Felix; Graumann, Peter L.

    2013-01-01

    The maintenance of rod-cell shape in many bacteria depends on actin-like MreB proteins and several membrane proteins that interact with MreB. Using superresolution microscopy, we show that at 50-nm resolution, Bacillus subtilis MreB forms filamentous structures of length up to 3.4 μm underneath the cell membrane, which run at angles diverging up to 40° relative to the cell circumference. MreB from Escherichia coli forms at least 1.4-μm-long filaments. MreB filaments move along various tracks ...

  18. Motion of variable-length MreB filaments at the bacterial cell membrane influences cell morphology.

    Science.gov (United States)

    Reimold, Christian; Defeu Soufo, Herve Joel; Dempwolff, Felix; Graumann, Peter L

    2013-08-01

    The maintenance of rod-cell shape in many bacteria depends on actin-like MreB proteins and several membrane proteins that interact with MreB. Using superresolution microscopy, we show that at 50-nm resolution, Bacillus subtilis MreB forms filamentous structures of length up to 3.4 μm underneath the cell membrane, which run at angles diverging up to 40° relative to the cell circumference. MreB from Escherichia coli forms at least 1.4-μm-long filaments. MreB filaments move along various tracks with a maximal speed of 85 nm/s, and the loss of ATPase activity leads to the formation of extended and static filaments. Suboptimal growth conditions lead to formation of patch-like structures rather than extended filaments. Coexpression of wild-type MreB with MreB mutated in the subunit interface leads to formation of shorter MreB filaments and a strong effect on cell shape, revealing a link between filament length and cell morphology. Thus MreB has an extended-filament architecture with the potential to position membrane proteins over long distances, whose localization in turn may affect the shape of the cell wall.

  19. A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.

    Science.gov (United States)

    Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco

    2005-02-01

    Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n(g) approximately 100-150 genes). In the present investigation, a large random European population sample (average n(g) approximately 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005), much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.

  20. Nutritional assessment: comparison of clinical assessment and objective variables for the prediction of length of hospital stay and readmission.

    Science.gov (United States)

    Jeejeebhoy, Khursheed N; Keller, Heather; Gramlich, Leah; Allard, Johane P; Laporte, Manon; Duerksen, Donald R; Payette, Helene; Bernier, Paule; Vesnaver, Elisabeth; Davidson, Bridget; Teterina, Anastasia; Lou, Wendy

    2015-05-01

    Nutritional assessment commonly includes multiple nutrition indicators (NIs). To promote efficiency, a minimum set is needed for the diagnosis of malnutrition in the acute care setting. The objective was to compare the ability of different NIs to predict outcomes of length of hospital stay and readmission to refine the detection of malnutrition in acute care. This was a prospective cohort study of 1022 patients recruited from 18 acute care hospitals (academic and community) from 8 provinces across Canada between 1 July 2010 and 28 February 2013. Participants were patients aged ≥18 y admitted to medical and surgical wards. NIs measured at admission were subjective global assessment (SGA; SGA A = well nourished, SGA B = mild or moderate malnutrition, and SGA C = severe malnutrition), Nutrition Risk Screening (2002), body weight, midarm and calf circumference, serum albumin, handgrip strength (HGS), and patient self-assessment of food intake. Logistic regression determined the independent effect of NIs on the outcomes of length of hospital stay and readmission. After we controlled for age, sex, and diagnosis, only SGA C (OR: 2.19; 95% CI: 1.28, 3.75), HGS (OR: 0.98; 95% CI: 0.96, 0.99 per kg of increase), and reduced food intake during the first week of hospitalization (OR: 1.51; 95% CI: 1.08, 2.11) were independent predictors of length of stay. SGA C (OR: 2.12; 95% CI: 1.24, 3.93) and HGS (OR: 0.96; 95% CI: 0.94, 0.98), but not food intake, were independent predictors of 30-d readmission. SGA, HGS, and food intake were thus independent predictors of malnutrition-related outcomes. Because food intake in this study was judged days after admission and HGS has a wide range of normal values, SGA is the single best predictor and should be advocated as the primary measure for diagnosis of malnutrition. This study was registered at clinicaltrials.gov as NCT02351661. © 2015 American Society for Nutrition.

  1. Length of Variable Numbers of Tandem Repeats in the Carboxyl Ester Lipase (CEL) Gene May Confer Susceptibility to Alcoholic Liver Cirrhosis but Not Alcoholic Chronic Pancreatitis.

    Science.gov (United States)

    Fjeld, Karianne; Beer, Sebastian; Johnstone, Marianne; Zimmer, Constantin; Mössner, Joachim; Ruffert, Claudia; Krehan, Mario; Zapf, Christian; Njølstad, Pål Rasmus; Johansson, Stefan; Bugert, Peter; Miyajima, Fabio; Liloglou, Triantafillos; Brown, Laura J; Winn, Simon A; Davies, Kelly; Latawiec, Diane; Gunson, Bridget K; Criddle, David N; Pirmohamed, Munir; Grützmann, Robert; Michl, Patrick; Greenhalf, William; Molven, Anders; Sutton, Robert; Rosendahl, Jonas

    2016-01-01

    Carboxyl-ester lipase (CEL) contributes to fatty acid ethyl ester metabolism, which is implicated in alcoholic pancreatitis. The CEL gene harbours a variable number of tandem repeats (VNTR) region in exon 11. Variation in this VNTR has been linked to monogenic pancreatic disease, while conflicting results were reported for chronic pancreatitis (CP). Here, we aimed to investigate a potential association of CEL VNTR lengths with alcoholic CP. Overall, 395 alcoholic CP patients, 218 patients with alcoholic liver cirrhosis (ALC) serving as controls with a comparable amount of alcohol consumed, and 327 healthy controls from Germany and the United Kingdom (UK) were analysed by determination of fragment lengths by capillary electrophoresis. Allele frequencies and genotypes of different VNTR categories were compared between the groups. Twelve repeats were overrepresented in UK alcoholic CP patients (P = 0.04) compared to controls, whereas twelve repeats were enriched in German ALC compared to alcoholic CP patients (P = 0.03). Frequencies of CEL VNTR lengths of 14 and 15 repeats differed between German ALC patients and healthy controls (P = 0.03 and 0.008, respectively). However, in the genotype and pooled analyses of VNTR lengths, no statistically significant association was detected. Additionally, the 16-16 genotype as well as 16 repeats were more frequent in UK ALC than in alcoholic CP patients (P = 0.034 and 0.02, respectively). In all other calculations, including pooled German and UK data, allele frequencies and genotype distributions did not differ significantly between patients and controls or between alcoholic CP and ALC. We did not obtain evidence that CEL VNTR lengths are associated with alcoholic CP. However, our results suggest that CEL VNTR lengths might associate with ALC, a finding that needs to be clarified in larger cohorts.

  2. Analysis of Causes of Non-Uniform Flow Distribution in Manifold Systems with Variable Flow Rate along Length

    Science.gov (United States)

    Zemlyanaya, N. V.; Gulyakin, A. V.

    2017-11-01

    Achieving uniform flow distribution in perforated manifolds is a relevant engineering task. The efficiency of water supply, sewerage, and ventilation systems is determined by the hydraulics of flow with variable mass. An extensive review of the available literature showed that achieving a uniform flow distribution through all of the outlets is almost impossible. In this work we analysed studies by other authors together with our own numerical experiments performed with the software package ANSYS 16.1. The results allowed us to formulate the main causes of non-uniform flow distribution. We also propose a hypothesis explaining the static pressure rise at the end of a perforated manifold.

  3. Recombination events and variability among full-length genomes of co-circulating molluscum contagiosum virus subtypes 1 and 2.

    Science.gov (United States)

    López-Bueno, Alberto; Parras-Moltó, Marcos; López-Barrantes, Olivia; Belda, Sylvia; Alejo, Alí

    2017-05-01

    Molluscum contagiosum virus (MCV) is the sole member of the Molluscipoxvirus genus and causes a highly prevalent human disease of the skin characterized by the formation of a variable number of lesions that can persist for prolonged periods of time. Two major genotypes, subtype 1 and subtype 2, are recognized, although currently only a single complete genomic sequence corresponding to MCV subtype 1 is available. Using next-generation sequencing techniques, we report the complete genomic sequence of four new MCV isolates, including the first one derived from a subtype 2. Comparisons suggest a relatively distant evolutionary split between both MCV subtypes. Further, our data illustrate concurrent circulation of distinct viruses within a population and reveal the existence of recombination events among them. These results help identify a set of MCV genes with potentially relevant roles in molluscum contagiosum epidemiology and pathogenesis.

  4. Estimation of genetic variability and selection response for clutch length in dwarf brown-egg layers carrying or not the naked neck gene

    Directory of Open Access Journals (Sweden)

    Tixier-Boichard Michèle

    2003-03-01

    In order to investigate the possibility of using the dwarf gene for egg production, two dwarf brown-egg laying lines were selected for 16 generations on average clutch length; one line (L1) was normally feathered and the other (L2) was homozygous for the naked neck gene NA. A control line from the same base population, dwarf and segregating for the NA gene, was maintained under random mating during the selection experiment. The average clutch length was normalized using a Box-Cox transformation. Genetic variability and selection response were estimated either with mixed model methodology, or with the classical methods of calculating genetic gain as the deviation from the control line, and realized heritability as the ratio of the selection response to the cumulative selection differential. Heritability of average clutch length was estimated to be 0.42 ± 0.02 with a multiple-trait animal model, whereas the estimates of the realized heritability were lower, being 0.28 and 0.22 in lines L1 and L2, respectively. REML estimates of heritability were found to decline over generations of selection, suggesting a departure from the infinitesimal model, either because a limited number of genes was involved or because their frequencies were changed. The yearly genetic gains in average clutch length, after normalization, were estimated to be 0.37 ± 0.02 and 0.33 ± 0.04 with the classical methods, and 0.46 ± 0.02 and 0.43 ± 0.01 with animal model methodology, for lines L1 and L2 respectively, which represented about 30% of the genetic standard deviation on the transformed scale. Selection response appeared to be faster in line L2, homozygous for the NA gene, but the final cumulated selection response for clutch length did not differ between the L1 and L2 lines at generation 16.
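
The two classical estimators mentioned in the abstract can be sketched numerically. This is a generic illustration with invented function names and data, not the study's REML machinery:

```python
def genetic_gain(selected_means, control_means):
    """Genetic gain per generation, estimated as the deviation of the
    selected line's mean from the contemporaneous control-line mean."""
    return [s - c for s, c in zip(selected_means, control_means)]

def realized_heritability(cumulative_response, cumulative_differential):
    """Realized heritability: cumulative selection response divided by
    the cumulative selection differential (both on the same trait scale)."""
    return cumulative_response / cumulative_differential
```

For example, a line that gains 1.1 trait units after a cumulative selection differential of 5.0 has a realized heritability of about 0.22, the order of magnitude reported for line L2.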

  5. Pseudo-polyprotein translated from the full-length ORF1 of capillovirus is important for pathogenicity, but a truncated ORF1 protein without variable and CP regions is sufficient for replication.

    Science.gov (United States)

    Hirata, Hisae; Yamaji, Yasuyuki; Komatsu, Ken; Kagiwada, Satoshi; Oshima, Kenro; Okano, Yukari; Takahashi, Shuichiro; Ugaki, Masashi; Namba, Shigetou

    2010-09-01

    The first open reading frame (ORF) of viruses in the genus Capillovirus encodes an apparently chimeric polyprotein containing conserved regions for replicase (Rep) and coat protein (CP), while other viruses in the family Flexiviridae have separate ORFs encoding these proteins. To investigate the role of the full-length ORF1 polyprotein of capilloviruses, we generated truncation mutants of ORF1 of apple stem grooving virus by inserting a termination codon into the variable region located between the putative Rep- and CP-coding regions. These mutants were capable of systemic infection, although their pathogenicity was attenuated. In vitro translation of ORF1 produced both the full-length polyprotein and the smaller Rep protein. The results of in vivo reporter assays suggested that the mechanism of this early termination is a ribosomal -1 frameshift occurring downstream of the conserved Rep domains. The mechanism of capillovirus gene expression and the very close evolutionary relationship between the genera Capillovirus and Trichovirus are discussed. Copyright (c) 2010. Published by Elsevier B.V.

  6. An investigation into the variables associated with length of hospital stay related to primary cleft lip and palate surgery and alveolar bone grafting.

    Science.gov (United States)

    Izadi, N; Haers, P E

    2012-10-01

    This retrospective study evaluated variables associated with length of stay (LOS) in hospital for 406 admissions for primary cleft lip, palate, and alveolus surgery between January 2007 and April 2009. Three patients were treated as day cases, 343 (84%) stayed one night, 48 (12%) stayed 2 nights, and 12 (3%) stayed > 2 nights. Poisson regression analysis showed no association between postoperative LOS and age, distance travelled, diagnosis, or type of operation (p > 0.2 for all variables). Sixty of the 406 patients stayed 2 nights or more postoperatively, mostly due to poor pain control and inadequate oral intake. Patients with palate repair were more likely to have a postoperative LOS > 1 night than patients with lip repair (p = 0.011). Four patients (1%), all of whom had undergone cleft palate surgery, were readmitted within 4 weeks of the operation due to respiratory obstruction or haemorrhage. Logistic regression showed that these readmissions were associated with a longer original postoperative LOS. This study shows that length of stay for primary cleft lip, palate and alveolus surgery can in most cases be limited to one night postoperatively, provided that adequate support can be provided at home. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  7. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behaviour of water-cooled and moderated reactors at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow distribution among parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure-drop or flowrate conditions, constant or varying in time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behaviour, is a one-dimensional, multi-channel code; its complement, FLID, is a single-channel, two-dimensional code. (authors)

  8. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  9. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    Science.gov (United States)

    Massey, J. L.

    1976-01-01

    Virtually all previously suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given, and their performance, in both computation and error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code and for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
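
A rate-1/2 binary convolutional encoder like those compared in the report emits two output bits per input bit, each the modulo-2 inner product of the encoder register with a generator polynomial. A toy sketch using the short constraint-length-3 code with generators 7 and 5 (octal), not one of the KE = 24 codes studied:

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate-1/2 convolutional encoder with constraint length K.

    Each input bit is shifted into a K-bit register; the two output bits
    are the parities of the register masked by generators g1 and g2.
    K - 1 zero tail bits flush the register back to the all-zero state.
    """
    state, out = 0, []
    for b in bits + [0] * (K - 1):
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out
```

Encoding the single bit 1 (plus the zero tail) gives the impulse response 11 10 11, i.e. the taps of the two generators interleaved.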

  10. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to improve when high-efficiency forward error correction codes are employed. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: first, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Second, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel, which is an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for the key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)

  11. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  12. [Reproductive effort, fattening index and yield of Arca zebra (Filibranchia: Arcidae) by length and its association with environmental variables, Sucre, Venezuela].

    Science.gov (United States)

    Lista, María; Velásquez, Carlos; Prieto, Antulio; Longart, Yelipza

    2016-06-01

    Arca zebra is a mollusk of commercial value and a major socioeconomic fishery in Northeastern Venezuela. The present study aimed to evaluate the reproductive effort (RE), fattening index (FI) and yield (Y) in different size groups of A. zebra from morro Chacopata, Venezuela. Monthly samplings were undertaken between June 2008 and June 2009, and the bivalves obtained were distributed into three length groups: I (30.1 to 50.0 mm), II (50.1 to 70.0 mm) and III (> 70.0 mm). Monthly RE, FI and Y were determined based on changes in the volume of fresh meat (VFM), intervalvar volume (IV), dry gonad biomass (DW), dry biomass of the organism without gonad (DWs), fresh biomass of meat (FBM) and total biomass including shell (TBIS). In addition, environmental variables such as temperature, salinity, dissolved oxygen, total organic and inorganic seston, and chlorophyll a were measured monthly. There was great variation in DW between length groups (most marked for groups II and III): it increased from June until late September 2008, decreased markedly in October 2008, recovered in the following months, and decreased again in January 2009, with a slight increase until May 2009; these changes were associated with variations in sea temperature. The weight of the gonad (DW) influenced RE, FI and Y, as these reached their peaks in the months of higher gonadal production, indicating the influence of temperature on A. zebra reproduction.

  13. Arc Length Coding by Interference of Theta Frequency Oscillations May Underlie Context-Dependent Hippocampal Unit Data and Episodic Memory Function

    Science.gov (United States)

    Hasselmo, Michael E.

    2007-01-01

    Many memory models focus on encoding of sequences by excitatory recurrent synapses in region CA3 of the hippocampus. However, data and modeling suggest an alternate mechanism for encoding of sequences in which interference between theta frequency oscillations encodes the position within a sequence based on spatial arc length or time. Arc length…

  14. Characterisation of Toxoplasma gondii isolates using polymerase chain reaction (PCR) and restriction fragment length polymorphism (RFLP) of the non-coding Toxoplasma gondii (TGR)-gene sequences

    DEFF Research Database (Denmark)

    Høgdall, Estrid; Vuust, Jens; Lind, Peter

    2000-01-01

    of using TGR gene variants as markers to distinguish among T. gondii isolates from different animals and different geographical sources. Based on the band patterns obtained by restriction fragment length polymorphism (RFLP) analysis of the polymerase chain reaction (PCR) amplified TGR sequences, the T...

  15. Coding completeness and quality of relative survival-related variables in the National Program of Cancer Registries Cancer Surveillance System, 1995-2008.

    Science.gov (United States)

    Wilson, Reda J; O'Neil, M E; Ntekop, E; Zhang, Kevin; Ren, Y

    2014-01-01

    Calculating accurate estimates of cancer survival is important for various analyses of cancer patient care and prognosis. Current US survival rates are estimated based on data from the National Cancer Institute's (NCI's) Surveillance, Epidemiology, and End Results (SEER) program, covering approximately 28 percent of the US population. The National Program of Cancer Registries (NPCR) covers about 96 percent of the US population. Using a population-based database with greater US population coverage to calculate survival rates at the national, state, and regional levels can further enhance the effective monitoring of cancer patient care and prognosis in the United States. The first step is to establish the coding completeness and coding quality of the NPCR data needed for calculating survival rates and conducting related validation analyses. Using data from the NPCR-Cancer Surveillance System (CSS) from 1995 through 2008, we assessed coding completeness and quality on 26 data elements that are needed to calculate cancer relative survival estimates and conduct related analyses. Data elements evaluated consisted of demographic, follow-up, prognostic, and cancer identification variables. Analyses were performed showing trends of these variables by diagnostic year, state of residence at diagnosis, and cancer site. Mean overall percent coding completeness by each NPCR central cancer registry, averaged across all data elements and diagnosis years, ranged from 92.3 percent to 100 percent. Results showing the mean percent coding completeness for the relative survival-related variables in NPCR data are presented. All data elements but 1 had a mean coding completeness greater than 90 percent, as did the mean completeness by data item group type. Statistically significant differences in coding completeness were found in the ICD revision number, cause of death, vital status, and date of last contact variables when comparing diagnosis years. The majority of data items had a coding

  16. Cytomegalovirus sequence variability, amplicon length, and DNase-sensitive non-encapsidated genomes are obstacles to standardization and commutability of plasma viral load results.

    Science.gov (United States)

    Naegele, Klaudia; Lautenschlager, Irmeli; Gosert, Rainer; Loginov, Raisa; Bir, Katia; Helanterä, Ilkka; Schaub, Stefan; Khanna, Nina; Hirsch, Hans H

    2018-04-22

    Cytomegalovirus (CMV) management post-transplantation relies on quantification in blood, but inter-laboratory and inter-assay variability impairs commutability. An international multicenter study demonstrated that variability is mitigated by standardizing plasma volumes, automating DNA extraction and amplification, and calibration to the 1st-CMV-WHO-International-Standard as in the FDA-approved Roche-CAP/CTM-CMV. However, Roche-CAP/CTM-CMV showed under-quantification and false-negative results in a quality assurance program (UK-NEQAS-2014). To evaluate factors contributing to quantification variability of CMV viral load and to develop optimized CMV-UL54-QNAT. The UL54 target of the UK-NEQAS-2014 variant was sequenced and compared to 329 available CMV GenBank sequences. Four Basel-CMV-UL54-QNAT assays of 361 bp, 254 bp, 151 bp, and 95 bp amplicons were developed that only differed in reverse primer positions. The assays were validated using plasmid dilutions, UK-NEQAS-2014 sample, as well as 107 frozen and 69 prospectively collected plasma samples from transplant patients submitted for CMV QNAT, with and without DNase-digestion prior to nucleic acid extraction. Eight of 43 mutations were identified as relevant in the UK-NEQAS-2014 target. All Basel-CMV-UL54 QNATs quantified the UK-NEQAS-2014 but revealed 10-fold increasing CMV loads as amplicon size decreased. The inverse correlation of amplicon size and viral loads was confirmed using 1st-WHO-International-Standard and patient samples. DNase pre-treatment reduced plasma CMV loads by >90% indicating the presence of unprotected CMV genomic DNA. Sequence variability, amplicon length, and non-encapsidated genomes obstruct standardization and commutability of CMV loads needed to develop thresholds for clinical research and management. Besides regular sequence surveys, matrix and extraction standardization, we propose developing reference calibrators using 100 bp amplicons. Copyright © 2018 Elsevier B.V. All

  17. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  18. Quantum optical coherence can survive photon losses using a continuous-variable quantum erasure-correcting code

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Sabuncu, Metin; Huck, Alexander

    2010-01-01

    A fundamental requirement for enabling fault-tolerant quantum information processing is an efficient quantum error-correcting code that robustly protects the involved fragile quantum states from their environment. Just as classical error-correcting codes are indispensable in today's information...... technologies, it is believed that quantum error-correcting codes will play a similarly crucial role in tomorrow's quantum information systems. Here, we report on the experimental demonstration of a quantum erasure-correcting code that overcomes the devastating effect of photon losses. Our quantum code is based...... on linear optics, and it protects a four-mode entangled mesoscopic state of light against erasures. We investigate two approaches for circumventing in-line losses, and demonstrate that both approaches exhibit transmission fidelities beyond what is possible by classical means. Because in-line attenuation...

  19. Clustering of Beijing genotype Mycobacterium tuberculosis isolates from the Mekong delta in Vietnam on the basis of variable number of tandem repeat versus restriction fragment length polymorphism typing

    Directory of Open Access Journals (Sweden)

    Huyen Mai NT

    2013-02-01

    Full Text Available Abstract Background In comparison to restriction fragment length polymorphism (RFLP) typing, variable number of tandem repeat (VNTR) typing is easier to perform, faster and yields results in a simple, numerical format. Therefore, this technique has gained recognition as the new international gold standard in typing of Mycobacterium tuberculosis. However, some reports indicated that VNTR typing may be less suitable for Beijing genotype isolates. We therefore compared the performance of internationally standardized RFLP and 24-loci VNTR typing to discriminate among 100 Beijing genotype isolates from southern Vietnam. Methods One hundred Beijing genotype strains defined by spoligotyping were randomly selected and typed by RFLP and VNTR typing. The discriminatory power of VNTR and RFLP typing was compared using the Bionumerics software. Results Among 95 Beijing strains available for analysis, 14 clusters comprising 34 strains and 61 unique profiles were identified in 24-loci VNTR typing (Hunter-Gaston Discrimination Index (HGDI) = 0.994). Thirteen clusters containing 31 strains and 64 unique patterns in RFLP typing (HGDI = 0.994) were found. Nine RFLP clusters were subdivided by VNTR typing and 12 VNTR clusters were split by RFLP. Five isolates (5%) revealing double alleles or no signal in two or more loci in VNTR typing could not be analyzed. Conclusions Overall, 24-loci VNTR typing and RFLP typing had a similarly high level of discrimination among 95 Beijing strains from southern Vietnam. However, loci VNTR 154, VNTR 2461 and VNTR 3171 hardly added any value to the level of discrimination.

  20. A Heuristic T-S Fuzzy Model for the Pumped-Storage Generator-Motor Using Variable-Length Tree-Seed Algorithm-Based Competitive Agglomeration

    Directory of Open Access Journals (Sweden)

    Jianzhong Zhou

    2018-04-01

    Full Text Available With the fast development of artificial intelligence techniques, data-driven modeling approaches are becoming hotspots in both academic research and engineering practice. This paper proposes a novel data-driven T-S fuzzy model to precisely describe the complicated dynamic behaviors of the pumped-storage generator-motor (PSGM). In the premise fuzzy partition of the proposed T-S fuzzy model, a novel variable-length tree-seed algorithm-based competitive agglomeration (VTSA-CA) method is presented to determine the optimal number of clusters automatically and to improve fuzzy clustering performance. Besides, in order to improve the modeling accuracy for the PSGM, the input and output formats in the T-S fuzzy model are selected by an economical parameter-controlled auto-regressive (CAR) model derived from a high-order transfer function of the PSGM, considering the distributed components in the water diversion system of the power plant. The effectiveness and superiority of the T-S fuzzy model for the PSGM under different working conditions are validated by performing comparative studies with both practical data and the conventional mechanistic model.

  1. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
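    The syndrome-based encoding described above (the encoder sending syndrome bits, the decoder combining them with correlated side information) can be illustrated with a toy example. The sketch below uses the (7,4) Hamming code's parity-check matrix rather than the paper's LDPC codes, doping bits, or sum-product decoder; all names and values are illustrative:

    ```python
    import numpy as np

    # Toy Slepian-Wolf coding: the encoder sends only the 3-bit syndrome of the
    # 7-bit source x; the decoder recovers x from that syndrome plus side
    # information y known to differ from x in at most one bit.
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])   # column j is j written in binary

    def syndrome(v):
        return H @ v % 2

    def sw_decode(syn_x, y):
        diff = (syndrome(y) + syn_x) % 2     # equals H @ (x XOR y)
        if not diff.any():
            return y.copy()
        pos = int(4 * diff[0] + 2 * diff[1] + diff[2]) - 1  # flipped bit (0-based)
        x_hat = y.copy()
        x_hat[pos] ^= 1
        return x_hat

    x = np.array([1, 0, 1, 1, 0, 0, 1])
    y = x.copy()
    y[4] ^= 1                                # side info with one bit flipped
    assert (sw_decode(syndrome(x), y) == x).all()   # 3 bits sent instead of 7
    ```

    The rate saving here (3 bits instead of 7) is the same idea, in miniature, as the bit rate saving the paper reports against fixed-length coding.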

  2. Review of the margins for ASME code fatigue design curve - effects of surface roughness and material variability

    International Nuclear Information System (INIS)

    Chopra, O. K.; Shack, W. J.

    2003-01-01

    The ASME Boiler and Pressure Vessel Code provides rules for the construction of nuclear power plant components. The Code specifies fatigue design curves for structural materials. However, the effects of light water reactor (LWR) coolant environments are not explicitly addressed by the Code design curves. Existing fatigue strain-vs.-life (ε-N) data illustrate potentially significant effects of LWR coolant environments on the fatigue resistance of pressure vessel and piping steels. This report provides an overview of the existing fatigue ε-N data for carbon and low-alloy steels and wrought and cast austenitic SSs to define the effects of key material, loading, and environmental parameters on the fatigue lives of the steels. Experimental data are presented on the effects of surface roughness on the fatigue life of these steels in air and LWR environments. Statistical models are presented for estimating the fatigue ε-N curves as a function of the material, loading, and environmental parameters. Two methods for incorporating environmental effects into the ASME Code fatigue evaluations are discussed. Data available in the literature have been reviewed to evaluate the conservatism in the existing ASME Code fatigue evaluations. A critical review of the margins for ASME Code fatigue design curves is presented.

  3. Review of the margins for ASME code fatigue design curve - effects of surface roughness and material variability.

    Energy Technology Data Exchange (ETDEWEB)

    Chopra, O. K.; Shack, W. J.; Energy Technology

    2003-10-03

    The ASME Boiler and Pressure Vessel Code provides rules for the construction of nuclear power plant components. The Code specifies fatigue design curves for structural materials. However, the effects of light water reactor (LWR) coolant environments are not explicitly addressed by the Code design curves. Existing fatigue strain-vs.-life (ε-N) data illustrate potentially significant effects of LWR coolant environments on the fatigue resistance of pressure vessel and piping steels. This report provides an overview of the existing fatigue ε-N data for carbon and low-alloy steels and wrought and cast austenitic SSs to define the effects of key material, loading, and environmental parameters on the fatigue lives of the steels. Experimental data are presented on the effects of surface roughness on the fatigue life of these steels in air and LWR environments. Statistical models are presented for estimating the fatigue ε-N curves as a function of the material, loading, and environmental parameters. Two methods for incorporating environmental effects into the ASME Code fatigue evaluations are discussed. Data available in the literature have been reviewed to evaluate the conservatism in the existing ASME Code fatigue evaluations. A critical review of the margins for ASME Code fatigue design curves is presented.

  4. The impact of growing-season length variability on carbon assimilation and evapotranspiration over 88 years in the eastern US deciduous forest

    Science.gov (United States)

    White; Running; Thornton

    1999-02-01

    Recent research suggests that increases in growing-season length (GSL) in mid-northern latitudes may be partially responsible for increased forest growth and carbon sequestration. We used the BIOME-BGC ecosystem model to investigate the impacts of including a dynamically regulated GSL on simulated carbon and water balance over a historical 88-year record (1900-1987) for 12 sites in the eastern USA deciduous broadleaf forest. For individual sites, the predicted GSL regularly varied by more than 15 days. When grouped into three climatic zones, GSL variability was still large and rapid. There is a recent trend in colder, northern sites toward a longer GSL, but not in moderate and warm climates. The results show that, for all sites, prediction of a long GSL versus using the mean GSL increased net ecosystem production (NEP), gross primary production (GPP), and evapotranspiration (ET); conversely a short GSL is predicted to decrease these parameters. On an absolute basis, differences in GPP between the dynamic and mean GSL simulations were larger than the differences in NEP. As a percentage difference, though, NEP was much more sensitive to changes in GSL than were either GPP or ET. On average, a 1-day change in GSL changed NEP by 1.6%, GPP by 0.5%, and ET by 0.2%. Predictions of NEP and GPP in cold climates were more sensitive to changes in GSL than were predictions in warm climates. ET was not similarly sensitive. First, our results strongly agree with field measurements showing a high correlation between NEP and dates of spring growth, and second they suggest that persistent increases in GSL may lead to long-term increases in carbon storage.

  5. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Full Text Available Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable-length codes for joint source-channel decoding. In this paper, we improve the performance of residual-redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.
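    The error-propagation problem that motivates this work can be seen with a minimal prefix code. The four-symbol code below is hypothetical (it is not taken from the paper), but it shows how one flipped bit desynchronizes a variable-length-coded stream:

    ```python
    # A minimal prefix (variable-length) code and a greedy decoder.
    code = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
    decode_map = {v: k for k, v in code.items()}

    def encode(msg):
        return ''.join(code[ch] for ch in msg)

    def decode(bits):
        out, buf = [], ''
        for b in bits:
            buf += b
            if buf in decode_map:       # prefix property: first match is a symbol
                out.append(decode_map[buf])
                buf = ''
        return ''.join(out)

    bits = encode('abcad')              # '0' + '10' + '110' + '0' + '111'
    corrupted = '1' + bits[1:]          # flip only the very first bit
    assert decode(bits) == 'abcad'
    assert decode(corrupted) != 'abcad' # decoder loses sync; later symbols change too
    ```

    A single bit error corrupts not just one symbol but the parsing of everything after it, which is exactly why list decoding with an outer CRC, as the paper proposes, pays off.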

  6. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...
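    The role of the L x L coding matrices can be sketched in a few lines. The matrices and packets below are arbitrary illustrative values over GF(2); they are not produced by the paper's algorithms:

    ```python
    import numpy as np

    # An intermediate node in vector network coding: two incoming length-L
    # packets are combined via L x L coding matrices over GF(2), the
    # vector analogue of multiplying by scalar coding coefficients.
    L = 3
    rng = np.random.default_rng(0)
    A = rng.integers(0, 2, (L, L))   # coding matrix for the first incoming packet
    B = rng.integers(0, 2, (L, L))   # coding matrix for the second incoming packet

    p1 = np.array([1, 0, 1])         # incoming packets (length-L bit vectors)
    p2 = np.array([0, 1, 1])

    out = (A @ p1 + B @ p2) % 2      # outgoing packet: GF(2) matrix-weighted sum
    ```

    Choosing A and B so that every sink can invert the accumulated transfer matrix is the optimization problem the paper's algorithms address.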

  7. Comparative Study of IS6110 Restriction Fragment Length Polymorphism and Variable-Number Tandem-Repeat Typing of Mycobacterium tuberculosis Isolates in the Netherlands, Based on a 5-Year Nationwide Survey

    NARCIS (Netherlands)

    Beer, J.L. de; Ingen, J. van; Vries, G. de; Erkens, C.; Sebek, M.; Mulder, A.; Sloot, R.; Brandt, A.M. van den; Enaimi, M.; Kremer, K.; Supply, P.; Soolingen, D. van

    2013-01-01

    In order to switch from IS6110 and polymorphic GC-rich repetitive sequence (PGRS) restriction fragment length polymorphism (RFLP) to 24-locus variable-number tandem-repeat (VNTR) typing of Mycobacterium tuberculosis complex isolates in the national tuberculosis control program in The Netherlands, a

  8. Comparative study of IS6110 restriction fragment length polymorphism and variable-number tandem-repeat typing of Mycobacterium tuberculosis isolates in the Netherlands, based on a 5-year nationwide survey

    NARCIS (Netherlands)

    de Beer, Jessica L.; van Ingen, Jakko; de Vries, Gerard; Erkens, Connie; Sebek, Maruschka; Mulder, Arnout; Sloot, Rosa; van den Brandt, Anne-Marie; Enaimi, Mimount; Kremer, Kristin; Supply, Philip; van Soolingen, Dick

    2013-01-01

    In order to switch from IS6110 and polymorphic GC-rich repetitive sequence (PGRS) restriction fragment length polymorphism (RFLP) to 24-locus variable-number tandem-repeat (VNTR) typing of Mycobacterium tuberculosis complex isolates in the national tuberculosis control program in The Netherlands, a

  9. Improved Design of Unequal Error Protection LDPC Codes

    Directory of Open Access Journals (Sweden)

    Sandberg Sara

    2010-01-01

    Full Text Available We propose an improved method for designing unequal error protection (UEP) low-density parity-check (LDPC) codes. The method is based on density evolution. The degree distribution with the best UEP properties is found, under the constraint that the threshold should not exceed the threshold of a non-UEP code plus some threshold offset. For different codeword lengths and different construction algorithms, we search for good threshold offsets for the UEP code design. The choice of the threshold offset is based on the average a posteriori variable node mutual information. Simulations reveal the counterintuitive result that the short-to-medium length codes designed with a suitable threshold offset all outperform the corresponding non-UEP codes in terms of average bit-error rate. The proposed codes are also compared to other UEP-LDPC codes found in the literature.

  10. Fundamental length and relativistic length

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1988-01-01

    It is noted that the introduction of a fundamental length contradicts the conventional representations concerning the contraction of the longitudinal size of fast-moving objects. The use of the concept of relativistic length and the following 'elongation formula' permits one to solve this problem.

  11. Empirical evaluation of humpback whale telomere length estimates; quality control and factors causing variability in the singleplex and multiplex qPCR methods

    DEFF Research Database (Denmark)

    Olsen, Morten Tange; Bérubé, Martine; Robbins, Jooke

    2012-01-01

    BACKGROUND: Telomeres, the protective caps of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method is one of the most common methods for telomere length estimation, but has received recent...... steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. RESULTS...... to 40% depending on assay and quantification method; however, this variation only affected telomere length estimates in the worst performing assays. CONCLUSION: Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control...

  12. Flame Length

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — Flame length was modeled using FlamMap, an interagency fire behavior mapping and analysis program that computes potential fire behavior characteristics. The tool...

  13. Fundamental length

    International Nuclear Information System (INIS)

    Pradhan, T.

    1975-01-01

    The concept of fundamental length was first put forward by Heisenberg for purely dimensional reasons. From a study of the observed masses of the elementary particles known at that time, it is surmised that this length should be of the order of magnitude l ≈ 10^-13 cm. It was Heisenberg's belief that the introduction of such a fundamental length would eliminate the divergence difficulties from relativistic quantum field theory by cutting off the high energy regions of the 'proper fields'. Since the divergence difficulties arise primarily due to an infinite number of degrees of freedom, one simple remedy would be the introduction of a principle that limits these degrees of freedom by removing the effectiveness of the waves with a frequency exceeding a certain limit without destroying the relativistic invariance of the theory. The principle can be stated as follows: It is in principle impossible to invent an experiment of any kind that will permit a distinction between the positions of two particles at rest, the distance between which is below a certain limit. A more elegant way of introducing fundamental length into quantum theory is through commutation relations between two position operators. In quantum field theory such as quantum electrodynamics, it can be introduced through the commutation relation between two interpolating photon fields (vector potentials). (K.B.)

  14. Cyclic codes of length 2^m

    Indian Academy of Sciences (India)


  15. CANAL code

    International Nuclear Information System (INIS)

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr

  16. Modification and application of TOUGH2 as a variable-density, saturated-flow code and comparison to SWIFT II results

    International Nuclear Information System (INIS)

    Christian-Frear, T.L.; Webb, S.W.

    1995-01-01

    Human intrusion scenarios at the Waste Isolation Pilot Plant (WIPP) involve penetration of the repository and an underlying brine reservoir by a future borehole. Brine and gas from the brine reservoir and the repository may flow up the borehole and into the overlying Culebra formation, which is saturated with water containing different amounts of dissolved solids, resulting in a spatially varying density. Current modeling approaches involve perturbing a steady-state Culebra flow field by inflow of gas and/or brine from a breach borehole that has passed through the repository. Previous studies simulating steady-state flow in the Culebra have been done. One specific study by LaVenue et al. (1990) used the SWIFT II code, a single-phase flow and transport code, to develop the steady-state flow field. Because gas may also be present in the fluids from the intrusion borehole, a two-phase code such as TOUGH2 can be used to determine the effect that emitted fluids may have on the steady-state Culebra flow field. Thus a comparison between TOUGH2 and SWIFT II was prompted. In order to compare the two codes and to evaluate the influence of gas on flow in the Culebra, modifications were made to TOUGH2. Modifications were performed by the authors to allow for element-specific values of permeability, porosity, and elevation. The analysis also used a new equation of state module for a water-brine-air mixture, EOS7 (Pruess, 1991), which was developed to simulate variable water densities by assuming a miscible mixture of water and brine phases and allows for element-specific brine concentration in the INCON file

  17. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm. The functions of the algorithm's FORTRAN subroutines and variables are outlined

  18. Essential idempotents and simplex codes

    Directory of Open Access Journals (Sweden)

    Gladys Chalom

    2017-01-01

    Full Text Available We define essential idempotents in group algebras and use them to prove that every minimal abelian non-cyclic code is a repetition code. Also we use them to prove that every minimal abelian code is equivalent to a minimal cyclic code of the same length. Finally, we show that a binary cyclic code is simplex if and only if it is of length of the form $n=2^k-1$ and is generated by an essential idempotent.
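    The simplex codes appearing in this characterization (length $n = 2^k - 1$) can be constructed numerically. The sketch below uses the standard generator-matrix construction, whose columns are all nonzero k-bit vectors, rather than the paper's idempotent machinery; it checks the defining property that every nonzero codeword has the same weight:

    ```python
    import itertools

    import numpy as np

    # Binary simplex code of length n = 2^k - 1: generator columns are all
    # nonzero k-bit vectors, so every nonzero codeword has weight 2^(k-1).
    k = 3
    cols = [c for c in itertools.product([0, 1], repeat=k) if any(c)]
    G = np.array(cols).T                         # 3 x 7 generator matrix
    weights = {int(np.count_nonzero(np.array(m) @ G % 2))
               for m in itertools.product([0, 1], repeat=k) if any(m)}
    print(weights)                               # → {4}, i.e. 2^(k-1)
    ```

    The constant-weight property is what makes the simplex code the dual of the Hamming code and ties it to the length condition $n = 2^k - 1$ in the abstract.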

  19. Ensemble Weight Enumerators for Protograph LDPC Codes

    Science.gov (United States)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes which have minimum distance that grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. In this paper the derived results on ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  20. Non-stationary recruitment dynamics of rainbow smelt: the influence of environmental variables and variation in size structure and length-at-maturation

    Science.gov (United States)

    Feiner, Zachary S.; Bunnell, David B.; Hook, Tomas O.; Madenjian, Charles P.; Warner, David M.; Collingsworth, Paris D.

    2015-01-01

    Fish stock-recruitment dynamics may be difficult to elucidate because of nonstationary relationships resulting from shifting environmental conditions and fluctuations in important vital rates such as individual growth or maturation. The Great Lakes have experienced environmental stressors that may have changed population demographics and stock-recruitment relationships while causing the declines of several prey fish species, including rainbow smelt (Osmerus mordax). We investigated changes in the size and maturation of rainbow smelt in Lake Michigan and Lake Huron and recruitment dynamics of the Lake Michigan stock over the past four decades. Mean lengths and length-at-maturation of rainbow smelt generally declined over time in both lakes. To evaluate recruitment, we used both a Ricker model and a Kalman filter-random walk (KF-RW) model, which incorporated nonstationarity in stock productivity by allowing the productivity term to vary over time. The KF-RW model explained nearly four times more variation in recruitment than the Ricker model, indicating that the productivity of the Lake Michigan stock has increased. By accounting for this nonstationarity, we were able to identify significant variations in stock productivity, evaluate its importance to rainbow smelt recruitment, and speculate on potential environmental causes for the shift. Our results suggest that investigating mechanisms driving nonstationary shifts in stock-recruitment relationships can provide valuable insights into temporal variation in fish population dynamics.
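    The two models compared above can be sketched in a few lines: the classical Ricker curve, and a non-stationary variant in the spirit of the KF-RW model where log-productivity drifts as a random walk. All parameter values and the random-walk step size below are illustrative, not estimates from the Lake Michigan data:

    ```python
    import numpy as np

    def ricker(S, a, b):
        # Ricker stock-recruitment: R = a * S * exp(-b * S)
        return a * S * np.exp(-b * S)

    rng = np.random.default_rng(1)
    years = 40
    S = rng.uniform(50, 500, years)            # hypothetical spawner abundance index
    # Stationary model: fixed productivity a.
    R_fixed = ricker(S, a=2.0, b=0.002)
    # Non-stationary variant: log-productivity follows a random walk over time.
    log_a = np.log(2.0) + np.cumsum(rng.normal(0, 0.05, years))
    R_drift = ricker(S, np.exp(log_a), b=0.002)
    ```

    Letting the productivity term vary year to year is what allows the KF-RW formulation to absorb environmental regime shifts that a single fixed Ricker curve cannot.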

  1. Cross-sectional study on the weight and length of infants in the interior of the State of São Paulo, Brazil: associations with sociodemographic variables and breastfeeding

    Directory of Open Access Journals (Sweden)

    Julia Laura Delbue Bernardi

    Full Text Available CONTEXT AND OBJECTIVE: Increasing obesity is starting to occur among Brazilians. The aim of this study was to investigate the weight and length of children under two years of age in relation to sociodemographic variables and according to whether they were breastfed. DESIGN AND SETTING: Cross-sectional randomized study conducted in 2004-2005, based on the declaration of live births (SINASC) in Campinas, Brazil. METHODS: 2,857 mothers of newborns were interviewed and answered a questionnaire seeking socioeconomic and breastfeeding information. The newborns' weights and lengths were measured at the end of the interviews and the body mass index was calculated. Percentiles (< 15 and > 85) and Z-scores (< -1 and > +1) were used for classification based on the new growth charts recommended by WHO (2006). The log-rank test, multiple linear regression and binomial test (Z) were used. The statistical significance level used was 5%. RESULTS: The predominant social level was class C. The median for exclusive breastfeeding was 90 days; 61.25% of the children were between P15 and P85 for body mass index and 61.12% for length, respectively. Children whose mothers studied for nine to eleven years and children whose mothers were unemployed presented lower weight. Children whose mothers worked in health-related professions presented lower length when correlated with breastfeeding. CONCLUSION: The breastfeeding, maternal schooling and maternal occupation levels had an influence on nutritional status and indicated that obesity is occurring in early childhood among the infants living in the municipality.

  2. Cross-sectional study on the weight and length of infants in the interior of the state of São Paulo, Brazil: associations with sociodemographic variables and breastfeeding.

    Science.gov (United States)

    Bernardi, Julia Laura Delbue; Jordão, Regina Esteves; Barros Filho, Antônio de Azevedo

    2009-07-01

    Increasing obesity is starting to occur among Brazilians. The aim of this study was to investigate the weight and length of children under two years of age in relation to sociodemographic variables and according to whether they were breastfed. Cross-sectional randomized study conducted in 2004-2005, based on the declaration of live births (SINASC) in Campinas, Brazil. 2,857 mothers of newborns were interviewed and answered a questionnaire seeking socioeconomic and breastfeeding information. The newborns' weights and lengths were measured at the end of the interviews and the body mass index was calculated. Percentiles (< 15 and > 85) and Z-scores (< -1 and > +1) were used for classification based on the new growth charts recommended by WHO (2006). The log-rank test, multiple linear regression and binomial test (Z) were used. The statistical significance level used was 5%. The predominant social level was class C. The median for exclusive breastfeeding was 90 days; 61.25% of the children were between P15 and P85 for body mass index and 61.12% for length, respectively. Children whose mothers studied for nine to eleven years and children whose mothers were unemployed presented lower weight. Children whose mothers worked in health-related professions presented lower length when correlated with breastfeeding. The breastfeeding, maternal schooling and maternal occupation levels had an influence on nutritional status and indicated that obesity is occurring in early childhood among the infants living in the municipality.

  3. An inversion formula for the exponential Radon transform in spatial domain with variable focal-length fan-beam collimation geometry

    International Nuclear Information System (INIS)

    Wen Junhai; Liang Zhengrong

    2006-01-01

    Inverting the exponential Radon transform has a potential use for SPECT (single photon emission computed tomography) imaging in cases where a uniform attenuation can be approximated, such as in brain and abdominal imaging. Tretiak and Metz derived in the frequency domain an explicit inversion formula for the exponential Radon transform in two dimensions for parallel-beam collimator geometry. Progress has been made to extend the inversion formula for fan-beam and varying focal-length fan-beam (VFF) collimator geometries. These previous fan-beam and VFF inversion formulas require a spatially variant filtering operation, which complicates the implementation and imposes a heavy computing burden. In this paper, we present an explicit inversion formula, in which a spatially invariant filter is involved. The formula is derived and implemented in the spatial domain for VFF geometry (where parallel-beam and fan-beam geometries are two special cases). Phantom simulations mimicking SPECT studies demonstrate its accuracy in reconstructing the phantom images and efficiency in computation for the considered collimator geometries
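    For reference, the exponential Radon transform discussed above has the following standard form (notation ours, not quoted from the paper):

    ```latex
    (R_\mu f)(\varphi, s) = \int_{-\infty}^{\infty}
        f\bigl(s\,\theta + t\,\theta^{\perp}\bigr)\, e^{\mu t}\, \mathrm{d}t,
    \qquad
    \theta = (\cos\varphi, \sin\varphi), \quad
    \theta^{\perp} = (-\sin\varphi, \cos\varphi),
    ```

    where $\mu$ is the uniform attenuation coefficient. For $\mu = 0$ this reduces to the classical Radon transform; the Tretiak-Metz result inverts the $\mu \neq 0$ case for parallel-beam geometry, which the paper extends to fan-beam and variable focal-length fan-beam geometries with a spatially invariant filter.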

  4. Some new ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2017-07-01

    Full Text Available Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
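    The parameters $[n,k,d]_q$ defined in this record can be checked by brute force for small codes. A minimal sketch (the generator matrix below is the standard ternary "tetracode", a well-known $[4,2,3]_3$ code used here only as an illustration; it is not one of the paper's 22 new codes):

```python
from itertools import product

def codewords(gen, q):
    """All q^k words of the linear code spanned by the generator matrix rows (mod q)."""
    k, n = len(gen), len(gen[0])
    words = set()
    for coeffs in product(range(q), repeat=k):
        w = tuple(sum(c * row[j] for c, row in zip(coeffs, gen)) % q for j in range(n))
        words.add(w)
    return words

def min_distance(gen, q):
    """For a linear code, minimum distance = minimum nonzero Hamming weight."""
    return min(sum(x != 0 for x in w) for w in codewords(gen, q) if any(w))

# The ternary tetracode: n = 4, k = 2, and the check below confirms d = 3.
tetracode = [[1, 0, 1, 1], [0, 1, 1, 2]]
```

    For the tetracode, `codewords(tetracode, 3)` yields 3^2 = 9 words and `min_distance(tetracode, 3)` returns 3, matching the $[4,2,3]_3$ parameters.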

  5. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the

  6. Comparison of Variable-Number Tandem-Repeat Markers typing and IS1245 Restriction Fragment Length Polymorphism fingerprinting of Mycobacterium avium subsp. hominissuis from human and porcine origins

    Directory of Open Access Journals (Sweden)

    Marttila Harri

    2010-03-01

    Full Text Available Abstract Background Animal mycobacterioses are regarded as a potential zoonotic risk and cause economic losses worldwide. M. avium subsp. hominissuis is a slow-growing subspecies found in mycobacteria-infected humans and pigs, and rapid and discriminatory typing methods are therefore needed for epidemiological studies. The genetic similarity of M. avium subsp. hominissuis from human and porcine origins had not previously been studied using two different typing methods. The objective of this study was to compare IS1245 RFLP pattern and MIRU-VNTR typing for studying the genetic relatedness of M. avium strains isolated from slaughter pigs and humans in Finland, with regard to public health aspects. Methods A novel PCR-based genotyping method, variable number tandem repeat (VNTR) typing of eight mycobacterial interspersed repetitive units (MIRUs), was evaluated for its ability to characterize Finnish Mycobacterium avium subsp. hominissuis strains isolated from pigs (n = 16) and humans (n = 13), and the results were compared with those obtained by the conventional IS1245 RFLP method. Results The MIRU-VNTR results showed a discriminatory index (DI) of 0.92 and the IS1245 RFLP resulted in a DI of 0.98. The combined DI for both methods was 0.98. The MIRU-VNTR test has the advantages of being simple, reproducible and non-subjective, which makes it suitable for large-scale screening of M. avium strains. Conclusions Both typing methods demonstrated a high degree of similarity between the strains of human and porcine origin. The parallel application of the methods adds epidemiological value to the comparison of the strains and their origins. The present approach and results support the hypothesis that there is a common source of M. avium subsp. hominissuis infection for pigs and humans, or alternatively that one species may be the infective source for the other.

  7. Adaptable recursive binary entropy coding technique

    Science.gov (United States)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

    We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.

  8. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs have been considered recently as joint source-channel codes (JSCCs for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC or a fixed-length code (FLC. This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an -ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing a per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  9. Telomere Length and Mortality

    DEFF Research Database (Denmark)

    Kimura, Masayuki; Hjelmborg, Jacob V B; Gardner, Jeffrey P

    2008-01-01

    Leukocyte telomere length, representing the mean length of all telomeres in leukocytes, is ostensibly a bioindicator of human aging. The authors hypothesized that the shortest telomeres, rather than the mean leukocyte telomere length, might better forecast imminent mortality in elderly people. They performed mortality...

  10. Some Families of Asymmetric Quantum MDS Codes Constructed from Constacyclic Codes

    Science.gov (United States)

    Huang, Yuanyuan; Chen, Jianzhang; Feng, Chunhui; Chen, Riqing

    2018-02-01

    Quantum maximal-distance-separable (MDS) codes that satisfy the quantum Singleton bound with different lengths have been constructed by some researchers. In this paper, seven families of asymmetric quantum MDS codes are constructed by using constacyclic codes. We weaken the case of Hermitian dual-containing codes that can be applied to construct asymmetric quantum MDS codes with parameters [[n,k,dz/dx

  11. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    Full Text Available A new code structure for spectral amplitude coding optical code division multiple access systems based on double-weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is a variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. Much better performance can be obtained by using the EDW code compared with existing codes such as Hadamard and modified frequency-hopping (MFH) codes. Both theoretical analysis and simulation show that EDW achieves much better performance than Hadamard and MFH codes.

  12. Frequent LOH at hMLH1, a highly variable SNP in hMSH3, and negligible coding instability in ovarian cancer

    DEFF Research Database (Denmark)

    Arzimanoglou, I.I.; Hansen, L.L.; Chong, D.

    2002-01-01

    the mismatch DNA repair genes in ovarian cancer (OC), using a sensitive, accurate and reliable protocol we have developed. MATERIALS AND METHODS: A combination of high-resolution GeneScan software analysis and automated DNA cycle sequencing was used. RESULTS: Negligible coding MSI was observed in selected...

  13. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    Science.gov (United States)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.

  14. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  15. Comparative Study of IS6110 Restriction Fragment Length Polymorphism and Variable-Number Tandem-Repeat Typing of Mycobacterium tuberculosis Isolates in the Netherlands, Based on a 5-Year Nationwide Survey

    Science.gov (United States)

    de Beer, Jessica L.; van Ingen, Jakko; de Vries, Gerard; Erkens, Connie; Sebek, Maruschka; Mulder, Arnout; Sloot, Rosa; van den Brandt, Anne-Marie; Enaimi, Mimount; Kremer, Kristin; Supply, Philip

    2013-01-01

    In order to switch from IS6110 and polymorphic GC-rich repetitive sequence (PGRS) restriction fragment length polymorphism (RFLP) to 24-locus variable-number tandem-repeat (VNTR) typing of Mycobacterium tuberculosis complex isolates in the national tuberculosis control program in The Netherlands, a detailed evaluation of discriminatory power and agreement with findings in a cluster investigation was performed on 3,975 tuberculosis cases during the period of 2004 to 2008. The level of discrimination of the two typing methods did not differ substantially: RFLP typing yielded 2,733 distinct patterns compared to 2,607 in VNTR typing. The global concordance, defined as isolates labeled unique or identically distributed in clusters by both methods, amounted to 78.5% (n = 3,123). Of the remaining 855 cases, 12% (n = 479) of the cases were clustered only by VNTR, 7.7% (n = 305) only by RFLP typing, and 1.8% (n = 71) revealed different cluster compositions in the two approaches. A cluster investigation was performed for 87% (n = 1,462) of the cases clustered by RFLP. For the 740 cases with confirmed or presumed epidemiological links, 92% were concordant with VNTR typing. In contrast, only 64% of the 722 cases without an epidemiological link but clustered by RFLP typing were also clustered by VNTR typing. We conclude that VNTR typing has a discriminatory power equal to that of IS6110 RFLP typing but is in better agreement with the findings of cluster investigations performed on RFLP-defined clusters. Both aspects make VNTR typing a suitable method for tuberculosis surveillance systems. PMID:23363841

  16. Vocal tract length and formant frequency dispersion correlate with body size in rhesus macaques.

    Science.gov (United States)

    Fitch, W T

    1997-08-01

    Body weight, length, and vocal tract length were measured for 23 rhesus macaques (Macaca mulatta) of various sizes using radiographs and computer graphic techniques. Linear predictive coding analysis of tape-recorded threat vocalizations was used to determine vocal tract resonance frequencies ("formants") for the same animals. A new acoustic variable is proposed, "formant dispersion," which should theoretically depend upon vocal tract length. Formant dispersion is the averaged difference between successive formant frequencies, and was found to be closely tied to both vocal tract length and body size. Despite the common claim that voice fundamental frequency (F0) provides an acoustic indication of body size, repeated investigations have failed to support such a relationship in many vertebrate species including humans. Formant dispersion, unlike voice pitch, is proposed to be a reliable predictor of body size in macaques, and probably many other species.
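    Formant dispersion as defined in this record (the averaged difference between successive formant frequencies) reduces to a one-line telescoping computation, and for an idealized uniform tube the dispersion is c/2L, which can be inverted to estimate vocal tract length. A minimal sketch; the formant values and speed of sound below are illustrative assumptions, not data from the study:

```python
SPEED_OF_SOUND_CM_S = 35_000.0  # approximate speed of sound in warm, humid air

def formant_dispersion(formants_hz):
    """Averaged difference between successive formants; the sum telescopes
    to (F_N - F_1) / (N - 1)."""
    f = sorted(formants_hz)
    diffs = [hi - lo for lo, hi in zip(f, f[1:])]
    return sum(diffs) / len(diffs)

def vocal_tract_length_cm(formants_hz):
    """Invert the uniform-tube relation Df = c / (2 L) to estimate tract length."""
    return SPEED_OF_SOUND_CM_S / (2.0 * formant_dispersion(formants_hz))

# An idealized 17.5 cm uniform tube has formants near 500, 1500, 2500, 3500 Hz,
# so the dispersion is 1000 Hz and the estimated length is 17.5 cm.
vtl = vocal_tract_length_cm([500.0, 1500.0, 2500.0, 3500.0])
```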

  17. Efficient Coding of Information: Huffman Coding -RE ...

    Indian Academy of Sciences (India)

    to a stream of equally-likely symbols so as to recover the original stream in the event of errors. The for- ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x written as ...
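    The snippet above is truncated, but the Huffman construction it refers to is standard: greedily merge the two least-likely subtrees, prefixing a '0' or '1' to the codewords in each. A minimal sketch (symbols and weights are illustrative):

```python
import heapq

def huffman_code(freqs):
    """Build a binary Huffman code from a dict of symbol -> weight."""
    # Each heap entry: (total weight, tie-breaker, {symbol: codeword-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # two least-likely subtrees
        w2, tie, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
    return heap[0][2]

codes = huffman_code({"a": 5, "b": 2, "c": 1, "d": 1})
# The most frequent symbol gets the shortest codeword (here: length 1 for 'a',
# 2 for 'b', 3 for 'c' and 'd'), and the code is prefix-free.
```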

  18. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover we conjecture that the canonical partition satisfies such a hypothesis. Finally we consider also some relationships between coding partitions and varieties of codes.
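    Unique decipherability (UD), the baseline notion this record generalizes, is itself decidable for finite codes via the classical Sardinas-Patterson test. A compact sketch of that test (not the paper's partition algorithm); the example codes are illustrative:

```python
def quotient(A, B):
    """A^{-1}B: residual suffixes b[len(a):] for a in A that is a prefix of b in B."""
    return {b[len(a):] for a in A for b in B if b.startswith(a)}

def uniquely_decipherable(code):
    """Sardinas-Patterson test: the code is UD iff the empty string never
    appears among the iterated dangling-suffix sets."""
    code = set(code)
    dangling = quotient(code, code) - {""}   # S_1: proper dangling suffixes
    seen = set()
    while dangling and frozenset(dangling) not in seen:
        if "" in dangling:
            return False                     # some message has two parses
        seen.add(frozenset(dangling))
        dangling = quotient(code, dangling) | quotient(dangling, code)
    return "" not in dangling

# {0, 10, 110} is a prefix code, hence UD; {0, 01, 10} is not UD
# (the string 010 parses as 0.10 and as 01.0).
```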

  19. Chord length distribution for a compound capsule

    International Nuclear Information System (INIS)

    Pitřík, Pavel

    2017-01-01

    Chord length distribution is an important factor in the calculation of ionisation chamber responses. This article describes Monte Carlo calculations of the chord length distribution for a non-convex compound capsule. A Monte Carlo code was set up for generation of random chords and calculation of their lengths based on the input number of generations and cavity dimensions. The code was written in JavaScript and can be executed in the majority of HTML viewers. The plot of the occurrence of chords of different lengths has 3 peaks. It was found that the compound capsule cavity cannot be simply replaced with a spherical cavity of a triangular design. Furthermore, the compound capsule cavity is directionally dependent, which must be taken into account in calculations involving non-isotropic fields of primary particles in the beam, unless equilibrium of the secondary charged particles is attained. (orig.)
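    The Monte Carlo recipe described (generate random chords, tally their lengths) can be illustrated on a simpler convex cavity. A sketch for a unit sphere, taking a chord as the segment between two independent uniform points on the surface, which is one of several possible chord-randomness models and gives a mean chord length of 4r/3; the capsule geometry of the paper is not reproduced here:

```python
import math
import random

def random_chord_length(radius, rng):
    """Length of one random chord: the segment between two independent
    uniform points on the sphere's surface."""
    def surface_point():
        z = rng.uniform(-1.0, 1.0)            # uniform in z gives uniform area
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        return (radius * r * math.cos(phi), radius * r * math.sin(phi), radius * z)
    return math.dist(surface_point(), surface_point())

rng = random.Random(42)
samples = [random_chord_length(1.0, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
# Under this model the mean chord length of a unit sphere is 4/3.
```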

  20. Gray Code for Cayley Permutations

    Directory of Open Access Journals (Sweden)

    J.-L. Baril

    2003-10-01

    Full Text Available A length-n Cayley permutation p of a totally ordered set S is a length-n sequence of elements from S, subject to the condition that if an element x appears in p then all elements y < x also appear in p. In this paper, we give a Gray code list for the set of length-n Cayley permutations. Two successive permutations in this list differ at most in two positions.
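    The defining condition (if x appears, every smaller element also appears) is easy to enumerate directly. A sketch that lists length-n Cayley permutations over {1, ..., n} by filtering all sequences; this produces the set itself, not the paper's Gray code ordering:

```python
from itertools import product

def is_cayley(p):
    """p is a Cayley permutation iff its values are exactly {1, ..., max(p)}."""
    return set(p) == set(range(1, max(p) + 1))

def cayley_permutations(n):
    """All length-n Cayley permutations (their count is the ordered Bell number)."""
    return [p for p in product(range(1, n + 1), repeat=n) if is_cayley(p)]

# For n = 1, 2, 3, 4 the counts are 1, 3, 13, 75.
counts = [len(cayley_permutations(n)) for n in range(1, 5)]
```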

  1. Protograph based LDPC codes with minimum distance linearly growing with block size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform that of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  2. Variability in Deposition Rates and Mean Days of Hospitalization for the 100 Most Common Diagnostic Codes in U.S. Army Health Services Command Facilities.

    Science.gov (United States)

    1992-06-02

    artificially high degree of variability across regions. Mean bed days reflect provider behavior once the patient is hospitalized. However, variation in... [Table fragment: deposition rates and mean bed days for diagnostic codes 4149 (chronic ischemic heart disease), 5243 (anomalies of tooth position) and 6565 (poor fetal growth).]

  3. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distances of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes are determined to be much larger than the results given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each different code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  4. Quantum Codes From Cyclic Codes Over The Ring R 2

    International Nuclear Information System (INIS)

    Altinel, Alev; Güzeltepe, Murat

    2016-01-01

    Let R 2 denote the ring F 2 + μF 2 + υF 2 + μυF 2 + wF 2 + μwF 2 + υwF 2 + μυwF 2 . In this study, we construct quantum codes from cyclic codes over the ring R 2 , for arbitrary length n, with the restrictions μ 2 = 0, υ 2 = 0, w 2 = 0, μυ = υμ, μw = wμ, υw = wυ and μ (υw) = (μυ) w. Also, we give a necessary and sufficient condition for cyclic codes over R 2 to contain their duals. As a final point, we obtain the parameters of quantum error-correcting codes from cyclic codes over R 2 and we give an example of quantum error-correcting codes from cyclic codes over R 2 . (paper)

  5. Polynomial weights and code constructions

    DEFF Research Database (Denmark)

    Massey, J; Costello, D; Justesen, Jørn

    1973-01-01

    For any nonzero element c of a general finite field GF(q), it is shown that the polynomials (x - c)^i, i = 0, 1, 2, ..., have the "weight-retaining" property that any linear combination of these polynomials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum-degree polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes, ... of long constraint length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.
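    The "weight-retaining" property stated in this record can be verified exhaustively for small cases. A brute-force check over GF(2) with c = 1 and degrees up to 5 (an illustration of the property's statement, not of the paper's code constructions):

```python
from itertools import product

def poly_mul(a, b, p):
    """Multiply polynomials (coefficient lists, lowest degree first) over GF(p)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def x_minus_c_power(i, c, p):
    """(x - c)^i over GF(p)."""
    poly = [1]
    for _ in range(i):
        poly = poly_mul(poly, [(-c) % p, 1], p)
    return poly

def weight(poly):
    return sum(coef != 0 for coef in poly)

# Every nonzero GF(2)-combination of (x-1)^0, ..., (x-1)^5 must weigh at least
# as much as the minimum-degree polynomial it includes.
p, c, top = 2, 1, 5
polys = [x_minus_c_power(i, c, p) for i in range(top + 1)]
for coeffs in product(range(p), repeat=top + 1):
    if not any(coeffs):
        continue
    m = min(i for i, a in enumerate(coeffs) if a)   # minimum degree included
    comb = [0] * (top + 1)
    for i, a in enumerate(coeffs):
        for j, t in enumerate(polys[i]):
            comb[j] = (comb[j] + a * t) % p
    assert weight(comb) >= weight(polys[m])
```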

  6. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.

  7. Computer code FIT

    International Nuclear Information System (INIS)

    Rohmann, D.; Koehler, T.

    1987-02-01

    This is a description of the computer code FIT, written in FORTRAN-77 for a PDP 11/34. FIT is an interactive program to deduce the position, width and intensity of lines in X-ray spectra (max. length of 4K channels). The lines (max. 30 lines per fit) may have Gauss or Voigt profiles, as well as exponential tails. The spectrum and fit can be displayed on a Tektronix terminal. (orig.) [de]
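    FIT's task (deducing line position and width from a channel spectrum) can be caricatured in a few lines using moments of a background-subtracted peak. A sketch on synthetic data; the moment method here is a crude stand-in for FIT's actual Gauss/Voigt profile fitting, and all values are illustrative:

```python
import math

def gaussian(x, pos, sigma, amp):
    return amp * math.exp(-0.5 * ((x - pos) / sigma) ** 2)

# Synthetic 1-D "spectrum": one Gaussian line on a flat background.
channels = list(range(200))
counts = [10.0 + gaussian(x, pos=80.0, sigma=5.0, amp=100.0) for x in channels]

def line_moments(channels, counts, background):
    """Estimate peak position and FWHM from the first two moments of the
    background-subtracted counts."""
    net = [max(c - background, 0.0) for c in counts]
    total = sum(net)
    pos = sum(x * w for x, w in zip(channels, net)) / total
    var = sum((x - pos) ** 2 * w for x, w in zip(channels, net)) / total
    fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * math.sqrt(var)
    return pos, fwhm

pos, fwhm = line_moments(channels, counts, background=10.0)
# Recovers pos close to 80 and FWHM close to 2.355 * sigma (about 11.77 channels).
```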

  8. Telomere length analysis.

    Science.gov (United States)

    Canela, Andrés; Klatt, Peter; Blasco, María A

    2007-01-01

    Most somatic cells of long-lived species undergo telomere shortening throughout life. Critically short telomeres trigger loss of cell viability in tissues, which has been related to alteration of tissue function and loss of regenerative capabilities in aging and aging-related diseases. Hence, telomere length is an important biomarker for aging and can be used in the prognosis of aging diseases. These facts highlight the importance of developing methods for telomere length determination that can be employed to evaluate telomere length during the human aging process. Telomere length quantification methods have improved greatly in accuracy and sensitivity since the development of the conventional telomeric Southern blot. Here, we describe the different methodologies recently developed for telomere length quantification, as well as their potential applications for human aging studies.

  9. Optimal Codes for the Burst Erasure Channel

    Science.gov (United States)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. 
The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
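    The block-interleaving idea described above (an MDS codeword spread across the stream so a burst hits at most one symbol per codeword) can be sketched with the simplest MDS code, a single parity check, which corrects exactly one erasure per codeword. Names and the toy data below are illustrative:

```python
def xor_all(vals):
    out = 0
    for v in vals:
        out ^= v
    return out

def spc_encode(rows):
    """Append one XOR parity symbol to each row (a single-parity-check codeword)."""
    return [row + [xor_all(row)] for row in rows]

def interleave(code_rows):
    """Transmit column by column: symbols of one codeword end up 'depth' apart."""
    depth, length = len(code_rows), len(code_rows[0])
    return [code_rows[i][j] for j in range(length) for i in range(depth)]

def deinterleave(stream, depth):
    length = len(stream) // depth
    return [[stream[j * depth + i] for j in range(length)] for i in range(depth)]

def spc_recover(row):
    """Repair at most one erasure (None) per codeword using the parity symbol."""
    missing = [k for k, v in enumerate(row) if v is None]
    if len(missing) == 1:
        row = list(row)
        row[missing[0]] = xor_all(v for v in row if v is not None)
    return row[:-1]  # drop the parity symbol

depth = 4
data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
stream = interleave(spc_encode(data))
stream[6:6 + depth] = [None] * depth      # burst erasure no longer than the depth
received = deinterleave(stream, depth)
decoded = [spc_recover(row) for row in received]
# The burst touches at most one symbol per codeword, so decoded == data.
```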

  10. Length dependent properties of SNS microbridges

    International Nuclear Information System (INIS)

    Sauvageau, J.E.; Jain, R.K.; Li, K.; Lukens, J.E.; Ono, R.H.

    1985-01-01

    Using an in-situ, self-aligned deposition scheme, arrays of variable length SNS junctions in the range of 0.05 μm to 1 μm have been fabricated. Arrays of SNS microbridges of lead-copper and niobium-copper fabricated using this technique have been used to study the length dependence, at constant temperature, of the critical current I_c and bridge resistance R_d. For bridges with lengths L greater than the normal metal coherence length ξ_n(T), the dependence of I_c on L is consistent with an exponential dependence on the reduced length l = L/ξ_n(T). For shorter bridges, deviations from this behavior are seen. It was also found that the bridge resistance R_d does not vary linearly with the geometric bridge length but appears to approach a finite value as L→0

  11. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  12. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  13. Coded communications with nonideal interleaving

    Science.gov (United States)

    Laufer, Shaul

    1991-02-01

    Burst error channels - a type of block interference channel - feature increasing capacity but decreasing cutoff rate as the memory rate increases. Despite the large capacity, there is degradation in the performance of practical coding schemes when the memory length is excessive. A short-coding error parameter (SCEP) was introduced, which expresses a bound on the average decoding-error probability for codes shorter than the block interference length. The performance of a coded slow frequency-hopping communication channel is analyzed for worst-case partial band jamming and nonideal interleaving, by deriving expressions for the capacity and cutoff rate. The capacity and cutoff rate, respectively, are shown to approach and depart from those of a memoryless channel corresponding to the transmission of a single code letter per hop. For multiaccess communications over a slot-synchronized collision channel without feedback, the channel was considered as a block interference channel with memory length equal to the number of letters transmitted in each slot. The effects of an asymmetrical background noise and a reduced collision error rate were studied, as aspects of real communications. The performance of specific convolutional and Reed-Solomon codes was examined for slow frequency-hopping systems with nonideal interleaving. An upper bound is presented for the performance of a Viterbi decoder for a convolutional code with nonideal interleaving, and a soft decision diversity combining technique is introduced.

  14. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
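    The graph criterion underlying this abstract can be illustrated with a toy check (an illustrative sketch, not the authors' code): by the theorem of Fimmel et al. (2016), a trinucleotide code X is circular iff the directed graph G(X), whose edges join each codon's prefix to its suffix, is acyclic.

```python
def code_graph(X):
    """Directed graph G(X) of a trinucleotide code X: each codon b1b2b3
    contributes the edges b1 -> b2b3 and b1b2 -> b3 (Fimmel et al. 2016)."""
    edges = set()
    for c in X:
        edges.add((c[0], c[1:]))
        edges.add((c[:2], c[2]))
    return edges

def is_acyclic(edges):
    """Kahn's algorithm; the code is circular iff G(X) has no directed cycle."""
    nodes = {v for e in edges for v in e}
    indeg = {v: 0 for v in nodes}
    for _, v in edges:
        indeg[v] += 1
    queue = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(nodes)

print(is_acyclic(code_graph({"AAC", "GTC"})))  # acyclic graph: circular code
print(is_acyclic(code_graph({"AAA"})))         # A -> AA -> A cycle: not circular
```

    The periodic codon AAA immediately creates the cycle A → AA → A, which is why no circular code can contain it.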

  15. Telomere length and depression

    DEFF Research Database (Denmark)

    Wium-Andersen, Marie Kim; Ørsted, David Dynnes; Rode, Line

    2017-01-01

    BACKGROUND: Depression has been cross-sectionally associated with short telomeres as a measure of biological age. However, the direction and nature of the association is currently unclear. AIMS: We examined whether short telomere length is associated with depression cross-sectionally as well as prospectively and genetically. METHOD: Telomere length and three polymorphisms, TERT, TERC and OBFC1, were measured in 67 306 individuals aged 20-100 years from the Danish general population and associated with register-based attendance at hospital for depression and purchase of antidepressant medication. RESULTS: Attendance at hospital for depression was associated with short telomere length cross-sectionally, but not prospectively. Further, purchase of antidepressant medication was not associated with short telomere length cross-sectionally or prospectively. Mean follow-up was 7.6 years (range 0...

  16. Molecular cloning of full-length coding sequences and ...

    African Journals Online (AJOL)

    DR TONUKARI NYEROVWO

    structure and function of collagen, the distribution patterns of these two characteristic residues in α chains of ... the extracellular matrix. Besides ... number in collagen family and the major matrix protein in ..... Dashes represent missing residues.

  17. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  18. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross correlation (CC) and practical code length to support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are getting more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase induced intensity noise (PIIN). In this paper, we have proposed new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC based on the Jordan block matrix, using simple algebraic methods. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and can support long spans with high data rates.

  19. Neutron chain length distributions in subcritical systems

    International Nuclear Information System (INIS)

    Nolen, S.D.; Spriggs, G.

    1999-01-01

    In this paper, the authors present the results of the chain-length distribution as a function of k in subcritical systems. These results were obtained from a point Monte Carlo code and a three-dimensional Monte Carlo code, MC++. Based on these results, they then attempt to explain why several of the common neutron noise techniques, such as the Rossi-α and Feynman's variance-to-mean techniques, are difficult to perform in highly subcritical systems using low-efficiency detectors
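    The variance-to-mean statistic mentioned here is easy to sketch; the following toy simulation (illustrative only, unrelated to the MC++ code or its data) shows why correlated chains push the Feynman-Y statistic above zero while an uncorrelated source stays near it.

```python
import random

def feynman_y(counts):
    """Feynman-Y: gated-count variance-to-mean ratio minus 1. It is near 0 for
    a Poisson (uncorrelated) source and > 0 when chains correlate detections."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean - 1.0

random.seed(1)
# Uncorrelated source: independent detections per gate -> Y near 0.
poisson = [sum(random.random() < 0.01 for _ in range(500)) for _ in range(2000)]
# Correlated source: each initiating event yields a burst of 1-3 counts -> Y > 0.
burst = [sum(random.randint(1, 3) for _ in range(random.randint(0, 10)))
         for _ in range(2000)]
print(round(feynman_y(poisson), 2), round(feynman_y(burst), 2))
```

    With a low-efficiency detector the burst counts shrink toward the Poisson case, which is one way to see why these techniques become difficult in highly subcritical systems.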

  20. Tandem Mirror Reactor Systems Code (Version I)

    International Nuclear Information System (INIS)

    Reid, R.L.; Finn, P.A.; Gohar, M.Y.

    1985-09-01

    A computer code was developed to model a Tandem Mirror Reactor. This is the first Tandem Mirror Reactor model to couple, in detail, the highly linked physics, magnetics, and neutronic analysis into a single code. This report describes the code architecture, provides a summary description of the modules comprising the code, and includes an example execution of the Tandem Mirror Reactor Systems Code. Results from this code for two sensitivity studies are also included. These studies are: (1) to determine the impact of center cell plasma radius, length, and ion temperature on reactor cost and performance at constant fusion power; and (2) to determine the impact of reactor power level on cost

  1. Extended fuel cycle length

    International Nuclear Information System (INIS)

    Bruyere, M.; Vallee, A.; Collette, C.

    1986-09-01

    Extended fuel cycle length and burnup are currently offered by Framatome and Fragema in order to satisfy the needs of the utilities in terms of fuel cycle cost and of overall systems cost optimization. We intend to point out the consequences of an increased fuel cycle length and burnup on reactor safety, in order to determine whether the bounding safety analyses presented in the Safety Analysis Report are applicable and to evaluate the effect on plant licensing. This paper presents the results of this examination. The first part indicates the consequences of increased fuel cycle length and burnup on the nuclear data used in the bounding accident analyses. In the second part of this paper, the required safety reanalyses are presented and the impact on the safety margins of different fuel management strategies is examined. In addition, systems modifications which can be required are indicated

  2. Comparison of a Variable-Number Tandem-Repeat (VNTR) Method for Typing Mycobacterium avium with Mycobacterial Interspersed Repetitive-Unit-VNTR and IS1245 Restriction Fragment Length Polymorphism Typing▿ †

    OpenAIRE

    Inagaki, Takayuki; Nishimori, Kei; Yagi, Tetsuya; Ichikawa, Kazuya; Moriyama, Makoto; Nakagawa, Taku; Shibayama, Takami; Uchiya, Kei-ichi; Nikai, Toshiaki; Ogawa, Kenji

    2009-01-01

    Mycobacterium avium complex (MAC) infections are increasing annually in various countries, including Japan, but the route of transmission and pathophysiology of the infection remain unclear. Currently, a variable-number tandem-repeat (VNTR) typing method using the Mycobacterium avium tandem repeat (MATR) loci (MATR-VNTR) is employed in Japan for epidemiological studies using clinical isolates of M. avium. In this study, the usefulness of this MATR-VNTR typing method was compared with that of ...

  3. Relativistic distances, sizes, lengths

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1992-01-01

    Such notions as light or retarded distance, field size, formation way, visible size of a body, relativistic or radar length and wavelength of light from a moving atom are considered. The relation between these notions is cleared up, and their classification is given. It is stressed that the formation way is defined by the field size of a moving particle. In the case of the electromagnetic field, longitudinal sizes increase proportionally to γ² with growing charge velocity (γ is the Lorentz factor). 18 refs

  4. Orthopedics coding and funding.

    Science.gov (United States)

    Baron, S; Duclos, C; Thoreux, P

    2014-02-01

    The French tarification à l'activité (T2A) prospective payment system is a financial system in which a health-care institution's resources are based on performed activity. Activity is described via the PMSI medical information system (programme de médicalisation du système d'information). The PMSI classifies hospital cases by clinical and economic categories known as diagnosis-related groups (DRG), each with an associated price tag. Coding a hospital case involves giving as realistic a description as possible so as to categorize it in the right DRG and thus ensure appropriate payment. For this, it is essential to understand what determines the pricing of inpatient stay: namely, the code for the surgical procedure, the patient's principal diagnosis (reason for admission), codes for comorbidities (everything that adds to management burden), and the management of the length of inpatient stay. The PMSI is used to analyze the institution's activity and dynamism: change on previous year, relation to target, and comparison with competing institutions based on indicators such as the mean length of stay performance indicator (MLS PI). The T2A system improves overall care efficiency. Quality of care, however, is not presently taken account of in the payment made to the institution, as there are no indicators for this; work needs to be done on this topic. Copyright © 2014. Published by Elsevier Masson SAS.

  5. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  6. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which is formed by the mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  7. Coding chaotic billiards. Pt. 3

    International Nuclear Information System (INIS)

    Ullmo, D.; Giannoni, M.J.

    1993-01-01

    A non-tiling compact billiard defined on the pseudosphere is studied à la Morse coding. As for most bounded systems, the coding is not exact. However, two sets of approximate grammar rules can be obtained, one specifying forbidden codes, and the other allowed ones. In between, some sequences remain in the 'unknown' zone, but their relative amount can be reduced to zero if one lets the length of the approximate grammar rules go to infinity. The relationship between these approximate grammar rules and the 'pruning front' introduced by Cvitanovic et al. is discussed. (authors). 13 refs., 10 figs., 1 tab

  8. Vocable Code

    DEFF Research Database (Denmark)

    Soon, Winnie; Cox, Geoff

    2018-01-01

    a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other, is a mix of a computer programming syntax and human language. In this sense queer code can...... be understood as both an object and subject of study that intervenes in the world’s ‘becoming' and how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  9. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases

  10. Performance Analysis of CRC Codes for Systematic and Nonsystematic Polar Codes with List Decoding

    Directory of Open Access Journals (Sweden)

    Takumi Murata

    2018-01-01

    Full Text Available Successive cancellation list (SCL) decoding of polar codes is an effective approach that can significantly outperform the original successive cancellation (SC) decoding, provided that proper cyclic redundancy-check (CRC) codes are employed at the stage of candidate selection. Previous studies on CRC-assisted polar codes mostly focus on improvement of the decoding algorithms as well as their implementation, and little attention has been paid to the CRC code structure itself. For the CRC-concatenated polar codes with a CRC code as their outer code, the use of a longer CRC code leads to reduction of the information rate, whereas the use of a shorter CRC code may reduce the error detection probability, thus degrading the frame error rate (FER) performance. Therefore, CRC codes of proper length should be employed in order to optimize the FER performance for a given signal-to-noise ratio (SNR) per information bit. In this paper, we investigate the effect of CRC codes on the FER performance of polar codes with list decoding in terms of the CRC code length as well as its generator polynomials. Both the original nonsystematic and systematic polar codes are considered, and we also demonstrate that different behaviors of CRC codes should be observed depending on whether the inner polar code is systematic or not.
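    The candidate-selection step described above can be sketched independently of the polar decoder itself: among the surviving list paths, the decoder outputs the most likely one whose CRC checks. A minimal illustration (using CRC-32 from zlib for convenience; real systems use short custom generator polynomials):

```python
import zlib

def crc_select(candidates):
    """Return the first candidate payload whose appended CRC-32 verifies.
    In an SCL decoder the candidates would be the L surviving paths,
    ordered by path metric; here they are plain (payload, crc) pairs."""
    for payload, crc in candidates:
        if zlib.crc32(payload) == crc:
            return payload
    return None  # every path fails the CRC -> declare a frame error

msg = b"polar"
good = (msg, zlib.crc32(msg))
bad = (b"polar?", zlib.crc32(msg))   # corrupted payload, stale CRC
print(crc_select([bad, good]))       # the CRC rejects the corrupted path
```

    The trade-off in the abstract shows up here directly: a longer CRC rejects wrong paths more reliably but spends more of the code rate on the checksum.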

  11. User's manual for the TMAD code

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    1995-01-01

    This document serves as the User's Manual for the TMAD code system, which includes the TMAD code and the LIBMAKR code. The TMAD code was commissioned to make it easier to interpret moisture probe measurements in the Hanford Site waste tanks. In principle, the code is an interpolation routine that acts over a library of benchmark data based on two independent variables, typically anomaly size and moisture content. Two additional variables, anomaly type and detector type, also can be considered independent variables, but no interpolation is done over them. The dependent variable is detector response. The intent is to provide the code with measured detector responses from two or more detectors. The code then will interrogate (and interpolate upon) the benchmark data library and find the anomaly-type/anomaly-size/moisture-content combination that provides the closest match to the measured data
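    The interpolation idea can be sketched as follows; this is a generic bilinear lookup over a two-variable benchmark grid with illustrative names only, not the actual TMAD routine or its data.

```python
from bisect import bisect_right

def bilinear(xs, ys, table, x, y):
    """Interpolate a detector response from a benchmark grid: xs (anomaly
    size), ys (moisture content), table[i][j] = response at (xs[i], ys[j]).
    A toy stand-in for a library lookup over two independent variables."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j] + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1] + tx * ty * table[i + 1][j + 1])

xs, ys = [0.0, 1.0], [0.0, 1.0]
table = [[0.0, 10.0], [20.0, 30.0]]       # response grows with both variables
print(bilinear(xs, ys, table, 0.5, 0.5))  # -> 15.0
```

    Matching measured responses from several detectors then amounts to searching this interpolated surface for the anomaly/moisture combination that minimizes the misfit.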

  12. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  13. Some new quasi-twisted ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2015-09-01

    Full Text Available Let an [n, k, d]_q code be a linear code of length n, dimension k and minimum Hamming distance d over GF(q). One of the basic and most important problems in coding theory is to construct codes with the best possible minimum distances. In this paper seven quasi-twisted ternary linear codes are constructed. These codes are new and improve the best known lower bounds on the minimum distance in [6].
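    For intuition, the minimum distance of a small ternary code can be checked by brute force (a toy sketch; the generator matrix below is the classical [4, 2, 3] tetracode-style example, not one of the paper's new codes):

```python
from itertools import product

def min_distance(G, q=3):
    """Exhaustive minimum Hamming weight over the nonzero codewords of the
    linear code generated by the rows of G over GF(q), q prime."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product(range(q), repeat=k):
        if not any(msg):
            continue  # skip the zero codeword
        cw = [sum(m * g for m, g in zip(msg, col)) % q for col in zip(*G)]
        best = min(best, sum(c != 0 for c in cw))
    return best

# A ternary [4, 2] code whose minimum distance turns out to be 3.
G = [[1, 0, 1, 1], [0, 1, 1, 2]]
print(min_distance(G))  # -> 3
```

    Exhaustive search scales as q^k, which is exactly why constructions such as quasi-twisted codes are needed to find good codes at practical dimensions.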

  14. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS), Institut for Skole og Læring, Professionshøjskolen Metropol; and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design, design thinking and design pedagogy, Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg University, Copenhagen. We followed, evaluated and documented the Coding Class project from November 2016 to May 2017...

  15. Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  16. Network Coding

    Indian Academy of Sciences (India)

    Network Coding. K V Rashmi, Nihar B Shah, P Vijay Kumar. General Article, Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp 604-621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621

  17. MCNP code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendent of the original Monte Carlo work of Fermi, von Neumaum, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a a wealth of detailed information (some optional) concerning each state of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids

  18. Expander Codes

    Indian Academy of Sciences (India)

    Expander Codes - The Sipser–Spielman Construction. Priti Shankar, General Article, Resonance – Journal of Science Education, Volume 10, Issue 1. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  19. Pion nucleus scattering lengths

    International Nuclear Information System (INIS)

    Huang, W.T.; Levinson, C.A.; Banerjee, M.K.

    1971-09-01

    Soft pion theory and the Fubini-Furlan mass dispersion relations have been used to analyze the pion nucleon scattering lengths and obtain a value for the sigma commutator term. With this value and using the same principles, scattering lengths have been predicted for nuclei with mass number ranging from 6 to 23. Agreement with experiment is very good. For those who believe in the Gell-Mann-Levy sigma model, the evaluation of the commutator yields the value 0.26(m_σ/m_π)² for the sigma nucleon coupling constant. The large dispersive corrections for the isosymmetric case imply that the basic idea behind many of the soft pion calculations, namely, slow variation of matrix elements from the soft pion limit to the physical pion mass, is not correct. 11 refs., 1 fig., 3 tabs

  20. Gap length distributions by PEPR

    International Nuclear Information System (INIS)

    Warszawer, T.N.

    1980-01-01

    Conditions guaranteeing exponential gap length distributions are formulated and discussed. Exponential gap length distributions of bubble chamber tracks first obtained on a CRT device are presented. Distributions of resulting average gap lengths and their velocity dependence are discussed. (orig.)

  1. Relativistic length agony continued

    Directory of Open Access Journals (Sweden)

    Redžić D.V.

    2014-01-01

    Full Text Available We made an attempt to remedy recent confusing treatments of some basic relativistic concepts and results. Following the argument presented in an earlier paper (Redžić 2008b), we discussed the misconceptions that are recurrent points in the literature devoted to teaching relativity, such as: there is no change in the object in Special Relativity, the illusory character of relativistic length contraction, stresses and strains induced by Lorentz contraction, and related issues. We gave several examples of the traps of everyday language that lurk in Special Relativity. To remove a possible conceptual and terminological muddle, we made a distinction between relativistic length reduction and relativistic FitzGerald-Lorentz contraction, corresponding to a passive and an active aspect of length contraction, respectively; we pointed out that both aspects have fundamental dynamical content. As an illustration of our considerations, we discussed briefly the Dewan-Beran-Bell spaceship paradox and the ‘pole in a barn’ paradox. [Projekat Ministarstva nauke Republike Srbije, br. 171028

  2. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1995-01-01

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

  3. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)

  4. New quantum codes derived from a family of antiprimitive BCH codes

    Science.gov (United States)

    Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin

    The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q²-ary BCH codes with length n = q^{2m} + 1 (also called antiprimitive BCH codes in the literature), where q ≥ 4 is a power of 2 and m ≥ 2. By a detailed analysis of some useful properties of q²-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q²-ary primitive BCH codes. Consequently, via the Hermitian Construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
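    The q²-ary cyclotomic cosets that drive the dual-containing analysis are simple to enumerate; a sketch with the smallest parameters the abstract allows (q = 4 and m = 2, so q² = 16 and n = q^{2m} + 1 = 257):

```python
def cyclotomic_cosets(q2, n):
    """Partition {0, ..., n-1} into q^2-ary cyclotomic cosets modulo n,
    i.e. the orbits of the map x -> q^2 * x (mod n)."""
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:
            coset.append(x)
            x = (x * q2) % n
        seen.update(coset)
        cosets.append(coset)
    return cosets

# q = 4, m = 2: q^2 = 16 and n = q^(2m) + 1 = 257.
cosets = cyclotomic_cosets(16, 257)
print(len(cosets), cosets[1])  # number of cosets, and the coset containing 1
```

    Hermitian dual-containing conditions of the kind the paper derives amount to set-theoretic constraints on which of these cosets may appear among the defining set of the BCH code.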

  5. Structured LDPC Codes over Integer Residue Rings

    Directory of Open Access Journals (Sweden)

    Marc A. Armand

    2008-07-01

    Full Text Available This paper presents a new class of low-density parity-check (LDPC) codes over ℤ_{2^a} represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and, more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive white Gaussian noise channel, can outperform their random counterparts of similar length and rate.
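    The Latin-square ingredient is easy to illustrate: the Cayley (multiplication) table of any finite group is a Latin square. A toy sketch using the unit group of ℤ_8 as a stand-in for the Galois-ring unit group used in the paper:

```python
def cayley_table(elems, op):
    """Multiplication table of a finite group; the group axioms make it a
    Latin square (each element appears once per row and per column)."""
    return [[op(a, b) for b in elems] for a in elems]

def is_latin(square):
    n = len(square)
    return (all(len(set(row)) == n for row in square)
            and all(len({square[i][j] for i in range(n)}) == n for j in range(n)))

units_mod_8 = [1, 3, 5, 7]   # the multiplicative group of units of Z_8
table = cayley_table(units_mod_8, lambda a, b: (a * b) % 8)
print(table, is_latin(table))
```

    In the paper's construction such squares seed the placement of nonzero entries in a structured parity-check matrix, which is what makes the resulting Tanner graphs regular.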

  6. Structured LDPC Codes over Integer Residue Rings

    Directory of Open Access Journals (Sweden)

    Mo Elisa

    2008-01-01

    Full Text Available Abstract This paper presents a new class of low-density parity-check (LDPC) codes over ℤ_{2^a} represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and, more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive white Gaussian noise channel, can outperform their random counterparts of similar length and rate.

  7. Odd Length Contraction

    Science.gov (United States)

    Smarandache, Florentin

    2013-09-01

    Let's denote by VE the speed of the Earth and by VR the speed of the rocket. Both travel in the same direction on parallel trajectories. We consider the Earth as a moving (at a constant speed VE - VR) spacecraft of almost spherical form, whose radius is r and thus the diameter 2r, and the rocket as standing still. The non-proper length of Earth's diameter, as measured by the astronaut, is: L = 2r√(1 - |VE - VR|²/c²) rocket! Also, let's assume that the astronaut is laying down in the direction of motion. Therefore, he would also shrink, or he would die!

  8. discouraged by queue length

    Directory of Open Access Journals (Sweden)

    P. R. Parthasarathy

    2001-01-01

    Full Text Available The transient solution is obtained analytically using continued fractions for a state-dependent birth-death queue in which potential customers are discouraged by the queue length. This queueing system is then compared with the well-known infinite server queueing system which has the same steady state solution as the model under consideration, whereas their transient solutions are different. A natural measure of speed of convergence of the mean number in the system to its stationarity is also computed.
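    The coincidence of steady states noted in this abstract is quick to verify numerically: with arrivals discouraged as λ/(n+1) and a single server of rate μ, detailed balance gives the same Poisson(λ/μ) stationary law as the M/M/∞ queue. A sketch (illustrative only, not the paper's continued-fraction transient analysis):

```python
def stationary(birth, death, N=40):
    """Stationary distribution of a birth-death chain via detailed balance:
    p_n is proportional to prod_{k<n} birth(k) / death(k+1)."""
    w = [1.0]
    for n in range(1, N):
        w.append(w[-1] * birth(n - 1) / death(n))
    z = sum(w)
    return [x / z for x in w]

lam, mu = 3.0, 1.0
# Discouraged arrivals: birth rate lam/(n+1), single server of rate mu.
disc = stationary(lambda n: lam / (n + 1), lambda n: mu)
# Infinite-server M/M/inf queue: birth rate lam, death rate n*mu.
inf_srv = stationary(lambda n: lam, lambda n: n * mu)
# Both reduce to the Poisson(lam/mu) distribution, as the abstract notes.
print(all(abs(a - b) < 1e-9 for a, b in zip(disc, inf_srv)))
```

    The point of the paper is precisely that the agreement stops at the steady state: the transient solutions of the two models differ, and their convergence speeds can be compared.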

  9. New MDS or near MDS self-dual codes over finite fields

    OpenAIRE

    Tong, Hongxi; Wang, Xiaoqing

    2016-01-01

    The study of MDS self-dual codes has attracted much attention in recent years. There are many papers on determining the existence of $q$-ary MDS self-dual codes for various lengths. However, $q$-ary MDS self-dual codes do not exist for some lengths, even lengths $< q$. We generalize MDS Euclidean self-dual codes to near MDS Euclidean self-dual codes and near MDS isodual codes. We obtain many new near MDS isodual codes from extended negacyclic duadic codes, and we obtain many new M...

  10. Full length Research Article

    African Journals Online (AJOL)

    Dr Ahmed

    ABSTRACT. This paper considered contamination of an aquifer resulting from petroleum spillage, which is a common phenomenon in the Niger Delta area of Nigeria. We used the model given by Bestman (1987) and assumed that some endogenous variables are built into the system. To achieve a level of desirable state, we ...

  11. Squares of Random Linear Codes

    DEFF Research Database (Denmark)

    Cascudo Pueyo, Ignacio; Cramer, Ronald; Mirandola, Diego

    2015-01-01

    Given a linear code $C$, one can define the $d$-th power of $C$ as the span of all componentwise products of $d$ elements of $C$. A power of $C$ may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code ``typically'' fill the whole space? We give a positive answer, for codes of dimension $k$ and length roughly $\\frac{1}{2}k^2$ or smaller. Moreover, the convergence speed is exponential if the difference $k(k+1)/2-n$ is at least linear in $k$. The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise...
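
The question studied in this record can be checked empirically for small parameters. A minimal sketch over GF(2) (field choice and parameters here are illustrative, not the paper's general setting): over GF(2), the componentwise product is bitwise AND, and since the product is bilinear, the square of a code is spanned by the pairwise products of its generators.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) by Gaussian elimination; each row is an integer
    bitmask of length <= n."""
    pivots = {}                       # top-bit position -> basis vector
    for r in rows:
        while r:
            top = r.bit_length() - 1
            if top not in pivots:
                pivots[top] = r
                break
            r ^= pivots[top]          # reduce against the existing pivot
    return len(pivots)

k, n = 10, 40
random.seed(1)
gen = [random.getrandbits(n) for _ in range(k)]   # random k x n generator rows

# Products of pairs of generators span the square of the code; the pairs
# with i == j reproduce the generators themselves (g & g == g over GF(2)),
# so the code is contained in its square.
square_gens = [gen[i] & gen[j] for i in range(k) for j in range(i, k)]
dim_C = gf2_rank(gen)
dim_C2 = gf2_rank(square_gens)
```

With these parameters, k(k+1)/2 = 55 exceeds n = 40, the regime in which the abstract says the square typically fills the whole space.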

  12. Multiplexed coding in the human basal ganglia

    Science.gov (United States)

    Andres, D. S.; Cerquetti, D.; Merello, M.

    2016-04-01

    A classic controversy in neuroscience is whether information carried by spike trains is encoded by a time-averaged measure (e.g. a rate code) or by complex time patterns (i.e. a time code). Here we apply a tool to quantitatively analyze the neural code. We make use of an algorithm based on the calculation of the temporal structure function, which makes it possible to distinguish which scales of a signal are dominated by a complex temporal organization or a randomly generated process. In terms of the neural code, this kind of analysis can detect temporal scales at which a time-pattern coding scheme or, alternatively, a rate code is present. Additionally, by finding the temporal scale at which the correlation between interspike intervals fades, the length of the basic information unit of the code can be established, and hence the word length of the code can be found. We apply this algorithm to neuronal recordings obtained from the Globus Pallidus pars interna of a human patient with Parkinson's disease, and show that a time-pattern coding scheme and a rate coding scheme co-exist at different temporal scales, offering a new example of multiplexed neuronal coding.
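
A temporal structure function in its common form is simply the mean p-th power of signal increments at each lag. The sketch below is a hedged, generic illustration on a synthetic interspike-interval series (the paper's exact algorithm and parameters are not reproduced): lags at which the curve grows are dominated by temporal structure, while flat noisy scales behave like a random process.

```python
import math
import random

def structure_function(series, lags, p=2):
    """Temporal structure function S_p(tau) = mean |x(t + tau) - x(t)|^p.
    How S_p scales with tau separates scales carrying temporal structure
    from scales dominated by randomness."""
    out = {}
    for tau in lags:
        diffs = [abs(series[i + tau] - series[i]) ** p
                 for i in range(len(series) - tau)]
        out[tau] = sum(diffs) / len(diffs)
    return out

# Illustrative signal only (not patient data): a slow oscillation in the
# interspike intervals plus additive noise.
random.seed(0)
isi = [1.0 + 0.5 * math.sin(2 * math.pi * t / 50) + 0.1 * random.gauss(0, 1)
       for t in range(2000)]
S = structure_function(isi, lags=range(1, 100))
```

Here S at the oscillation's half-period (lag 25) is much larger than at lag 1, exposing the structured scale.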

  13. A Graphical-User Interface for the U. S. Geological Survey's SUTRA Code using Argus ONE (for simulation of variable-density saturated-unsaturated ground-water flow with solute or energy transport)

    Science.gov (United States)

    Voss, Clifford I.; Boldt, David; Shapiro, Allen M.

    1997-01-01

    This report describes a Graphical-User Interface (GUI) for SUTRA, the U.S. Geological Survey (USGS) model for saturated-unsaturated variable-fluid-density ground-water flow with solute or energy transport, which combines a USGS-developed code that interfaces SUTRA with Argus ONE, a commercial software product developed by Argus Interware. This product, known as Argus Open Numerical Environments (Argus ONE™), is a programmable system with geographic-information-system-like (GIS-like) functionality that includes automated gridding and meshing capabilities for linking geospatial information with finite-difference and finite-element numerical model discretizations. The GUI for SUTRA is based on a public-domain Plug-In Extension (PIE) to Argus ONE that automates the use of Argus ONE to: automatically create the appropriate geospatial information coverages (information layers) for SUTRA, provide menus and dialogs for inputting geospatial information and simulation control parameters for SUTRA, and allow visualization of SUTRA simulation results. Following simulation control data and geospatial data input by the user through the GUI, Argus ONE creates text files in a format required for normal input to SUTRA, and SUTRA can be executed within the Argus ONE environment. Then, hydraulic head, pressure, solute concentration, temperature, saturation and velocity results from the SUTRA simulation may be visualized. Although the GUI for SUTRA discussed in this report provides all of the graphical pre- and post-processor functions required for running SUTRA, it is also possible for advanced users to apply programmable features within Argus ONE to modify the GUI to meet the unique demands of particular ground-water modeling projects.

  14. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
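
For background on the basic operation the paper builds on, plain (fully unsupervised) sparse coding of a sample against a fixed codebook can be sketched with the classic ISTA iterations. This is a generic illustration under standard assumptions, not the semi-supervised algorithm of the paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.1, n_iter=200):
    """ISTA for min_a 0.5 * ||x - D a||^2 + lam * ||a||_1:
    approximate x as a sparse combination D @ a of the codewords."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm codewords
a_true = np.zeros(50)
a_true[[3, 17, 42]] = [1.0, -0.7, 0.5]     # a 3-sparse ground truth
x = D @ a_true
a = sparse_code(x, D, lam=0.05)
```

The semi-supervised method described above would additionally couple such codes to class labels and a linear classifier inside one objective.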

  15. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  16. Generalized optical code construction for enhanced and Modified Double Weight like codes without mapping for SAC-OCDMA systems

    Science.gov (United States)

    Kumawat, Soma; Ravi Kumar, M.

    2016-07-01

    Double Weight (DW) code family is one of the coding schemes proposed for Spectral Amplitude Coding-Optical Code Division Multiple Access (SAC-OCDMA) systems. Modified Double Weight (MDW) code for even weights and Enhanced Double Weight (EDW) code for odd weights are two algorithms extending the use of DW code for SAC-OCDMA systems. The above-mentioned codes use a mapping technique to provide codes for higher numbers of users. A new generalized algorithm to construct EDW- and MDW-like codes without mapping for any weight greater than 2 is proposed. A single code construction algorithm gives the same length increment, Bit Error Rate (BER) calculation and other properties for all weights greater than 2. The algorithm first constructs a generalized basic matrix, which is repeated in a different way to produce the codes for all users (different from mapping). The generalized code is analysed for BER using balanced detection and direct detection techniques.

  17. Development of the Heated Length Correction Factor

    International Nuclear Information System (INIS)

    Park, Ho-Young; Kim, Kang-Hoon; Nahm, Kee-Yil; Jung, Yil-Sup; Park, Eung-Jun

    2008-01-01

    The Critical Heat Flux (CHF) on a nuclear fuel is defined as a function of flow channel geometry and flow conditions. According to the selection of the explanatory variables, there are three hypotheses to explain CHF at a uniformly heated vertical rod (inlet condition hypothesis, exit condition hypothesis, local condition hypothesis). For the inlet condition hypothesis, CHF is characterized as a function of system pressure, rod diameter, rod length, mass flow and inlet subcooling. For the exit condition hypothesis, exit quality substitutes for inlet subcooling. Generally, the heated length effect on CHF in the exit condition hypothesis is smaller than that of the other variables. Heated length is usually excluded in the local condition hypothesis, which describes CHF with only local fluid conditions. Most commercial plants currently use empirical CHF correlations based on the local condition hypothesis. An empirical CHF correlation is developed by fitting the selected sensitive local variables to CHF test data using multiple non-linear regression. Because this kind of method cannot capture the underlying physics, it is difficult to reflect the proper effect of complex geometry. So the recent CHF correlation development strategy of nuclear fuel vendors is to first build a basic CHF correlation from basic flow variables (local fluid conditions), and then to compensate with additional geometrical correction factors. Because the functional forms of the correction factors are determined separately from independent test data representing the corresponding geometry, they can be applied directly to other CHF correlations with only minor coefficient modification.
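
The fitting step described here, regressing selected local variables against CHF test data, can be illustrated with a toy example. Everything below is hypothetical: the power-law form CHF = c0 · P^c1 · G^c2, the parameter ranges, and the data are all synthetic, and the non-linear fit is linearized by taking logarithms so ordinary least squares stands in for the multiple non-linear regression the record describes.

```python
import numpy as np

# Synthetic "test data" for illustration only.
rng = np.random.default_rng(42)
P = rng.uniform(7.0, 16.0, 200)        # pressure (illustrative units/range)
G = rng.uniform(1000.0, 4000.0, 200)   # mass flux (illustrative units/range)
chf = 2.0 * P**-0.3 * G**0.5 * rng.lognormal(0.0, 0.02, 200)

# log CHF = log c0 + c1 * log P + c2 * log G  ->  linear least squares.
X = np.column_stack([np.ones_like(P), np.log(P), np.log(G)])
coef, *_ = np.linalg.lstsq(X, np.log(chf), rcond=None)
c0, c1, c2 = np.exp(coef[0]), coef[1], coef[2]
```

The recovered exponents approximate the generating values (-0.3 and 0.5), mimicking how a vendor correlation's sensitive-variable coefficients are estimated before geometric correction factors are layered on.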

  18. Performance of RC columns with partial length corrosion

    International Nuclear Information System (INIS)

    Wang Xiaohui; Liang Fayun

    2008-01-01

    Experimental and analytical studies on the load capacity of reinforced concrete (RC) columns with partial length corrosion are presented, where only a fraction of the column length was corroded. Twelve simply supported columns were eccentrically loaded. The primary variables were partial length corrosion in the tensile or compressive zone and the corrosion level within this length. The failure of the corroded column occurs in the partial length, mainly developing from, located near, or merging with the longitudinal corrosion cracks. For an RC column with large eccentricity, the load capacity of the column is mainly influenced by partial length corrosion in the tensile zone, while for an RC column with small eccentricity, the load capacity greatly decreases due to partial length corrosion in the compressive zone. The destruction of the longitudinal mechanical integrity of the column in the partial length leads to this great reduction of the load capacity of the RC column.

  19. Linking CATHENA with other computer codes through a remote process

    Energy Technology Data Exchange (ETDEWEB)

    Vasic, A.; Hanna, B.N.; Waddington, G.M. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Sabourin, G. [Atomic Energy of Canada Limited, Montreal, Quebec (Canada); Girard, R. [Hydro-Quebec, Montreal, Quebec (Canada)

    2005-07-01

    starts, ends, controls, receives boundary conditions from CATHENA, calls ELOCA-IST subroutines for computation and sends feedback to CATHENA through PVM calls. The benefit of this dynamic link is that CATHENA's GENeralized Heat Transfer Package (GENHTP) is replaced with a specialized detailed model for CANDU fuel elements. The stand-alone plant conTROL Gentilly-2 (TROLG2) program, developed jointly by AECL and Hydro-Quebec, simulates the control system of the Gentilly-2 generating station operated by Hydro Quebec. The dynamic link with a CATHENA plant idealization couples the thermalhydraulic reactor behavior to reactor control system behavior of the Gentilly-2 generating station plant during transient conditions. CATHENA can perform simulations of CANDU channels by dynamically linking with one or more ELOCA driver programs. Each link to an independent instance of the ELOCA driver program is associated with one fuel element having up to 20 axial nodes (current ELOCA-IST limit) and one circumferential segment. Figure 1 in the full paper shows graphically the data transfers involved in the connection between the CATHENA and ELOCA driver through the PVM interface. Variables transferred from CATHENA to ELOCA-IST at each time step are: number of axial segments; number of circumferential segments (currently one only); coolant pressure; coolant temperature; sheath-to-coolant heat transfer coefficient; thermal radiation heat flux; and, power fraction. Variables that are returned for each axial segment from ELOCA-IST are: fuel sheath temperature; fuel element outer diameter; and, fuel length. CATHENA linked with ELOCA through PVM allows independent development of separate codes and achieves direct coupling during execution ensuring convergence between the codes. This coupling also eliminates the preparation and conversion of data transfer necessary between the codes by an analyst. 
This coupling process saves analyst time while reducing the possibility of inadvertent errors.

  20. Linking CATHENA with other computer codes through a remote process

    International Nuclear Information System (INIS)

    Vasic, A.; Hanna, B.N.; Waddington, G.M.; Sabourin, G.; Girard, R.

    2005-01-01

    , controls, receives boundary conditions from CATHENA, calls ELOCA-IST subroutines for computation and sends feedback to CATHENA through PVM calls. The benefit of this dynamic link is that CATHENA's GENeralized Heat Transfer Package (GENHTP) is replaced with a specialized detailed model for CANDU fuel elements. The stand-alone plant conTROL Gentilly-2 (TROLG2) program, developed jointly by AECL and Hydro-Quebec, simulates the control system of the Gentilly-2 generating station operated by Hydro Quebec. The dynamic link with a CATHENA plant idealization couples the thermalhydraulic reactor behavior to reactor control system behavior of the Gentilly-2 generating station plant during transient conditions. CATHENA can perform simulations of CANDU channels by dynamically linking with one or more ELOCA driver programs. Each link to an independent instance of the ELOCA driver program is associated with one fuel element having up to 20 axial nodes (current ELOCA-IST limit) and one circumferential segment. Figure 1 in the full paper shows graphically the data transfers involved in the connection between the CATHENA and ELOCA driver through the PVM interface. Variables transferred from CATHENA to ELOCA-IST at each time step are: number of axial segments; number of circumferential segments (currently one only); coolant pressure; coolant temperature; sheath-to-coolant heat transfer coefficient; thermal radiation heat flux; and, power fraction. Variables that are returned for each axial segment from ELOCA-IST are: fuel sheath temperature; fuel element outer diameter; and, fuel length. CATHENA linked with ELOCA through PVM allows independent development of separate codes and achieves direct coupling during execution ensuring convergence between the codes. This coupling also eliminates the preparation and conversion of data transfer necessary between the codes by an analyst. 
This coupling process saves analyst time while reducing the possibility of inadvertent errors and additionally

  1. Variable Length Inflatable Ramp Launch and Recovery System

    Science.gov (United States)

    2016-09-22

    May 2017 The below identified patent application is available for licensing. Requests for information should be addressed to: TECHNOLOGY...royalties thereon or therefor. CROSS REFERENCE TO OTHER PATENT APPLICATIONS [0002] None. BACKGROUND OF THE INVENTION (1) Field of the Invention...architectures are recommended in accordance with United States Patent No. 8,555,472 and the progeny of this referenced patent . The air beams 20

  2. Learning Path Recommendation Based on Modified Variable Length Genetic Algorithm

    Science.gov (United States)

    Dwivedi, Pragya; Kant, Vibhor; Bharadwaj, Kamal K.

    2018-01-01

    With the rapid advancement of information and communication technologies, e-learning has gained a considerable attention in recent years. Many researchers have attempted to develop various e-learning systems with personalized learning mechanisms for assisting learners so that they can learn more efficiently. In this context, curriculum sequencing…

  3. Chaotic orbits of a pendulum with variable length

    Directory of Open Access Journals (Sweden)

    Massimo Furi

    2004-03-01

    Full Text Available The main purpose of this investigation is to show that a pendulum, whose pivot oscillates vertically in a periodic fashion, has uncountably many chaotic orbits. The attribute chaotic is given according to the criterion we now describe. First, we associate to any orbit a finite or infinite sequence as follows. We write 1 or $-1$ every time the pendulum crosses the position of unstable equilibrium with positive (counterclockwise) or negative (clockwise) velocity, respectively. We write 0 whenever we find a pair of consecutive zeros of the velocity separated only by a crossing of the stable equilibrium, and with the understanding that different pairs cannot share a common time of zero velocity. Finally, the symbol $\\omega$, which is used only as the ending symbol of a finite sequence, indicates that the orbit tends asymptotically to the position of unstable equilibrium. Every infinite sequence of the three symbols $\\{1,-1,0\\}$ represents a real number of the interval $[0,1]$ written in base 3 when $-1$ is replaced with 2. An orbit is considered chaotic whenever the associated sequence of the three symbols $\\{1,2,0\\}$ is an irrational number of $[0,1]$. Our main goal is to show that there are uncountably many orbits of this type.
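
The symbolic encoding in this criterion maps crossing sequences to base-3 expansions. A small sketch of that map using exact rational arithmetic (the pendulum dynamics itself is not simulated here):

```python
from fractions import Fraction

def sequence_to_real(symbols):
    """Interpret a finite symbol sequence over {0, 1, 2} (with -1 already
    replaced by 2) as the truncated base-3 expansion 0.s1 s2 s3 ..."""
    x = Fraction(0)
    for i, s in enumerate(symbols, start=1):
        x += Fraction(s, 3 ** i)
    return x

# A periodic crossing pattern (1, 2 repeating) approaches the rational
# number 0.121212..._3 = 5/8, so by the paper's criterion the
# corresponding orbit would *not* be chaotic.
periodic = sequence_to_real([1, 2] * 10)
```

Chaotic orbits are exactly those whose (infinite) symbol sequences encode irrational numbers, which no finite truncation can exhibit; the sketch only shows the encoding itself.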

  4. Ultrasonographic assessment of renal length in 310 Turkish children ...

    African Journals Online (AJOL)

    Ultrasonography is a non-invasive modality that can be used to measure RL.[2] ... cases were selected for inclusion in the study. Ultrasonography was ... Linear regression equations for predicting a variable (renal length) from independent ...

  5. Truncation Depth Rule-of-Thumb for Convolutional Codes

    Science.gov (United States)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
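
The new rule is simple enough to state directly in code; a minimal sketch of the formula from this record:

```python
def truncation_depth(m, r):
    """New rule of thumb from the record: depth = 2.5 * m / (1 - r),
    where m is the code memory and r the code rate."""
    return 2.5 * m / (1.0 - r)

# For a rate-1/2 code the new rule reduces to the classic "five times the
# memory" rule: 2.5 * m / (1 - 0.5) = 5 * m.
depth_half = truncation_depth(6, 0.5)
depth_three_quarters = truncation_depth(6, 0.75)   # higher rate, deeper truncation
```

This makes the record's point concrete: the old five-times-memory rule is the r = 1/2 special case, and higher-rate codes need substantially deeper truncation.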

  6. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. The automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper and lower level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by this same program, and incorporation of this program into another data processing program was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for another program. Therefore, this program can be used for automation of routine work in the department of radiology.

  7. Code portability and data management considerations in the SAS3D LMFBR accident-analysis code

    International Nuclear Information System (INIS)

    Dunn, F.E.

    1981-01-01

    The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source code processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long running code that only runs well if sufficient main memory is available

  8. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  9. Construction of Capacity Achieving Lattice Gaussian Codes

    KAUST Repository

    Alghamdi, Wael

    2016-04-01

    We propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3].

  10. Kneser-Hecke-operators in coding theory

    OpenAIRE

    Nebe, Gabriele

    2005-01-01

    The Kneser-Hecke-operator is a linear operator defined on the complex vector space spanned by the equivalence classes of a family of self-dual codes of fixed length. It maps a linear self-dual code $C$ over a finite field to the formal sum of the equivalence classes of those self-dual codes that intersect $C$ in a codimension 1 subspace. The eigenspaces of this self-adjoint linear operator may be described in terms of a coding-theory analogue of the Siegel $\\Phi $-operator.

  11. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual

  12. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    Full Text Available In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for coding theory than codes over classical finite fields.

  13. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
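
The idea can be made concrete with the (7,4) Hamming code standing in as the error-correcting code (the paper's setting is more general). A sparse 7-bit source block, treated as an error pattern, is compressed to its 3-bit syndrome and recovered as the coset leader:

```python
# Hamming (7,4) parity-check matrix; column j is the binary expansion of j+1.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(e):
    """Compress a 7-bit source block (viewed as an error pattern) to its
    3-bit syndrome s = H e over GF(2)."""
    return [sum(h * x for h, x in zip(row, e)) % 2 for row in H]

def decompress(s):
    """Recover the minimum-weight (coset leader) pattern with syndrome s.
    For the Hamming code, the syndrome read as a binary number is the
    position of the single nonzero bit (0 means the all-zero block)."""
    pos = s[0] * 4 + s[1] * 2 + s[2]
    e = [0] * 7
    if pos:
        e[pos - 1] = 1
    return e

# Any source block of weight <= 1 survives the 7-bit -> 3-bit round trip.
block = [0, 0, 0, 0, 1, 0, 0]
```

The distortion-free regime of the scheme corresponds to source blocks that are coset leaders; for a sparse binary memoryless source these dominate, which is why the compression rate can approach the source entropy.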

  14. Generation of Length Distribution, Length Diagram, Fibrogram, and Statistical Characteristics by Weight of Cotton Blends

    Directory of Open Access Journals (Sweden)

    B. Azzouz

    2007-01-01

    Full Text Available The textile fibre mixture, as a multicomponent blend of variable fibres, requires a proper method to predict the characteristics of the final blend. The length diagram and the fibrogram of cotton are generated. Then the length distribution, the length diagram, and the fibrogram of a blend of different categories of cotton are determined. The length distributions by weight of five different categories of cotton (Egyptian, USA (Pima), Brazilian, USA (Upland), and Uzbekistani) are measured by AFIS. From these distributions, the length distribution, the length diagram, and the fibrogram by weight of four binary blends are expressed. The length parameters of these cotton blends are calculated and their variations are plotted against the mass fraction x of one component in the blend. These calculated parameters are compared to those of real blends. Finally, the selection of the optimal blends using the linear programming method, based on the hypothesis that the cotton blend parameters vary linearly as a function of the component ratios, is proved insufficient.

  15. Bar Coding and Tracking in Pathology.

    Science.gov (United States)

    Hanna, Matthew G; Pantanowitz, Liron

    2016-03-01

    Bar coding and specimen tracking are intricately linked to pathology workflow and efficiency. In the pathology laboratory, bar coding facilitates many laboratory practices, including specimen tracking, automation, and quality management. Data obtained from bar coding can be used to identify, locate, standardize, and audit specimens to achieve maximal laboratory efficiency and patient safety. Variables that need to be considered when implementing and maintaining a bar coding and tracking system include assets to be labeled, bar code symbologies, hardware, software, workflow, and laboratory and information technology infrastructure as well as interoperability with the laboratory information system. This article addresses these issues, primarily focusing on surgical pathology. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Weighted-Bit-Flipping-Based Sequential Scheduling Decoding Algorithms for LDPC Codes

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2013-01-01

    Full Text Available Low-density parity-check (LDPC) codes can be applied in many different scenarios, such as video broadcasting and satellite communications. LDPC codes are commonly decoded by an iterative algorithm called belief propagation (BP) over the corresponding Tanner graph. The original BP updates all the variable nodes simultaneously, followed by all the check nodes simultaneously as well. We propose a sequential scheduling algorithm based on the weighted bit-flipping (WBF) algorithm for the sake of improving the convergence speed. Notably, WBF is a simple, low-complexity algorithm. We combine it with BP to obtain the advantages of both algorithms. The flipping function used in WBF is borrowed to determine the priority of scheduling. Simulation results show that it can provide a good tradeoff between FER performance and computational complexity for short-length LDPC codes.
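
For intuition, the unweighted core of bit-flipping decoding can be written in a few lines (WBF adds soft-reliability weights to the flipping function, which this hedged sketch omits). The (7,4) Hamming parity checks serve as a toy H:

```python
def bit_flip_decode(H, y, max_iter=20):
    """Plain bit-flipping: repeatedly flip the bit that participates in
    the largest number of unsatisfied parity checks."""
    y = list(y)
    n = len(y)
    for _ in range(max_iter):
        checks = [sum(H[m][j] * y[j] for j in range(n)) % 2
                  for m in range(len(H))]
        if not any(checks):
            return y                       # all parity checks satisfied
        # For each bit, count the failed checks it is involved in; this
        # count is the (unweighted) flipping function.
        votes = [sum(checks[m] for m in range(len(H)) if H[m][j])
                 for j in range(n)]
        y[votes.index(max(votes))] ^= 1    # flip the worst bit
    return y

# (7,4) Hamming parity-check matrix as a tiny example H, plus one of
# its codewords.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]
codeword = [1, 1, 0, 1, 0, 0, 1]
```

The flipping-function values (the `votes` list) are exactly the quantities the record reuses to prioritize which nodes a sequential BP schedule should update first.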

  17. Discrete Sparse Coding.

    Science.gov (United States)

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  18. ETR/ITER systems code

    Energy Technology Data Exchange (ETDEWEB)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L. (ed.)

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  19. ETR/ITER systems code

    International Nuclear Information System (INIS)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs

  20. Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.

    Science.gov (United States)

    Menicucci, Nicolas C

    2014-03-28

    A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.

  1. Circular codes revisited: a statistical approach.

    Science.gov (United States)

    Gonzalez, D L; Giannerini, S; Rosa, R

    2011-04-21

    In 1996 Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has provoked great interest and undergone rapid development. In this paper we discuss some theoretical issues related to the synchronization properties of coding sequences and circular codes, with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach in order to answer several questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C(3) maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability but, still, there exists great variability among sequences. Second, we focus on this code and explore the role played by the proportion of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored so as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes with reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine which graphs are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.

  3. Diagnostic Coding for Epilepsy.

    Science.gov (United States)

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  4. Coding of Neuroinfectious Diseases.

    Science.gov (United States)

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  5. Axial Length/Corneal Radius of Curvature Ratio and Refractive ...

    African Journals Online (AJOL)

    2017-12-05

    Dec 5, 2017 ... variously described as determined by the ocular biometric variables. There have been many studies on the relationship between refractive error and ocular axial length (AL), anterior chamber depth, corneal radius of curvature (CR), keratometric readings as well as other ocular biometric variables such as ...

  6. Evaluation of three coding schemes designed for improved data communication

    Science.gov (United States)

    Snelsire, R. W.

    1974-01-01

    Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which is a function of both the amount of data rejected and the error rate. The Viterbi maximum likelihood decoding algorithm as a decoding procedure is reviewed. This evaluation is obtained by simulating the system on a digital computer. Short constraint length rate 1/2 quick-look codes are studied, and their performance is compared to general nonsystematic codes.
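The Viterbi algorithm mentioned above can be sketched for a small rate-1/2 convolutional code. The code below uses constraint length 3 with octal generators (7, 5), chosen purely for illustration; it is not one of the specific codes under evaluation in the study.

```python
# Rate-1/2, constraint-length-3 convolutional code, octal generators (7, 5).
G = [0b111, 0b101]

def conv_encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                       # newest bit in the high position
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1                             # shift register advances
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision maximum-likelihood decoding over the 4 encoder states."""
    metrics = {0: (0, [])}                           # state -> (path metric, survivor bits)
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new = {}
        for state, (m, path) in metrics.items():
            for b in (0, 1):
                reg = (b << 2) | state
                expect = [bin(reg & g).count("1") % 2 for g in G]
                branch = m + sum(x != y for x, y in zip(r, expect))
                ns = reg >> 1
                if ns not in new or branch < new[ns][0]:
                    new[ns] = (branch, path + [b])   # keep the better survivor
        metrics = new
    return min(metrics.values(), key=lambda v: v[0])[1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = conv_encode(msg)
rx[0] ^= 1                                           # one channel bit error
print(viterbi_decode(rx, len(msg)) == msg)           # → True
```

With free distance 5, this toy code corrects isolated single bit errors, which is what the maximum-likelihood survivor search demonstrates.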

  7. Reconciliation of international administrative coding systems for comparison of colorectal surgery outcome.

    Science.gov (United States)

    Munasinghe, A; Chang, D; Mamidanna, R; Middleton, S; Joy, M; Penninckx, F; Darzi, A; Livingston, E; Faiz, O

    2014-07-01

    Significant variation in colorectal surgery outcomes exists between different countries. Better understanding of the sources of variable outcomes using administrative data requires alignment of differing clinical coding systems. We aimed to map similar diagnoses and procedures across administrative coding systems used in different countries. Administrative data were collected in a central database as part of the Global Comparators (GC) Project. In order to unify these data, a systematic translation of diagnostic and procedural codes was undertaken. Codes for colorectal diagnoses, resections, operative complications and reoperative interventions were mapped across the respective national healthcare administrative coding systems. Discharge data from January 2006 to June 2011 for patients who had undergone colorectal surgical resections were analysed to generate risk-adjusted models for mortality, length of stay, readmissions and reoperations. In all, 52 544 case records were collated from 31 institutions in five countries. Mapping of all the coding systems was achieved so that diagnosis and procedures from the participant countries could be compared. Using the aligned coding systems to develop risk-adjusted models, the 30-day mortality rate for colorectal surgery was 3.95% (95% CI 0.86-7.54), the 30-day readmission rate was 11.05% (5.67-17.61), the 28-day reoperation rate was 6.13% (3.68-9.66) and the mean length of stay was 14 (7.65-46.76) days. The linkage of international hospital administrative data that we developed enabled comparison of documented surgical outcomes between countries. This methodology may facilitate international benchmarking. Colorectal Disease © 2014 The Association of Coloproctology of Great Britain and Ireland.

  8. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    DEFF Research Database (Denmark)

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ³ n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.

  9. Numerical Simulations of Finite-Length Effects in Diocotron Modes

    Science.gov (United States)

    Mason, Grant W.; Spencer, Ross L.

    2000-10-01

    Over a decade ago Driscoll and Fine [C. F. Driscoll and K. S. Fine, Phys. Fluids B 2 (6), 1359, June 1990] reported experimental observations of an exponential instability in the self-shielded m=1 diocotron mode for an electron plasma confined in a Malmberg-Penning trap. More recently, Finn et al. [John M. Finn, Diego del-Castillo-Negrete and Daniel C. Barnes, Phys. Plasmas 6 (10), 3744, October 1999] have given a theoretical explanation of the instability as a finite-length end effect patterned after an analogy to the theory of shallow-water fluid vortices. However, in a test case selected for comparison, the growth rate in the experiment exceeds the theoretical value by a factor of two. We present results from a two-dimensional, finite-length drift-kinetic code and a fully three-dimensional particle-in-cell code written to explore details of finite-length effects in diocotron modes.

  10. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.

  11. Codes on the Klein quartic, ideals, and decoding

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    1987-01-01

    A sequence of codes with particular symmetries and with large rates compared to their minimal distances is constructed over the field GF(2^3). In the sequence there is, for instance, a code of length 21 and dimension 10 with minimal distance 9, and a code of length 21 and dimension 16 with minimal distance 3. The codes are constructed from algebraic geometry using the dictionary between coding theory and algebraic curves over finite fields established by Goppa. The curve used in the present work is the Klein quartic, which has the maximal number of rational points over GF(2^3) allowed by Serre's bound. The codes have descriptions as left ideals in the group algebra GF(2^3)[G]. This description allows for easy decoding: for instance, in the case of the single-error-correcting code of length 21 and dimension 16 with minimal distance 3, decoding is obtained by multiplication with an idempotent in the group algebra.

  12. Sperm length evolution in the fungus-growing ants

    DEFF Research Database (Denmark)

    Baer, B.; Dijkstra, M. B.; Mueller, U. G.

    2009-01-01

    ...fungus-growing ants, representing 9 of the 12 recognized genera, and mapped these onto the ant phylogeny. We show that average sperm length across species is highly variable and decreases with mature colony size in basal genera with singly mated queens, suggesting that sperm production or storage constraints affect the evolution of sperm length. Sperm length does not decrease further in multiply mating leaf-cutting ants, despite substantial further increases in colony size. In a combined analysis, sexual dimorphism explained 63.1% of the variance in sperm length between species. As colony size was not a significant predictor in this analysis, we conclude that sperm production trade-offs in males have been the major selective force affecting sperm length across the fungus-growing ants, rather than storage constraints in females. The relationship between sperm length and sexual dimorphism remained robust...

  13. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes the...
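The context-modeling idea at the heart of CABAC can be illustrated with a toy adaptive binary model. This is a deliberate simplification: CABAC proper uses a table-driven finite-state probability estimator and an integer arithmetic coder with renormalization, both omitted here in favor of a count-based estimate and an ideal (entropy) cost.

```python
import math

class Context:
    """Toy adaptive binary probability model (count-based, Laplace-smoothed).
    Stands in for CABAC's finite-state context model."""
    def __init__(self):
        self.counts = [1, 1]                 # pseudo-counts for bins 0 and 1
    def prob(self, binval):
        return self.counts[binval] / sum(self.counts)
    def update(self, binval):
        self.counts[binval] += 1

def ideal_code_length(bins):
    """Bits an ideal arithmetic coder would spend under this adaptive model."""
    ctx, total = Context(), 0.0
    for b in bins:
        total += -math.log2(ctx.prob(b))     # cost of coding b under the current model
        ctx.update(b)                        # then adapt the model
    return total

skewed = [0] * 30 + [1]                      # highly predictable bin string
balanced = [0, 1] * 15 + [0]                 # unpredictable bin string, same length
print(ideal_code_length(skewed) < ideal_code_length(balanced))  # → True
```

The adaptation is what lets skewed bin strings cost far less than one bit per bin, which is the source of CABAC's efficiency; the serial model update per bin is also the source of its throughput challenge.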

  14. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
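A minimal LT-style encode/decode cycle can be sketched as follows. The degree distribution here is ad hoc and there is no feedback channel; in the scheme above, feedback would additionally let the receiver steer the sender's degree distribution toward the symbols still missing.

```python
import random

def lt_encode(source, n_packets, rng):
    """Emit packets, each the XOR of a random subset of source symbols."""
    k = len(source)
    packets = []
    for _ in range(n_packets):
        deg = rng.choice([1, 2, 2, 3, 4])        # ad-hoc degree distribution
        idxs = frozenset(rng.sample(range(k), deg))
        val = 0
        for i in idxs:
            val ^= source[i]
        packets.append((idxs, val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: repeatedly resolve degree-1 packets."""
    pkts = [[set(i), v] for i, v in packets]
    decoded = {}
    progress = True
    while progress and len(decoded) < k:
        progress = False
        for p in pkts:
            idxs, val = p
            for i in [i for i in idxs if i in decoded]:
                idxs.discard(i)                  # peel off already-known symbols
                val ^= decoded[i]
            p[1] = val
            if len(idxs) == 1:                   # degree-1 packet reveals a symbol
                (i,) = idxs
                if i not in decoded:
                    decoded[i] = val
                    progress = True
    return decoded

rng = random.Random(7)
source = [3, 1, 4, 1, 5, 9, 2, 6]
decoded = lt_decode(lt_encode(source, 24, rng), len(source))
print(all(source[i] == v for i, v in decoded.items()))  # → True
```

Every symbol the peeling decoder recovers is exact; what randomness affects is only how many packets are needed before decoding completes, which is the overhead that feedback reduces.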

  15. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AVS2...

  16. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skills...

  17. TMRBAR power balance code for tandem mirror reactors

    International Nuclear Information System (INIS)

    Blackkfield, D.T.; Campbell, R.; Fenstermacher, M.; Bulmer, R.; Perkins, L.; Peng, Y.K.M.; Reid, R.L.; Wu, K.F.

    1984-01-01

    A revised version of the tandem mirror multi-point code TMRBAR developed at LLNL has been used to examine various reactor designs using MARS-like "c" coils. We solve 14 to 16 non-linear equations to obtain the densities, temperatures, plasma potential and magnetic field on axis at the cardinal points. Since ICRH, ECRH, and neutral beams may be used to stabilize the central cell, various combinations of rf and neutral beam powers may satisfy the physics. To select a desired set of physics parameters, we use nonlinear optimization techniques. With these routines, we minimize or maximize a physics variable subject to the physics constraints being satisfied. For example, for a given fusion power we may find the minimum length needed to have an ignited central cell, or the maximum fusion Q. Finally, we have coupled this physics model to the LLNL magnetics-MHD code. This code runs the EFFI magnetic field generator and uses TEBASCO to calculate 1-D MHD equilibria and stability.

  18. Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.

    Science.gov (United States)

    Palkowski, Marek; Bielecki, Wlodzimierz

    2018-01-15

    RNA folding is a compute-intensive core task of bioinformatics. Parallelizing and improving the code locality of such algorithms is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques, based on the transitive closure of dependence graphs, to generate parallel tiled code implementing Nussinov's RNA folding. These techniques fall within the iteration space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is defining a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining tile size). For this purpose, we generate two non-parametric tiled codes with different fixed tile sizes but the same code structure, and then derive a general affine model describing all integer factors appearing in the expressions of those codes. Using this model and the known integer factors present in those expressions (which form the left-hand side of the model), we solve for the unknown integers at each corresponding position in the fixed tiled code and replace the expressions containing integer factors with parametric ones. We then use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, within a given search space, the tile size and tile dimension maximizing target code performance.
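For reference, the recurrence whose affine loop nest is being tiled can be written as a plain sequential dynamic program. This is the basic maximum base-pair count form of Nussinov's recurrence (with a minimum hairpin loop length of 1), without any tiling or parallelism:

```python
# Watson-Crick pairs plus the G-U wobble pair
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=1):
    """Maximum number of base pairs in an RNA secondary structure for seq."""
    n = len(seq)
    if n == 0:
        return 0
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):            # increasing subsequence span j - i
        for i in range(n - span):
            j = i + span
            best = N[i + 1][j]                     # case: i left unpaired
            best = max(best, N[i][j - 1])          # case: j left unpaired
            if (seq[i], seq[j]) in PAIRS:
                best = max(best, N[i + 1][j - 1] + 1)   # case: i pairs with j
            for k in range(i + 1, j):              # case: bifurcation at k
                best = max(best, N[i][k] + N[k + 1][j])
            N[i][j] = best
    return N[0][n - 1]

print(nussinov("GGGAAACCC"))  # → 3  (the three G-C stem pairs)
```

The `N[i][j]` cell depends on cells with smaller span, which is exactly the non-uniform dependence pattern that makes this kernel an interesting target for the polyhedral tiling described above.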

  19. ETFOD: a point model physics code with arbitrary input

    International Nuclear Information System (INIS)

    Rothe, K.E.; Attenberger, S.E.

    1980-06-01

    ETFOD is a zero-dimensional code which solves a set of physics equations by minimization. The technique used differs from the usual approach in that the input is arbitrary: the user is supplied with a set of variables and specifies which of them are input (unchanging); the remaining variables become the output. Presently the code is being used for ETF reactor design studies. The code was written in a manner that allows easy modification of equations, variables, and physics calculations. The solution technique is presented along with hints for using the code.

  20. Quantum BCH Codes Based on Spectral Techniques

    International Nuclear Information System (INIS)

    Guo Ying; Zeng Guihua

    2006-01-01

    When the time variable in quantum signal processing is discrete, the Fourier transform exists on the vector space of n-tuples over the Galois field F_2, which plays an important role in the investigation of quantum signals. By using Fourier transforms, the ideas of quantum coding theory can be described in a setting much different from that seen thus far. Quantum BCH codes can be defined as codes whose quantum states have certain specified consecutive spectral components equal to zero, and the error-correcting ability is likewise described by the number of consecutive zeros. Moreover, the decoding of quantum codes can be described spectrally with more efficiency.

  1. Rate-Compatible Protograph LDPC Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.
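The protograph-to-code step can be sketched as a circulant "lifting": each edge of the small base graph is replaced by a Z x Z circulant permutation block, yielding the full LDPC parity-check matrix. The base matrix and shift values below are arbitrary illustrations, not the constructions of the patent:

```python
def lift(base, Z, shifts):
    """Expand a protograph base matrix into a parity-check matrix by
    replacing each nonzero entry with a Z x Z circulant permutation."""
    rows, cols = len(base) * Z, len(base[0]) * Z
    H = [[0] * cols for _ in range(rows)]
    for r, row in enumerate(base):
        for c, b in enumerate(row):
            if b:                                   # place a shifted identity block
                s = shifts[(r, c)]
                for k in range(Z):
                    H[r * Z + k][c * Z + (k + s) % Z] = 1
    return H

base = [[1, 1, 0],
        [0, 1, 1]]                                  # hypothetical 2x3 protograph
shifts = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 2): 0}   # made-up circulant shifts
H = lift(base, Z=4, shifts=shifts)                  # 8 x 12 parity-check matrix
```

In a real design the shifts (and any repeated edges in the protograph) are chosen to avoid short cycles and to preserve the base graph's decoding-threshold and minimum-distance-growth properties, which is what the search described above optimizes.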

  2. Influence of recording length on reporting status

    DEFF Research Database (Denmark)

    Biltoft-Jensen, Anja Pia; Matthiessen, Jeppe; Fagt, Sisse

    2009-01-01

    Objective: To investigate the impact of recording length on reporting status, expressed as the ratio between energy intake and calculated basal metabolic rate (EI/BMR), the percentage of consumers of selected food items, and the number of reported food items per meal and eating occasions per day. Methods: Data came from two studies, a validation study and the Danish National Survey of Dietary Habits and Physical Activity 2000-2002. Both studies had a cross-sectional design. Volunteers and participants completed a pre-coded food diary every day for 7 consecutive days. BMR was predicted from equations. Results: In the validation study, EI/BMR was significantly lower on the 1st, 2nd and 3rd consecutive recording days compared to recording days 4-7 (P ... food items...

  3. Variability through the Eyes of the Programmer

    DEFF Research Database (Denmark)

    Melo, Jean; Batista Narcizo, Fabricio; Hansen, Dan Witzner

    2017-01-01

    Preprocessor directives (#ifdefs) are often used to implement compile-time variability, despite the critique that they increase complexity, hamper maintainability, and impair code comprehensibility. Previous studies have shown that the time of bug finding increases linearly with variability. However...

  4. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress that the workgroup on Low-Density Parity-Check (LDPC) for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  5. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. 
I also present...

  6. Variability and transmission by Aphis glycines of North American and Asian Soybean mosaic virus isolates.

    Science.gov (United States)

    Domier, L L; Latorre, I J; Steinlage, T A; McCoppin, N; Hartman, G L

    2003-10-01

    The variability of North American and Asian strains and isolates of Soybean mosaic virus was investigated. First, polymerase chain reaction (PCR) products representing the coat protein (CP)-coding regions of 38 SMVs were analyzed for restriction fragment length polymorphisms (RFLP). Second, the nucleotide and predicted amino acid sequence variability of the P1-coding region of 18 SMVs and the helper component/protease (HC/Pro) and CP-coding regions of 25 SMVs were assessed. The CP nucleotide and predicted amino acid sequences were the most similar and predicted phylogenetic relationships similar to those obtained from RFLP analysis. Neither RFLP nor sequence analyses of the CP-coding regions grouped the SMVs by geographical origin. The P1 and HC/Pro sequences were more variable and separated the North American and Asian SMV isolates into two groups similar to previously reported differences in pathogenic diversity of the two sets of SMV isolates. The P1 region was the most informative of the three regions analyzed. To assess the biological relevance of the sequence differences in the HC/Pro and CP coding regions, the transmissibility of 14 SMV isolates by Aphis glycines was tested. All field isolates of SMV were transmitted efficiently by A. glycines, but the laboratory isolates analyzed were transmitted poorly. The amino acid sequences from most, but not all, of the poorly transmitted isolates contained mutations in the aphid transmission-associated DAG and/or KLSC amino acid sequence motifs of CP and HC/Pro, respectively.

  7. Correcting length-frequency distributions for imperfect detection

    Science.gov (United States)

    Breton, André R.; Hawkins, John A.; Winkelman, Dana L.

    2013-01-01

    Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentage of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data
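The core of the correction is Horvitz-Thompson style: divide each length-class count by its estimated capture probability. The logistic coefficients below are invented for illustration only; the study estimated capture probabilities with a Huggins mark-recapture model including length, year, pass, and behavior effects.

```python
import math

def capture_prob(length_mm, b0=-3.0, b1=0.015):
    """Hypothetical length-dependent capture probability (logistic in length).
    The coefficients are illustrative, not estimates from the study."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * length_mm)))

def adjust_length_frequency(counts_by_class):
    """Horvitz-Thompson adjustment: raw count divided by capture probability."""
    return {mid: count / capture_prob(mid)
            for mid, count in counts_by_class.items()}

# raw counts keyed by length-class midpoint (mm), made up for the example
raw = {105: 40, 155: 120, 205: 90, 255: 30}
adjusted = adjust_length_frequency(raw)
```

Because capture probability rises with length here, the smallest length-classes receive the largest upward correction, which mirrors the pattern of negative bias in the smallest classes reported above.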

  8. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64-bit Mac, Linux, and Windows.

  9. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Among the earliest discovered codes that approach the Shannon limit of the channel were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding.
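To make the role of the parity-check matrix concrete, here is a sketch using the small [7,4] Hamming code. Its matrix is not low-density, but the mechanism is the same one the record describes: multiplying a received word by H gives a syndrome, and a nonzero syndrome drives the decoder.

```python
# Parity-check matrix H of the [7,4] Hamming code: column j (1-indexed)
# is the binary representation of j.  The syndrome of a received word
# therefore directly locates a single bit error.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(word):
    """H * word (mod 2): zero exactly when `word` is a codeword."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

received = [0, 0, 0, 0, 0, 0, 0]        # the all-zero word is always a codeword
received[4] = 1                          # channel flips bit 5 (1-indexed)

s = syndrome(received)                   # s == [1, 0, 1]
error_pos = s[0] * 4 + s[1] * 2 + s[2]   # binary 101 -> error at position 5
```

LDPC decoding uses the same H·r = 0 check, but with a sparse H and iterative message passing instead of a syndrome table.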

  10. Manually operated coded switch

    International Nuclear Information System (INIS)

    Barnette, J.H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried, and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made

  11. Coding in Muscle Disease.

    Science.gov (United States)

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  12. Low Complexity Tail-Biting Trellises for Some Extremal Self-Dual Codes

    OpenAIRE

    Olocco , Grégory; Otmani , Ayoub

    2002-01-01

    International audience; We obtain low complexity tail-biting trellises for some extremal self-dual codes for various lengths and fields such as the [12,6,6] ternary Golay code and a [24,12,8] Hermitian self-dual code over GF(4). These codes are obtained from a particular family of cyclic Tanner graphs called necklace factor graphs.

  13. Recent advances in coding theory for near error-free communications

    Science.gov (United States)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
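As a toy illustration of convolutional encoding (far smaller than the constraint-length-15 Galileo code the record mentions), here is a rate-1/2, constraint-length-3 encoder with the textbook (7,5) octal generators:

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder: each input bit is shifted into a
    k-bit register, and two output bits are formed as the parity of the
    register masked by each generator polynomial."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

# Encoding [1, 0, 1] from the all-zero state yields 11 10 00:
encoded = conv_encode([1, 0, 1])   # [1, 1, 1, 0, 0, 0]
```

A Viterbi decoder such as the "big Viterbi decoder" mentioned above would search the trellis of this state machine for the most likely input sequence.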

  14. On the subfield subcodes of Hermitian codes

    DEFF Research Database (Denmark)

    Pinero, Fernando; Janwa, Heeralal

    2014-01-01

    We present a fast algorithm using Gröbner basis to compute the dimensions of subfield subcodes of Hermitian codes. With these algorithms we are able to compute the exact values of the dimension of all subfield subcodes up to q ≤ 32 and length up to 2^15. We show that some of the subfield subcodes ...

  15. Symbol synchronization in convolutionally coded systems

    Science.gov (United States)

    Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.

    1979-01-01

    Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.
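The inversion scheme itself is simple to sketch (the function name is illustrative): every second channel symbol is inverted, so a long constant run from the encoder becomes an alternating pattern with a transition at every symbol. The record's point is that certain convolutional encoder outputs can nevertheless cancel the inversion and still produce arbitrarily long transition-free runs.

```python
def alternate_symbol_inversion(symbols):
    """Invert every second binary channel symbol, giving the receiver's
    symbol synchronizer transitions to lock onto."""
    return [s ^ 1 if i % 2 else s for i, s in enumerate(symbols)]

# An all-zero run of 8 symbols gains a transition at every symbol:
out = alternate_symbol_inversion([0] * 8)   # [0, 1, 0, 1, 0, 1, 0, 1]
```

Conversely, an encoder output that already alternates would be mapped to a constant run, which is exactly the pathological case the record's bound characterizes.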

  16. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  17. Turbulence closure for mixing length theories

    Science.gov (United States)

    Jermyn, Adam S.; Lesaffre, Pierre; Tout, Christopher A.; Chitre, Shashikumar M.

    2018-05-01

    We present an approach to turbulence closure based on mixing length theory with three-dimensional fluctuations against a two-dimensional background. This model is intended to be rapidly computable for implementation in stellar evolution software and to capture a wide range of relevant phenomena with just a single free parameter, namely the mixing length. We incorporate magnetic, rotational, baroclinic, and buoyancy effects exactly within the formalism of linear growth theories with non-linear decay. We treat differential rotation effects perturbatively in the corotating frame using a novel controlled approximation, which matches the time evolution of the reference frame to arbitrary order. We then implement this model in an efficient open source code and discuss the resulting turbulent stresses and transport coefficients. We demonstrate that this model exhibits convective, baroclinic, and shear instabilities as well as the magnetorotational instability. It also exhibits non-linear saturation behaviour, and we use this to extract the asymptotic scaling of various transport coefficients in physically interesting limits.

  18. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of coders for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  19. Survey of nuclear fuel-cycle codes

    International Nuclear Information System (INIS)

    Thomas, C.R.; de Saussure, G.; Marable, J.H.

    1981-04-01

    A two-month survey of nuclear fuel-cycle models was undertaken. This report presents the information forthcoming from the survey. Of the nearly thirty codes reviewed in the survey, fifteen of these codes have been identified as potentially useful in fulfilling the tasks of the Nuclear Energy Analysis Division (NEAD) as defined in their FY 1981-1982 Program Plan. Six of the fifteen codes are given individual reviews. The individual reviews address such items as the funding agency, the author and organization, the date of completion of the code, adequacy of documentation, computer requirements, history of use, variables that are input and forecast, type of reactors considered, part of fuel cycle modeled and scope of the code (international or domestic, long-term or short-term, regional or national). The report recommends that the Model Evaluation Team perform an evaluation of the EUREKA uranium mining and milling code

  20. Cycle length maximization in PWRs using empirical core models

    International Nuclear Information System (INIS)

    Okafor, K.C.; Aldemir, T.

    1987-01-01

    The problem of maximizing cycle length in nuclear reactors through optimal fuel and poison management has been addressed by many investigators. An often-used neutronic modeling technique is to find correlations between the state and control variables to describe the response of the core to changes in the control variables. In this study, a set of linear correlations, generated by two-dimensional diffusion-depletion calculations, is used to find the enrichment distribution that maximizes cycle length for the initial core of a pressurized water reactor (PWR). These correlations (a) incorporate the effect of composition changes in all the control zones on a given fuel assembly and (b) are valid for a given range of control variables. The advantage of using such correlations is that the cycle length maximization problem can be reduced to a linear programming problem

  1. LDPC Codes with Minimum Distance Proportional to Block Size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. 
Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low

  2. TERRA Expression Levels Do Not Correlate With Telomere Length and Radiation Sensitivity in Human Cancer Cell Lines

    Directory of Open Access Journals (Sweden)

    Alexandra Smirnova

    2013-05-01

    Full Text Available Mammalian telomeres are transcribed into long non-coding telomeric RNA molecules (TERRA) that seem to play a role in the maintenance of telomere stability. In human cells, CpG island promoters drive TERRA transcription and are regulated by methylation. It was suggested that the amount of TERRA may be related to telomere length. To test this hypothesis we measured telomere length and TERRA levels in single clones isolated from five human cell lines: HeLa (cervical carcinoma), BRC-230 (breast cancer), AKG and GK2 (gastric cancers) and GM847 (SV40 immortalized skin fibroblasts). We observed great clonal heterogeneity both in TRF (Terminal Restriction Fragment) length and in TERRA levels. However, these two parameters did not correlate with each other. Moreover, cell survival to γ-rays did not show a significant variation among the clones, suggesting that, in this cellular system, the intra-population variability in telomere length and TERRA levels does not influence sensitivity to ionizing radiation. This conclusion was supported by the observation that in a cell line in which telomeres were greatly elongated by the ectopic expression of telomerase, TERRA expression levels and radiation sensitivity were similar to the parental HeLa cell line.

  3. Does length or neighborhood size cause the word length effect?

    Science.gov (United States)

    Jalbert, Annie; Neath, Ian; Surprenant, Aimée M

    2011-10-01

    Jalbert, Neath, Bireta, and Surprenant (2011) suggested that past demonstrations of the word length effect, the finding that words with fewer syllables are recalled better than words with more syllables, included a confound: The short words had more orthographic neighbors than the long words. The experiments reported here test two predictions that would follow if neighborhood size is a more important factor than word length. In Experiment 1, we found that concurrent articulation removed the effect of neighborhood size, just as it removes the effect of word length. Experiment 2 demonstrated that this pattern is also found with nonwords. For Experiment 3, we factorially manipulated length and neighborhood size, and found only effects of the latter. These results are problematic for any theory of memory that includes decay offset by rehearsal, but they are consistent with accounts that include a redintegrative stage that is susceptible to disruption by noise. The results also confirm the importance of lexical and linguistic factors on memory tasks thought to tap short-term memory.

  4. Keeping disease at arm's length

    DEFF Research Database (Denmark)

    Lassen, Aske Juul

    2015-01-01

    active ageing change everyday life with chronic disease, and how do older people combine an active life with a range of chronic diseases? The participants in the study use activities to keep their diseases at arm’s length, and this distancing of disease at the same time enables them to engage in social...... and physical activities at the activity centre. In this way, keeping disease at arm’s length is analysed as an ambiguous health strategy. The article shows the importance of looking into how active ageing is practised, as active ageing seems to work well in the everyday life of the older people by not giving...... emphasis to disease. The article is based on ethnographic fieldwork and uses vignettes of four participants to show how they each keep diseases at arm’s length....

  5. CEBAF Upgrade Bunch Length Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Ahmad, Mahmoud [Old Dominion Univ., Norfolk, VA (United States)

    2016-05-01

    Many accelerators use short electron bunches and measuring the bunch length is important for efficient operations. CEBAF needs a suitable bunch length because bunches that are too long will result in beam interruption to the halls due to excessive energy spread and beam loss. In this work, bunch length is measured by invasive and non-invasive techniques at different beam energies. Two new measurement techniques have been commissioned; a harmonic cavity showed good results compared to expectations from simulation, and a real time interferometer is commissioned and first checkouts were performed. Three other techniques were used for measurements and comparison purposes without modifying the old procedures. Two of them can be used when the beam is not compressed longitudinally while the other one, the synchrotron light monitor, can be used with compressed or uncompressed beam.

  6. The determinants of IPO firm prospectus length in Africa

    Directory of Open Access Journals (Sweden)

    Bruce Hearn

    2013-04-01

    Full Text Available This paper studies the differential impact on IPO firm listing prospectus length from increasing proportions of foreign directors from civil as opposed to common law societies and social elites. Using a unique hand-collected and comprehensive sample of 165 IPO firms from across 18 African countries, the evidence suggests that increasing proportions of directors from civil code law countries is associated with shorter prospectuses, while the opposite is true for their common law counterparts. Furthermore, increasing proportions of directors drawn from elevated social positions in indigenous society is related to increasing prospectus length in North Africa while being insignificant in SSA.

  7. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...
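The re-interpretation mentioned here views a Reed-Solomon codeword as the list of values of a low-degree polynomial at fixed points of a finite field; algebraic-geometric codes then generalize the point set to points on a curve. A toy sketch over the small prime field GF(7):

```python
# A Reed-Solomon code as an evaluation code over the toy field GF(7).
P = 7  # field size (prime, so arithmetic mod P is a field)

def rs_encode(msg, points):
    """Evaluate the message polynomial (coefficients in `msg`) at each
    evaluation point; the list of values is the codeword."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P for x in points]

# A k=2 message encoded at n=6 distinct points: two distinct degree-<2
# polynomials agree in at most 1 point, so codewords differ in at least
# n - k + 1 = 5 positions (the Singleton bound, met with equality).
codeword = rs_encode([3, 1], [0, 1, 2, 3, 4, 5])   # polynomial 3 + x
# codeword == [3, 4, 5, 6, 0, 1]
```

Replacing the evaluation points with the rational points of a curve over a larger field is what yields the Tsfasman-Vladut-Zink codes the record describes.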

  8. Kondo length in bosonic lattices

    Science.gov (United States)

    Giuliano, Domenico; Sodano, Pasquale; Trombettoni, Andrea

    2017-09-01

    Motivated by the fact that the low-energy properties of the Kondo model can be effectively simulated in spin chains, we study the realization of the effect with bond impurities in ultracold bosonic lattices at half filling. After presenting a discussion of the effective theory and of the mapping of the bosonic chain onto a lattice spin Hamiltonian, we provide estimates for the Kondo length as a function of the parameters of the bosonic model. We point out that the Kondo length can be extracted from the integrated real-space correlation functions, which are experimentally accessible quantities in experiments with cold atoms.

  9. Continuous lengths of oxide superconductors

    Science.gov (United States)

    Kroeger, Donald M.; List, III, Frederick A.

    2000-01-01

    A layered oxide superconductor prepared by depositing a superconductor precursor powder on a continuous length of a first substrate ribbon. A continuous length of a second substrate ribbon is overlaid on the first substrate ribbon. Sufficient pressure is applied to form a bound layered superconductor precursor powder between the first substrate ribbon and the second substrate ribbon. The layered superconductor precursor is then heat treated to establish the oxide superconducting phase. The layered oxide superconductor has a smooth interface between the substrate and the oxide superconductor.

  10. Summary of neutron scattering lengths

    International Nuclear Information System (INIS)

    Koester, L.

    1981-12-01

    All available neutron-nuclei scattering lengths are collected together with their error bars in a uniform way. Bound scattering lengths are given for the elements, the isotopes, and the various spin-states. They are discussed in the sense of their use as basic parameters for many investigations in the field of nuclear and solid state physics. The data bank is available on magnetic tape, too. Recommended values and a map of these data serve for an uncomplicated use of these quantities. (orig.)

  11. Overview of bunch length measurements

    International Nuclear Information System (INIS)

    Lumpkin, A. H.

    1999-01-01

    An overview of particle and photon beam bunch length measurements is presented in the context of free-electron laser (FEL) challenges. Particle-beam peak current is a critical factor in obtaining adequate FEL gain for both oscillators and self-amplified spontaneous emission (SASE) devices. Since measurement of charge is a standard measurement, the bunch length becomes the key issue for ultrashort bunches. Both time-domain and frequency-domain techniques are presented in the context of using electromagnetic radiation over eight orders of magnitude in wavelength. In addition, the measurement of microbunching in a micropulse is addressed

  12. Single integrated device for optical CDMA code processing in dual-code environment.

    Science.gov (United States)

    Huang, Yue-Kai; Glesk, Ivan; Greiner, Christoph M; Iazkov, Dmitri; Mossberg, Thomas W; Wang, Ting; Prucnal, Paul R

    2007-06-11

    We report on the design, fabrication and performance of a matching integrated optical CDMA encoder-decoder pair based on holographic Bragg reflector technology. Simultaneous encoding/decoding operation of two multiple wavelength-hopping time-spreading codes was successfully demonstrated and shown to support two error-free OCDMA links at OC-24. A double-pass scheme was employed in the devices to enable the use of longer code length.

  13. Optimal interference code based on machine learning

    Science.gov (United States)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, using the m-sequence as a case study. Drawing on coding theory, we introduce jamming methods and simulate the interference effect and its probability model in MATLAB. Based on the length of time the adversary spends decoding, we derive an optimal formula and optimal coefficients through machine learning, yielding a new optimal interference code. First, in the recognition phase, this study judges the effect of interference by simulating the decoding time of the laser seeker. Then, in the tracking phase, laser active deception jamming is used to simulate the interference process; this study adopts laser active deception jamming throughout. To improve the performance of the interference, the model is simulated in MATLAB. We determine the least number of pulse intervals that must be received, from which we deduce the precise interval number of the laser pointer for m-sequence encoding. To find the shortest spacing, we apply the greatest common divisor method. Then, combining this with the coding regularity found earlier, we restore the pulse intervals of the pseudo-random code already received. Finally, we can control the timing of the laser interference, obtain the optimal interference code, and increase the probability of successful interference.
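The m-sequences analyzed in the record are generated by linear-feedback shift registers; a minimal sketch (the tap positions below are one concrete maximal-length choice, not anything specific to the paper):

```python
def m_sequence(taps, n, length=None):
    """Generate a maximal-length (m-)sequence from an n-bit Fibonacci
    LFSR seeded with all ones.  `taps` lists the 1-indexed register
    positions XORed into the feedback.  With taps [4, 3] and n = 4 the
    feedback polynomial is primitive, so the register cycles through
    all 2**n - 1 nonzero states before repeating."""
    state = [1] * n
    length = length or (2 ** n - 1)
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

seq = m_sequence([4, 3], 4)   # period 15, containing 8 ones and 7 zeros
```

The near-balance of ones and zeros over one period is part of what makes m-sequences look noise-like to a receiver.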

  14. Binary codes with impulse autocorrelation functions for dynamic experiments

    International Nuclear Information System (INIS)

    Corran, E.R.; Cummins, J.D.

    1962-09-01

    A series of binary codes exist which have autocorrelation functions approximating to an impulse function. Signals whose behaviour in time can be expressed by such codes have spectra which are 'whiter' over a limited bandwidth and for a finite time than signals from a white noise generator. These codes are used to determine system dynamic responses using the correlation technique. Programmes have been written to compute codes of arbitrary length and to compute 'cyclic' autocorrelation and cross-correlation functions. Complete listings of these programmes are given, and a code of 1019 bits is presented. (author)
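The impulse-like property can be checked directly. A sketch using a 7-chip m-sequence in ±1 form: the cyclic autocorrelation is N at zero shift and a constant −1 at every other shift, which is as close to an impulse as a binary sequence of that length allows.

```python
def cyclic_autocorrelation(code):
    """Cyclic autocorrelation of a +/-1 sequence: sum of products of the
    sequence with its cyclic shift, for every shift."""
    n = len(code)
    return [sum(code[i] * code[(i + k) % n] for i in range(n)) for k in range(n)]

# A 7-chip m-sequence mapped to +/-1: peak 7 at zero shift, -1 elsewhere.
seq = [+1, +1, +1, -1, +1, -1, -1]
acf = cyclic_autocorrelation(seq)
# acf == [7, -1, -1, -1, -1, -1, -1]
```

Driving a system with such a sequence and cross-correlating input with output approximates the impulse response, which is the correlation technique the record applies.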

  15. Spike Code Flow in Cultured Neuronal Networks.

    Science.gov (United States)

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei

    2016-01-01

    We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted the short codes from spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the code flow of "1101" and "1011," which are typical pseudorandom sequences such as those often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.

  16. Spike Code Flow in Cultured Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Shinichi Tamura

    2016-01-01

    Full Text Available We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted the short codes from spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the code flow of “1101” and “1011,” which are typical pseudorandom sequences such as those often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the “maximum cross-correlations” among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.
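A sketch of a maximum-cross-correlation flow measure of the kind described in the two records above (the lag search and the simple geometric-mean normalization here are illustrative, not necessarily the paper's exact definition):

```python
import math

def max_cross_correlation(a, b, max_lag=8):
    """Maximum normalized cross-correlation between two binary spike
    trains over a range of lags.  A large value at some lag suggests
    activity flowing from one electrode to the other with that delay."""
    n = len(a)
    norm = math.sqrt(sum(a) * sum(b)) or 1.0   # avoid division by zero
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        c = sum(a[i] * b[i + lag] for i in range(n) if 0 <= i + lag < n)
        best = max(best, c / norm)
    return best

# A spike train and its 3-step delayed copy correlate perfectly at the right lag:
train = [0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
delayed = [0] * 3 + train[:-3]
```

Taking, for each electrode, the neighbor with the largest such value gives the direction-of-flow maps the records describe.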

  17. Diet, nutrition and telomere length.

    Science.gov (United States)

    Paul, Ligi

    2011-10-01

    The ends of human chromosomes are protected by DNA-protein complexes termed telomeres, which prevent the chromosomes from fusing with each other and from being recognized as a double-strand break by DNA repair proteins. Due to the incomplete replication of linear chromosomes by DNA polymerase, telomeric DNA shortens with repeated cell divisions until the telomeres reach a critical length, at which point the cells enter senescence. Telomere length is an indicator of biological aging, and dysfunction of telomeres is linked to age-related pathologies like cardiovascular disease, Parkinson disease, Alzheimer disease and cancer. Telomere length has been shown to be positively associated with nutritional status in human and animal studies. Various nutrients influence telomere length potentially through mechanisms that reflect their role in cellular functions including inflammation, oxidative stress, DNA integrity, DNA methylation and activity of telomerase, the enzyme that adds the telomeric repeats to the ends of the newly synthesized DNA. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Tube Length and Water Flow

    Directory of Open Access Journals (Sweden)

    Ben Ruktantichoke

    2011-06-01

    Full Text Available In this study water flowed through a straight horizontal plastic tube placed at the bottom of a large tank of water. The effect of changing the length of tubing on the velocity of flow was investigated. It was found that the Hagen-Poiseuille Equation is valid when the effect of water entering the tube is accounted for.
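The Hagen-Poiseuille relation the study validates can be computed directly; a sketch with illustrative values (the function name and defaults are not from the record):

```python
import math

def poiseuille_flow(radius, length, delta_p, mu=1.0e-3):
    """Volumetric flow rate Q = pi * r**4 * dP / (8 * mu * L) for
    laminar flow through a tube of radius r and length L under a
    pressure difference dP.  mu defaults to water's dynamic viscosity
    (about 1.0e-3 Pa*s at 20 C)."""
    return math.pi * radius ** 4 * delta_p / (8 * mu * length)

# Doubling the tube length at a fixed pressure head halves the flow rate:
q1 = poiseuille_flow(2e-3, 0.5, 100.0)
q2 = poiseuille_flow(2e-3, 1.0, 100.0)
# q2 == q1 / 2
```

The entrance-effect correction mentioned in the record amounts to an additional pressure drop where water accelerates into the tube, which matters most for short tubes.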

  19. Finite length Taylor Couette flow

    Science.gov (United States)

    Streett, C. L.; Hussaini, M. Y.

    1987-01-01

    Axisymmetric numerical solutions of the unsteady Navier-Stokes equations for flow between concentric rotating cylinders of finite length are obtained by a spectral collocation method. These representative results pertain to two-cell/one-cell exchange process, and are compared with recent experiments.

  20. Influence of Code Size Variation on the Performance of 2D Hybrid ZCC/MD in OCDMA System

    Directory of Open Access Journals (Sweden)

    Matem Rima

    2018-01-01

    Full Text Available Several two-dimensional OCDMA codes have been developed to overcome problems in optical networks: enhancing cardinality, suppressing Multiple Access Interference (MAI), and mitigating Phase Induced Intensity Noise (PIIN). This paper proposes a new 2D hybrid ZCC/MD code combining 1D ZCC spectral encoding, with code length M, and 1D MD spatial spreading, with code length N. The spatial spreading code length N offers good cardinality, so according to the numerical results it is the main factor enhancing the performance of the system compared to the spectral code length M.

  1. COMPARATIVE ANALYSIS OF THE METHODS FOR EVALUATING THE EFFECTIVE LENGTH OF COLUMNS

    OpenAIRE

    Paschal Chimeremeze Chiadighikaobi

    2017-01-01

    This article looks into the effective length of columns using different methods. The codes used in this article are those of the AISC (American Institute of Steel Construction) and of AS 4100 (the Australian steel code). A conclusion was drawn after investigating a frame using three different methods. Solved Exercise 6 (the LeMessurier method) was investigated using the same frame with different dimensions. Further analysis and investigation will be done using Java code to analyze the frames.

  2. COMPARATIVE ANALYSIS OF THE METHODS FOR EVALUATING THE EFFECTIVE LENGTH OF COLUMNS

    Directory of Open Access Journals (Sweden)

    Paschal Chimeremeze Chiadighikaobi

    2017-08-01

    Full Text Available This article looks into the effective length of columns using different methods. The codes used in this article are those of the AISC (American Institute of Steel Construction) and of AS 4100 (the Australian steel code). A conclusion was drawn after investigating a frame using three different methods. Solved Exercise 6 (the LeMessurier method) was investigated using the same frame with different dimensions. Further analysis and investigation will be done using Java code to analyze the frames.
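Once an effective length factor K is chosen by any of these methods, it enters the Euler buckling load; a sketch with illustrative values (not tied to a specific code provision in the articles above):

```python
import math

def euler_buckling_load(E, I, L, K):
    """Elastic critical load P_cr = pi**2 * E * I / (K * L)**2, where K
    is the effective length factor estimated by the methods above, E the
    elastic modulus, I the second moment of area, and L the column length."""
    return math.pi ** 2 * E * I / (K * L) ** 2

# Theoretical K for ideal end conditions: 1.0 pinned-pinned, 0.5
# fixed-fixed, 2.0 fixed-free.  Doubling K*L cuts the load fourfold:
p_pinned = euler_buckling_load(200e9, 8.0e-6, 3.0, 1.0)
p_fixed_free = euler_buckling_load(200e9, 8.0e-6, 3.0, 2.0)
# p_fixed_free == p_pinned / 4
```

The different code methods (AISC alignment charts, AS 4100, LeMessurier) differ essentially in how they estimate K for columns restrained by a real frame rather than ideal end conditions.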

  3. Performance of Product Codes and Related Structures with Iterated Decoding

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2011-01-01

    Several modifications of product codes have been suggested as standards for optical networks. We show that the performance exhibits a threshold that can be estimated from a result about random graphs. For moderate input bit error probabilities, the output error rates for codes of finite length can...

  4. Bit-Wise Arithmetic Coding For Compression Of Data

    Science.gov (United States)

    Kiely, Aaron

    1996-01-01

    Bit-wise arithmetic coding is data-compression scheme intended especially for use with uniformly quantized data from source with Gaussian, Laplacian, or similar probability distribution function. Code words of fixed length, and bits treated as being independent. Scheme serves as means of progressive transmission or of overcoming buffer-overflow or rate constraint limitations sometimes arising when data compression used.

  5. Variabilidade do risco do tempo de permanência ajustado para lactentes de muito baixo peso ao nascer entre centros da Neocosur South American Network Center variability in risk of adjusted length of stay for very low birth weight infants in the Neocosur South American Network

    Directory of Open Access Journals (Sweden)

    Guillermo Marshall

    2012-12-01

    Full Text Available OBJECTIVES: To develop a prediction model for hospital length of stay (LOS) in very low birth weight (VLBW) infants, and to compare this outcome among 20 centers of a neonatal network, since LOS is used as a measure of quality of care for VLBW infants. METHODS: We used prospectively collected data from 7,599 infants with birth weights of 500-1,500 g born between 2001 and 2008. The Cox regression model was employed to develop two prediction models: an early model based upon variables present at birth, and a late one that adds relevant morbidities of the first 30 days of life. RESULTS: The median LOS estimated and adjusted from birth was 59 days, and 28 days after the 30-day survival point. There was a high correlation between the models (r = 0.92). Expected and observed LOS varied widely among centers, even after correction for relevant morbidities after 30 days. The median LOS (range: 45-70 days) and postconceptional age at discharge (range: 36.4-39.9 weeks) reflect high variability among centers. CONCLUSION: A simple model with factors present at birth can predict the LOS of a VLBW infant in a neonatal network. Considerable variability in LOS was observed among neonatal intensive care units. We speculate that these results stem from differences in practices among centers.

  6. Optimization of fracture length in gas/condensate reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Mohan, J.; Sharma, M.M.; Pope, G.A. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Texas Univ., Austin, TX (United States)

    2006-07-01

    A common practice that improves the productivity of gas-condensate reservoirs is hydraulic fracturing. Two important variables that determine the effectiveness of hydraulic fractures are fracture length and fracture conductivity. Although there are no simple guidelines for the optimization of fracture length and the factors that affect it, it is preferable to have an optimum fracture length for a given proppant volume in order to maximize productivity. An optimization study was presented in which fracture length was estimated at wells where productivity was maximized. An analytical expression that takes into account non-Darcy flow and condensate banking was derived. This paper also reviewed the hydraulic fracturing process and discussed previous simulation studies that investigated the effects of well spacing and fracture length on well productivity in low permeability gas reservoirs. The compositional simulation study and results and discussion were also presented. The analytical expression for optimum fracture length, analytical expression with condensate dropout, and equations for the optimum fracture length with non-Darcy flow in the fracture were included in an appendix. The Computer Modeling Group's GEM simulator, an equation-of-state compositional simulator, was used in this study. It was concluded that for cases with non-Darcy flow, the optimum fracture lengths are lower than those obtained with Darcy flow. 18 refs., 5 tabs., 22 figs., 1 appendix.

  7. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork, Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN......, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important...... to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  8. Coding for optical channels

    CERN Document Server

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  9. SEVERO code - user's manual

    International Nuclear Information System (INIS)

    Sacramento, A.M. do.

    1989-01-01

    This user's manual contains all the necessary information concerning the use of the SEVERO code. This computer code deals with the statistics of extremes: extreme winds, extreme precipitation, and flooding hazard risk analysis. (A.C.A.S.)

  10. Synthesizing Certified Code

    OpenAIRE

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...

  11. FERRET data analysis code

    International Nuclear Information System (INIS)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples
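
    The core operation described above — combining related measurements with a proper treatment of uncertainties and correlations — can be illustrated with a minimal least-squares sketch. The 2x2 combination below is illustrative only (FERRET itself is far more general); the function name and inputs are hypothetical.

    ```python
    def combine_two(x, cov):
        """Least-squares combination of two correlated measurements of the
        same quantity: weights w = C^-1 1 / (1^T C^-1 1), combined
        variance 1 / (1^T C^-1 1). Restricted to the 2x2 case."""
        (a, b), (c, d) = cov
        det = a * d - b * c
        inv = [[d / det, -b / det], [-c / det, a / det]]    # C^-1
        s = sum(sum(row) for row in inv)                    # 1^T C^-1 1
        w = [sum(row) / s for row in inv]                   # weights, sum to 1
        est = w[0] * x[0] + w[1] * x[1]
        return est, 1.0 / s

    # two uncorrelated measurements of equal weight average out:
    est, var = combine_two([1.0, 3.0], [[1.0, 0.0], [0.0, 1.0]])
    ```

    With unequal variances the weights shift toward the more precise measurement, which is the behaviour a generalized least-squares evaluation relies on.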

  12. Stylize Aesthetic QR Code

    OpenAIRE

    Xu, Mingliang; Su, Hao; Li, Yafei; Li, Xi; Liao, Jing; Niu, Jianwei; Lv, Pei; Zhou, Bing

    2018-01-01

    With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the appearance of QR codes, existing works have developed a series of techniques to make the QR code more visual-pleasant. However, these works still leave much to be desired, such as visual diversity, aesthetic quality, flexibility, universal property, and robustness. To address these issues, in this paper, we pro...

  13. Enhancing QR Code Security

    OpenAIRE

    Zhang, Linfan; Zheng, Shuang

    2015-01-01

    Quick Response codes open the possibility to convey data in a unique way, yet insufficient prevention and protection might lead to QR codes being exploited on behalf of attackers. This thesis starts by presenting a general introduction of the background and stating two problems regarding QR code security, which is followed by comprehensive research on both the QR code itself and related issues. From the research, a solution taking advantage of cloud and cryptography together with an implementation come af...

  14. Weight Distribution for Non-binary Cluster LDPC Code Ensemble

    Science.gov (United States)

    Nozaki, Takayuki; Maehara, Masaki; Kasai, Kenta; Sakaniwa, Kohichi

    In this paper, we derive the average weight distributions for the irregular non-binary cluster low-density parity-check (LDPC) code ensembles. Moreover, we give the exponential growth rate of the average weight distribution in the limit of large code length. We show that there exist $(2,d_c)$-regular non-binary cluster LDPC code ensembles whose normalized typical minimum distances are strictly positive.

  15. Short-Term Memory Coding in Children With Intellectual Disabilities

    OpenAIRE

    Henry, L.; Conners, F.

    2008-01-01

    To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for MA and CA to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and word length effects). Neither the intellectual disabilities nor MA groups showed evidence for memory coding strategies. However, children in these gr...

  16. Variability Bugs:

    DEFF Research Database (Denmark)

    Melo, Jean

    . Although many researchers suggest that preprocessor-based variability amplifies maintenance problems, there is little to no hard evidence on how actually variability affects programs and programmers. Specifically, how does variability affect programmers during maintenance tasks (bug finding in particular......)? How much harder is it to debug a program as variability increases? How do developers debug programs with variability? In what ways does variability affect bugs? In this Ph.D. thesis, I set off to address such issues through different perspectives using empirical research (based on controlled...... experiments) in order to understand quantitatively and qualitatively the impact of variability on programmers at bug finding and on buggy programs. From the program (and bug) perspective, the results show that variability is ubiquitous. There appears to be no specific nature of variability bugs that could...

  17. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...

  18. Gauge color codes

    DEFF Research Database (Denmark)

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...

  19. Refactoring test code

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    textabstractTwo key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  20. Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization

    OpenAIRE

    Sassatelli, Lucile; Declercq, David

    2007-01-01

    In this paper, we propose to study and optimize a very general class of LDPC codes whose variable nodes belong to finite sets with different orders. We named this class of codes Hybrid LDPC codes. Although efficient optimization techniques exist for binary LDPC codes and more recently for non-binary LDPC codes, they both exhibit drawbacks due to different reasons. Our goal is to capitalize on the advantages of both families by building codes with binary (or small finite set order) and non-bin...

  1. Performance Analysis of an Optical CDMA MAC Protocol With Variable-Size Sliding Window

    Science.gov (United States)

    Mohamed, Mohamed Aly A.; Shalaby, Hossam M. H.; Abdel-Moety El-Badawy, El-Sayed

    2006-10-01

    A media access control protocol for optical code-division multiple-access packet networks with variable-length data traffic is proposed. This protocol exhibits a sliding window with variable size. A model for interference-level fluctuation and an accurate analysis of channel usage are presented. Both multiple-access interference (MAI) and the photodetector's shot noise are considered. Both chip-level and correlation receivers are adopted. The system performance is evaluated using the traditional measures of average system throughput and average delay. Finally, in order to enhance the overall performance, error control codes (ECCs) are applied. The results indicate that the performance can be enhanced to reach its peak using an ECC with an optimum number of correctable errors. Furthermore, chip-level receivers are shown to give much higher performance than correlation receivers. Also, it is shown that MAI is the main source of signal degradation.

  2. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  3. Stochastic geometry in PRIZMA code

    International Nuclear Information System (INIS)

    Malyshkin, G. N.; Kashaeva, E. A.; Mukhamadiev, R. F.

    2007-01-01

    The paper describes a method used to simulate radiation transport through random media - randomly placed grains in a matrix material. The method models the medium consequently from one grain crossed by particle trajectory to another. Like in the Limited Chord Length Sampling (LCLS) method, particles in grains are tracked in the actual grain geometry, but unlike LCLS, the medium is modeled using only Matrix Chord Length Sampling (MCLS) from the exponential distribution and it is not necessary to know the grain chord length distribution. This helped us extend the method to media with randomly oriented arbitrarily shaped convex grains. Other extensions include multicomponent media - grains of several sorts, and polydisperse media - grains of different sizes. Sort and size distributions of crossed grains were obtained and an algorithm was developed for sampling grain orientations and positions. Special consideration was given to medium modeling at the boundary of the stochastic region. The method was implemented in the universal 3D Monte Carlo code PRIZMA. The paper provides calculated results for a model problem where we determine volume fractions of modeled components crossed by particle trajectories. It also demonstrates the use of biased sampling techniques implemented in PRIZMA for solving a problem of deep penetration in model random media. Described are calculations for the spectral response of a capacitor dose detector whose anode was modeled with account for its stochastic structure. (authors)
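
    The Matrix Chord Length Sampling step described above draws the matrix path length between grains from an exponential distribution. A minimal sketch of that single step, with a hypothetical mean chord length parameter (PRIZMA's actual sampling, with grain sorts, sizes and orientations, is far more involved):

    ```python
    import math
    import random

    def sample_matrix_chord(mean_chord, rng=random):
        """Draw the distance a particle travels through the matrix before
        hitting the next grain, from an exponential distribution with the
        given mean chord length (an illustrative input parameter)."""
        # inverse-CDF sampling: x = -mean * ln(1 - u), u uniform in [0, 1)
        return -mean_chord * math.log(1.0 - rng.random())
    ```

    This is equivalent to `random.expovariate(1.0 / mean_chord)`; the explicit inverse-CDF form is shown because Monte Carlo codes typically sample from their own uniform stream.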

  4. Length of a Hanging Cable

    Directory of Open Access Journals (Sweden)

    Eric Costello

    2011-01-01

    Full Text Available The shape of a cable hanging under its own weight and uniform horizontal tension between two power poles is a catenary. The catenary is a curve whose equation is defined by a hyperbolic cosine function and a scaling factor. The scaling factor for power cables hanging under their own weight is equal to the horizontal tension on the cable divided by the weight per unit length of the cable. Both of these values are unknown for this problem. Newton's method was used to approximate the scaling factor, and the arc length function to determine the length of the cable. A script was written in the Python programming language in order to quickly perform several iterations of Newton's method and obtain a good approximation of the scaling factor.
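
    The procedure described — Newton's method for the catenary scaling factor, then the arc length formula for the cable length — can be sketched as follows. The span and sag values are hypothetical inputs, not taken from the article:

    ```python
    import math

    def solve_catenary(half_span, sag, tol=1e-12, max_iter=50):
        """Newton's method for the catenary scaling factor a, solving
        f(a) = a*(cosh(L/a) - 1) - s = 0 for half-span L and sag s.
        The parabolic approximation a = L^2/(2s) seeds the iteration."""
        a = half_span ** 2 / (2.0 * sag)
        for _ in range(max_iter):
            u = half_span / a
            f = a * (math.cosh(u) - 1.0) - sag
            fp = math.cosh(u) - 1.0 - u * math.sinh(u)   # df/da
            step = f / fp
            a -= step
            if abs(step) < tol * a:
                break
        return a

    def cable_length(half_span, a):
        """Arc length of y = a*cosh(x/a) over [-L, L] is 2*a*sinh(L/a)."""
        return 2.0 * a * math.sinh(half_span / a)

    # hypothetical geometry: poles 100 units apart, 10 units of sag
    a = solve_catenary(half_span=50.0, sag=10.0)
    length = cable_length(50.0, a)
    ```

    For this geometry the cable comes out only a few percent longer than the straight-line span, as the shallow-sag parabolic approximation predicts.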

  5. Minimal Length, Measurability and Gravity

    Directory of Open Access Journals (Sweden)

    Alexander Shalyt-Margolin

    2016-03-01

    Full Text Available The present work is a continuation of the author's previous papers on the subject. In terms of the measurability (or measurable-quantities) notion introduced in a minimal length theory, consideration is first given to a quantum theory in the momentum representation. The same terms are then used to consider the Markov gravity model, which here illustrates the general approach to studies of gravity in terms of measurable quantities.

  6. New nonbinary quantum codes with larger distance constructed from BCH codes over 𝔽q2

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Fu, Qiang; Ma, Yuena; Guo, Luobin

    2017-03-01

    This paper concentrates on construction of new nonbinary quantum error-correcting codes (QECCs) from three classes of narrow-sense imprimitive BCH codes over finite field 𝔽q2 (q ≥ 3 is an odd prime power). By a careful analysis on properties of cyclotomic cosets in defining set T of these BCH codes, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing BCH codes is determined to be much larger than the result given according to Aly et al. [S. A. Aly, A. Klappenecker and P. K. Sarvepalli, IEEE Trans. Inf. Theory 53, 1183 (2007)] for each different code length. Thus families of new nonbinary QECCs are constructed, and the newly obtained QECCs have larger distance than those in previous literature.
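
    The analysis above rests on cyclotomic cosets in the defining set T, i.e. subsets of residues modulo the code length that are closed under multiplication by q². A minimal sketch of how such cosets are computed (the parameters in the usage line are illustrative, not taken from the paper):

    ```python
    def cyclotomic_coset(x, m, n):
        """Cyclotomic coset of x modulo n under multiplication by m:
        C_x = {x, x*m, x*m^2, ...} (mod n), returned sorted."""
        coset, y = set(), x % n
        while y not in coset:
            coset.add(y)
            y = (y * m) % n
        return sorted(coset)

    def all_cosets(m, n):
        """Partition {0, ..., n-1} into cyclotomic cosets modulo n."""
        seen, cosets = set(), []
        for x in range(n):
            if x not in seen:
                c = cyclotomic_coset(x, m, n)
                seen.update(c)
                cosets.append(c)
        return cosets

    # illustrative parameters: q = 3, multiplier q^2 = 9, length n = q^4 - 1 = 80
    cosets = all_cosets(9, 80)
    ```

    For a narrow-sense BCH code the defining set is a union of consecutive such cosets; checking which unions stay Hermitian dual-containing is the paper's harder step, not shown here.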

  7. The network code

    International Nuclear Information System (INIS)

    1997-01-01

    The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)

  8. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  9. πK-scattering lengths

    International Nuclear Information System (INIS)

    Volkov, M.K.; Osipov, A.A.

    1983-01-01

    The scattering lengths m_π a_0^(1/2) = 0.1, m_π a_0^(3/2) = -0.1, m_π a_0^(-) = 0.07, m_π^3 a_1^(1/2) = 0.018, m_π^3 a_1^(3/2) = 0.002, m_π^3 a_1^(-) = 0.0044, m_π^5 a_2^(1/2) = 2.4×10^-4 and m_π^5 a_2^(3/2) = -1.2×10^-4 are calculated in the framework of the composite meson model, which is based on four-quark interaction. The decay form factors of (ρ, ε, S*) → 2π and (K̃, K*) → Kπ are used. The q²-terms of the quark box diagrams are taken into account. It is shown that the q²-terms of the box diagrams give the main contribution to the s-wave scattering lengths. The diagrams with intermediate vector mesons begin to play an essential role in the calculation of the p- and d-wave scattering lengths

  10. ComboCoding: Combined intra-/inter-flow network coding for TCP over disruptive MANETs

    Directory of Open Access Journals (Sweden)

    Chien-Chia Chen

    2011-07-01

    Full Text Available TCP over wireless networks is challenging due to random losses and ACK interference. Although network coding schemes have been proposed to improve TCP robustness against extreme random losses, a critical problem remains: DATA–ACK interference. To address this issue, we use inter-flow coding between DATA and ACK to reduce the number of transmissions among nodes. In addition, we utilize a "pipeline" random linear coding scheme with adaptive redundancy to overcome high packet loss over unreliable links. The resulting scheme, ComboCoding, combines intra-flow and inter-flow coding to provide robust TCP transmission in disruptive wireless networks. The main contributions of our scheme are twofold: the efficient combination of random linear coding and XOR coding on bi-directional streams (DATA and ACK), and a novel redundancy control scheme that adapts to time-varying and space-varying link loss. The adaptive ComboCoding was tested on a variable-hop string topology with unstable links and on a multipath MANET with dynamic topology. Simulation results show that TCP with ComboCoding delivers higher throughput than other coding options in high-loss and mobile scenarios, while introducing minimal overhead in normal operation.
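
    The inter-flow part of the scheme XORs a DATA packet with an ACK travelling in the opposite direction, so a relay sends one coded packet and each endpoint cancels the packet it already knows. A minimal sketch of that idea only (the packet contents are made up, and the paper's random linear intra-flow coding is omitted):

    ```python
    def xor_bytes(p: bytes, q: bytes) -> bytes:
        """XOR two packets, zero-padding the shorter one to equal length."""
        n = max(len(p), len(q))
        p, q = p.ljust(n, b"\x00"), q.ljust(n, b"\x00")
        return bytes(x ^ y for x, y in zip(p, q))

    # a relay combines one DATA packet and one ACK heading opposite ways
    data, ack = b"DATA:payload-123", b"ACK:42"     # hypothetical packet contents
    coded = xor_bytes(data, ack)                   # one transmission instead of two

    # each endpoint XORs out the packet it already knows:
    recovered_data = xor_bytes(coded, ack)                   # at the DATA receiver
    recovered_ack = xor_bytes(coded, data).rstrip(b"\x00")   # at the ACK receiver
    ```

    Stripping the zero padding stands in for a real header carrying the true packet length; without such a header a payload ending in zero bytes would be truncated.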

  11. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM), combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general...... 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given....... The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...

  12. NAGRADATA. Code key. Geology

    International Nuclear Information System (INIS)

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data are retrieved, the translation of stored coded information into plain language is done automatically by computer. Three keys each list the complete set of currently defined codes for the NAGRADATA system, namely codes with appropriate definitions, arranged: 1. according to subject matter (thematically); 2. with the codes listed alphabetically; and 3. with the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval; economy of storage requirements; and standardisation of terminology. The nature of this thesaurus-like 'key to codes' makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes for NAGRADATA must be considered to represent the current state of progress of a living system, and future editions will be issued in a loose-leaf ring book that can be updated by an organised (updating) service. (author)

  13. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  14. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated on the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, on the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code, the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  15. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software, with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
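
    The workflow described — write a list of inputs to a file, run the external application, read the outputs back — can be sketched as follows. This is not the actual DLL (which is driven by an instructions file); the command and the simple name=value file formats here are hypothetical:

    ```python
    import os
    import subprocess
    import tempfile

    def run_external(inputs, cmd):
        """Sketch of the linking pattern: write inputs to a file, run the
        external application on it, and parse its output file. `cmd` is
        the external program invocation as an argument list."""
        with tempfile.TemporaryDirectory() as work:
            infile = os.path.join(work, "inputs.txt")
            outfile = os.path.join(work, "outputs.txt")
            with open(infile, "w") as f:
                for name, value in inputs.items():
                    f.write(f"{name} = {value}\n")
            # the external code reads infile and writes outfile
            subprocess.run(cmd + [infile, outfile], check=True)
            with open(outfile) as f:
                return [float(line.split("=")[1]) for line in f if "=" in line]
    ```

    In the real system the file formats and the command line are whatever the instructions file specifies; the point of the sketch is the input-file / run / output-file round trip.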

  16. Sperm length, sperm storage and mating system characteristics in bumblebees

    DEFF Research Database (Denmark)

    Baer, Boris; Schmid-Hempel, Paul; Høeg, Jens Thorvald

    2003-01-01

    long-term storage of sperm, using three bumblebee species with different mating systems as models. We show that individual males produce only one size-class of sperm, but that sperm length is highly variable among brothers, among unrelated conspecific males, and among males of different species. Males of Bombus...

  17. An axisymmetric gravitational collapse code

    Energy Technology Data Exchange (ETDEWEB)

    Choptuik, Matthew W [CIAR Cosmology and Gravity Program, Department of Physics and Astronomy, University of British Columbia, Vancouver BC, V6T 1Z1 (Canada); Hirschmann, Eric W [Department of Physics and Astronomy, Brigham Young University, Provo, UT 84604 (United States); Liebling, Steven L [Southampton College, Long Island University, Southampton, NY 11968 (United States); Pretorius, Frans [Theoretical Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States)

    2003-05-07

    We present a new numerical code designed to solve the Einstein field equations for axisymmetric spacetimes. The long-term goal of this project is to construct a code that will be capable of studying many problems of interest in axisymmetry, including gravitational collapse, critical phenomena, investigations of cosmic censorship and head-on black-hole collisions. Our objective here is to detail the (2+1)+1 formalism we use to arrive at the corresponding system of equations and the numerical methods we use to solve them. We are able to obtain stable evolution, despite the singular nature of the coordinate system on the axis, by enforcing appropriate regularity conditions on all variables and by adding numerical dissipation to hyperbolic equations.

  18. An axisymmetric gravitational collapse code

    International Nuclear Information System (INIS)

    Choptuik, Matthew W; Hirschmann, Eric W; Liebling, Steven L; Pretorius, Frans

    2003-01-01

    We present a new numerical code designed to solve the Einstein field equations for axisymmetric spacetimes. The long-term goal of this project is to construct a code that will be capable of studying many problems of interest in axisymmetry, including gravitational collapse, critical phenomena, investigations of cosmic censorship and head-on black-hole collisions. Our objective here is to detail the (2+1)+1 formalism we use to arrive at the corresponding system of equations and the numerical methods we use to solve them. We are able to obtain stable evolution, despite the singular nature of the coordinate system on the axis, by enforcing appropriate regularity conditions on all variables and by adding numerical dissipation to hyperbolic equations

  19. Performance Analysis of New Binary User Codes for DS-CDMA Communication

    Science.gov (United States)

    Usha, Kamle; Jaya Sankar, Kottareddygari

    2016-03-01

    This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over the additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using gray and inverse gray codes. In this paper, an n-bit gray code appended by its n-bit inverse gray code to construct 2n-length binary user codes is discussed. Like Walsh codes, these binary user codes are available in sizes of powers of two; additionally, code sets of length 6 and its even multiples are also available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and gold codes are considered for comparison in this paper, as these are popularly used for synchronous and asynchronous multi-user communications, respectively. In the current work the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and gold codes. The performance of the proposed binary user codes for both synchronous and asynchronous direct-sequence CDMA communication over the AWGN channel is also discussed. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
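
    The construction described above fits in a few lines of code. The sketch below is a minimal illustration; the definition of the inverse gray code here (gray codewords with every other word complemented, so successive words differ in n-1 bits) is an assumption, and the paper's exact construction may differ.

```python
def gray(i: int, n: int) -> list[int]:
    """n-bit Gray code of index i, as a list of bits (MSB first)."""
    g = i ^ (i >> 1)
    return [(g >> (n - 1 - b)) & 1 for b in range(n)]

def inverse_gray(i: int, n: int) -> list[int]:
    # Assumption: inverse gray codewords are gray codewords with every
    # other word complemented, so successive words differ in n-1 bits.
    g = gray(i, n)
    return [1 - b for b in g] if i % 2 else g

def user_code(i: int, n: int) -> list[int]:
    """2n-length binary user code: n-bit gray code followed by its inverse gray code."""
    return gray(i, n) + inverse_gray(i, n)
```

    As with Walsh codes, indexing i over 0..2^n - 1 yields a code set whose size is a power of two.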

  20. Ultrasound strain imaging using Barker code

    Science.gov (United States)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft tissue elasticity in order to help clinicians detect lesions or cancers in tissues. In this paper, the Barker code is applied to strain imaging to improve its quality. The Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched filter output is -22 dB, which is unacceptable for ultrasound strain imaging, because a high sidelobe level will cause high decorrelation noise. Instead of using the conventional matched filter, we use the Wiener filter to decode the Barker-coded echo signal to suppress the range sidelobes. We also compare the performance of the Barker code and the conventional short pulse by simulation. The simulation results demonstrate that the performance of the Wiener filter is much better than that of the matched filter, and that the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse in low-eSNR or great-depth conditions due to the increased eSNR.
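
    The -22 dB figure quoted above is easy to reproduce: the aperiodic autocorrelation of the length-13 Barker code has a peak of 13 and sidelobes of magnitude at most 1. A short NumPy check, modeling the matched filter as plain correlation of the code with itself:

```python
import numpy as np

# Length-13 Barker sequence.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Matched-filter output for a noise-free echo = aperiodic autocorrelation.
out = np.correlate(barker13, barker13, mode="full")
peak = out.max()                                    # mainlobe height: 13
sidelobe = np.abs(np.delete(out, len(out) // 2)).max()  # worst sidelobe: 1
level_db = 20 * np.log10(sidelobe / peak)           # about -22.3 dB
```

    It is this fixed -22 dB floor of the matched filter that motivates replacing it with a Wiener filter in the paper.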

  1. Constructing snake-in-the-box codes and families of such codes covering the hypercube

    NARCIS (Netherlands)

    Haryanto, L.

    2007-01-01

    A snake-in-the-box code (or snake) is a list of binary words of length n such that each word differs from its successor in the list in precisely one bit position. Moreover, any two words in the list differ in at least two positions, unless they are neighbours in the list. The list is considered to
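
    The two defining conditions above can be verified mechanically. A minimal checker (words represented as integers, Hamming distance via XOR):

```python
def is_snake(words: list[int]) -> bool:
    """Check the snake-in-the-box property for a list of binary words."""
    dist = lambda a, b: bin(a ^ b).count("1")  # Hamming distance
    # Successive words must differ in precisely one bit position.
    if any(dist(a, b) != 1 for a, b in zip(words, words[1:])):
        return False
    # Non-neighbouring words must differ in at least two positions.
    for i in range(len(words)):
        for j in range(i + 2, len(words)):
            if dist(words[i], words[j]) < 2:
                return False
    return True
```

    For example, [0, 1, 3, 7, 6] is a snake in the 3-cube, while [0, 1, 3, 2] is not (the first and last words differ in only one bit).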

  2. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.

  3. Statistical identification of effective input variables

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1982-09-01

    A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in the order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. The efficiency has been demonstrated by several examples and applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications
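
    Reduced to its simplest single-stage form, the screening idea is to rank inputs by the strength of their correlation with the output. The sketch below is an illustrative stand-in for SCREEN's stagewise correlation analysis, not its actual algorithm; the function name and data are hypothetical.

```python
import numpy as np

def rank_inputs(X: np.ndarray, y: np.ndarray) -> list[int]:
    """Rank input columns of X by |correlation| with the output y,
    most influential first (single-stage screening sketch)."""
    r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return sorted(range(X.shape[1]), key=lambda j: -r[j])

# Synthetic demonstration: input 2 dominates, input 1 is irrelevant.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 5.0 * X[:, 2] + 2.0 * X[:, 0] + rng.normal(size=200)
order = rank_inputs(X, y)  # most influential input first
```

    A real screening run would then drop the lowest-ranked inputs and refine the ranking with regression on the survivors.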

  4. New Channel Coding Methods for Satellite Communication

    Directory of Open Access Journals (Sweden)

    J. Sebesta

    2010-04-01

    Full Text Available This paper deals with new progressive channel coding methods for short message transmission via a satellite transponder using a predetermined frame length. The key benefits of this contribution are the modification and implementation of a new turbo code and the utilization of its unique features, with application of methods for bit error rate estimation and an algorithm for output message reconstruction. The mentioned methods allow error-free communication at a very low Eb/N0 ratio; they have been adopted for satellite communication, but they can also be applied to other systems working with very low Eb/N0 ratios.

  5. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric quantum codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  6. Design and performance analysis for several new classes of codes for optical synchronous CDMA and for arbitrary-medium time-hopping synchronous CDMA communication systems

    Science.gov (United States)

    Kostic, Zoran; Titlebaum, Edward L.

    1994-08-01

    New families of spread-spectrum codes are constructed, that are applicable to optical synchronous code-division multiple-access (CDMA) communications as well as to arbitrary-medium time-hopping synchronous CDMA communications. Proposed constructions are based on the mappings from integer sequences into binary sequences. We use the concept of number theoretic quadratic congruences and a subset of Reed-Solomon codes similar to the one utilized in the Welch-Costas frequency-hop (FH) patterns. The properties of the codes are as good as or better than the properties of existing codes for synchronous CDMA communications: Both the number of code-sequences within a single code family and the number of code families with good properties are significantly increased when compared to the known code designs. Possible applications are presented. To evaluate the performance of the proposed codes, a new class of hit arrays called cyclical hit arrays is recalled, which give insight into the previously unknown properties of the few classes of number theoretic FH patterns. Cyclical hit arrays and the proposed mappings are used to determine the exact probability distribution functions of random variables that represent interference between users of a time-hopping or optical CDMA system. Expressions for the bit error probability in multi-user CDMA systems are derived as a function of the number of simultaneous CDMA system users, the length of signature sequences and the threshold of a matched filter detector. The performance results are compared with the results for some previously known codes.
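
    To make the number-theoretic ingredient above concrete, the sketch below generates a quadratic congruence placement sequence y_k = a·k(k+1)/2 mod p and maps it to a binary signature of length p² with one pulse per time slot. This is the textbook form of such sequences, offered as an assumption; it is not necessarily the exact mapping the authors construct.

```python
def qc_sequence(a: int, p: int) -> list[int]:
    """Quadratic congruence placement sequence y_k = a*k*(k+1)/2 mod p,
    for k = 0..p-1 (p prime, 1 <= a < p)."""
    return [a * k * (k + 1) // 2 % p for k in range(p)]

def to_binary(seq: list[int], p: int) -> list[int]:
    """Map the placement sequence to a length p*p binary signature:
    in time slot k, chip position y_k carries the pulse."""
    out = [0] * (p * p)
    for k, y in enumerate(seq):
        out[k * p + y] = 1
    return out
```

    Each signature has exactly p ones, one per slot, which is the structure exploited when counting hits between users.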

  7. On decoding of multi-level MPSK modulation codes

    Science.gov (United States)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metrics and path metrics, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically, at the cost of being suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  8. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can be easily utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  9. On the equivalence of cyclic and quasi-cyclic codes over finite fields

    Directory of Open Access Journals (Sweden)

    Kenza Guenda

    2017-07-01

    Full Text Available This paper studies the equivalence problem for cyclic codes of length $p^r$ and quasi-cyclic codes of length $p^rl$. In particular, we generalize the results of Huffman, Job, and Pless (J. Combin. Theory. A, 62, 183--215, 1993, who considered the special case $p^2$. This is achieved by explicitly giving the permutations by which two cyclic codes of prime power length are equivalent. This allows us to obtain an algorithm which solves the problem of equivalency for cyclic codes of length $p^r$ in polynomial time. Further, we characterize the set by which two quasi-cyclic codes of length $p^rl$ can be equivalent, and prove that the affine group is one of its subsets.

  10. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination with code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation from the computer's materiality. Cramer is thus the voice of a new 'code avant-garde'. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation

  11. Majorana fermion codes

    International Nuclear Information System (INIS)

    Bravyi, Sergey; Terhal, Barbara M; Leemhuis, Bernhard

    2010-01-01

    We initiate the study of Majorana fermion codes (MFCs). These codes can be viewed as extensions of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of MFCs is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed that combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code.

  12. Theory of epigenetic coding.

    Science.gov (United States)

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  13. DISP1 code

    International Nuclear Information System (INIS)

    Vokac, P.

    1999-12-01

    DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)

  14. Short-term memory coding in children with intellectual disabilities.

    Science.gov (United States)

    Henry, Lucy

    2008-05-01

    To examine visual and verbal coding strategies, I asked children with intellectual disabilities and peers matched for MA and CA to perform picture memory span tasks with phonologically similar, visually similar, long, or nonsimilar named items. The CA group showed effects consistent with advanced verbal memory coding (phonological similarity and word length effects). Neither the intellectual disabilities nor MA groups showed evidence for memory coding strategies. However, children in these groups with MAs above 6 years showed significant visual similarity and word length effects, broadly consistent with an intermediate stage of dual visual and verbal coding. These results suggest that developmental progressions in memory coding strategies are independent of intellectual disabilities status and consistent with MA.

  15. FEMAXI-III, a computer code for fuel rod performance analysis

    International Nuclear Information System (INIS)

    Ito, K.; Iwano, Y.; Ichikawa, M.; Okubo, T.

    1983-01-01

    This paper presents the method of fuel rod thermal-mechanical performance analysis used in the FEMAXI-III code. The code incorporates models describing thermal-mechanical processes such as pellet-cladding thermal expansion, pellet irradiation swelling, densification, relocation and fission gas release as they affect pellet-cladding gap thermal conductance. The code performs the thermal behavior analysis of a full-length fuel rod within the framework of one-dimensional multi-zone modeling. The mechanical effects, including ridge deformation, are rigorously analyzed by applying the axisymmetric finite element method. The finite element geometrical model is confined to a half-pellet-height region with the assumption that pellet-pellet interaction is symmetrical. The 8-node quadratic isoparametric ring elements are adopted for obtaining accurate finite element solutions. The Newton-Raphson iteration with an implicit algorithm is applied to perform the analysis of non-linear material behaviors accurately and stably. The pellet-cladding interaction mechanism is exactly treated using the nodal continuity conditions. The code is applicable to the thermal-mechanical analysis of water reactor fuel rods experiencing variable power histories. (orig.)
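
    The Newton-Raphson iteration named above is the standard workhorse for such nonlinear analyses. A generic one-dimensional sketch of the iteration (solving f(x) = 0 from a starting guess), offered only to illustrate the method, not FEMAXI-III's actual solver:

```python
def newton(f, df, x0: float, tol: float = 1e-10, max_iter: int = 50) -> float:
    """Newton-Raphson iteration: solve f(x) = 0 starting from x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)  # linearize f around x and solve
        x -= step
        if abs(step) < tol:  # converged when the update is negligible
            return x
    raise RuntimeError("Newton iteration did not converge")
```

    In a finite element code, x becomes the vector of nodal unknowns and df the tangent stiffness matrix, but the structure of the iteration is the same.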

  16. Phonological coding during reading.

    Science.gov (United States)

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  17. The aeroelastic code FLEXLAST

    Energy Technology Data Exchange (ETDEWEB)

    Visser, B. [Stork Product Eng., Amsterdam (Netherlands)

    1996-09-01

    To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)

  18. Distinct timescales of population coding across cortex.

    Science.gov (United States)

    Runyan, Caroline A; Piasini, Eugenio; Panzeri, Stefano; Harvey, Christopher D

    2017-08-03

    and that coupling is a variable property of cortical populations that affects the timescale of information coding and the accuracy of behaviour.

  19. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  20. QR codes for dummies

    CERN Document Server

    Waters, Joe

    2012-01-01

    Find out how to effectively create, use, and track QR codes QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown

  1. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged

  2. Fitting of two and three variant polynomials from experimental data through the least squares method. (Using of the codes AJUS-2D, AJUS-3D and LEGENDRE-2D); Ajuste de polinomios en dos y tres variables independientes por el metodo de minimos cuadrados. (Desarrollo de los codigos AJUS-2D, AJUS-3D y LEGENDRE-2D)

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Miro, J J; Sanz Martin, J C

    1994-07-01

    Obtaining polynomial fittings from observational data in two and three dimensions is an interesting and practical task. Such an arduous problem suggests the development of an automatic code. The main novelty we provide lies in the generalization of the classical least squares method in three FORTRAN 77 programs usable in any sampling problem. Furthermore, we introduce the orthogonal 2D-Legendre function in the fitting process. These FORTRAN 77 programs are equipped with options to calculate the standard indicators of approximation quality, generalized to two and three dimensions (nonlinear correlation factor, confidence intervals, quadratic mean error, and so on). The aim of this paper is to rectify the absence of fitting algorithms for more than one independent variable in mathematical libraries. (Author) 10 refs.
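
    The core of such a fit is linear: the polynomial coefficients enter linearly, so the problem reduces to one linear least-squares system. A minimal NumPy sketch of the two-variable case (illustrating the idea only, not the AJUS-2D code itself; `fit_poly2d` is a hypothetical name):

```python
import numpy as np

def fit_poly2d(x, y, z, deg):
    """Least-squares fit of z ~ sum of c[i,j] * x**i * y**j for i + j <= deg."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])  # design matrix
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return dict(zip(terms, coef))

# Recover a known polynomial z = 1 + 2x + 3xy from sampled points.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 50)
y = rng.uniform(-1, 1, 50)
z = 1 + 2 * x + 3 * x * y
c = fit_poly2d(x, y, z, deg=2)
```

    The three-variable case only changes the enumeration of terms; orthogonal bases such as 2D-Legendre polynomials improve the conditioning of the design matrix without changing this structure.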

  3. Pulsating variables

    International Nuclear Information System (INIS)

    1989-01-01

    The study of stellar pulsations is a major route to the understanding of stellar structure and evolution. At the South African Astronomical Observatory (SAAO) the following stellar pulsation studies were undertaken: rapidly oscillating Ap stars; solar-like oscillations in stars; δ Scuti-type variability in a classical Am star; Beta Cephei variables; a pulsating white dwarf and its companion; RR Lyrae variables and galactic Cepheids. 4 figs

  4. NR-code: Nonlinear reconstruction code

    Science.gov (United States)

    Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming

    2018-04-01

    NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

  5. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  6. Paracantor: A two group, two region reactor code

    Energy Technology Data Exchange (ETDEWEB)

    Stone, Stuart

    1956-07-01

    Paracantor I is a two energy group, two region, time independent reactor code which obtains a closed solution for a critical reactor assembly. The code deals with cylindrical reactors of finite length and with a radial reflector of finite thickness. It is programmed for the I.B.M. Magnetic Drum Data-Processing Machine, Type 650. The limited memory space available does not permit a flux solution to be included in the basic Paracantor code. A supplementary code, Paracantor II, has been programmed which computes fluxes, including adjoint fluxes, from the output of Paracantor I.

  7. Low Complexity List Decoding for Polar Codes with Multiple CRC Codes

    Directory of Open Access Journals (Sweden)

    Jong-Hwan Kim

    2017-04-01

    Full Text Available Polar codes are the first family of error correcting codes that provably achieve the capacity of symmetric binary-input discrete memoryless channels with low complexity. Since the development of polar codes, there have been many studies to improve their finite-length performance. As a result, polar codes are now adopted as a channel code for the control channel of 5G new radio of the 3rd generation partnership project. However, the decoder implementation is one of the big practical problems, and low complexity decoding has been studied. This paper addresses a low complexity successive cancellation list decoding for polar codes utilizing multiple cyclic redundancy check (CRC) codes. While some research uses multiple CRC codes to reduce memory and time complexity, we consider the operational complexity of decoding, and reduce it by optimizing CRC positions in combination with a modified decoding operation. As a result, the proposed scheme obtains not only complexity reduction from early stopping of decoding, but also additional reduction from the reduced number of decoding paths.
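
    The role of the CRC in list decoding is simply to screen candidate paths: the decoder keeps a list of likely codewords and outputs the first one whose CRC checks. A toy sketch of that selection step (the function names are hypothetical, and the 3-bit polynomial below is illustrative, not the CRC standardized for 5G):

```python
def crc_remainder(bits: list[int], poly: list[int]) -> list[int]:
    """Remainder of binary polynomial division of bits (MSB first) by poly."""
    reg = list(bits) + [0] * (len(poly) - 1)
    for i in range(len(bits)):
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return reg[len(bits):]

def crc_encode(msg: list[int], poly: list[int]) -> list[int]:
    """Append CRC check bits to the message."""
    return msg + crc_remainder(msg, poly)

def select_path(candidates: list[list[int]], poly: list[int]) -> list[int]:
    """Pick the first list-decoding candidate whose CRC checks out."""
    zero = [0] * (len(poly) - 1)
    for c in candidates:
        if crc_remainder(c, poly) == zero:
            return c
    return candidates[0]  # fall back to the most likely path

poly = [1, 0, 1, 1]  # x^3 + x + 1, a toy CRC polynomial
codeword = crc_encode([1, 0, 1, 1, 0], poly)
```

    Splitting one long CRC into several shorter ones, as in the paper, lets this check run at intermediate stages, so hopeless paths can be dropped early.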

  8. The PLTEMP V2.1 code

    International Nuclear Information System (INIS)

    Olson, A.P.

    2003-01-01

    Recent improvements to the computer code PLTEMP/ANL V2.1 are described. A new iterative, error-minimization solution technique is used to obtain the thermal distribution both within each fuel plate, and along the axial length of each coolant channel. A new, radial geometry solution is available for tube-type fuel assemblies. Software comparisons of these and other new models are described. Applications to Russian-designed IRT-type research reactors are described. (author)

  9. Quantum quasi-cyclic low-density parity-check error-correcting codes

    International Nuclear Information System (INIS)

    Yuan, Li; Gui-Hua, Zeng; Lee, Moon Ho

    2009-01-01

    In this paper, we propose the approach of employing circulant permutation matrices to construct quantum quasi-cyclic (QC) low-density parity-check (LDPC) codes. Using the proposed approach one may construct some new quantum codes with various lengths and rates, with no cycles of length 4 in their Tanner graphs. In addition, these constructed codes have the advantages of simple implementation and low-complexity encoding. Finally, the decoding approach for the proposed quantum QC LDPC codes is investigated. (general)
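
    The building block of such constructions and the no-4-cycle property are both easy to demonstrate. In the sketch below, the parity-check matrix is assembled from circulant permutation matrices; the shift values are hypothetical, chosen only so the cycle check passes, and are not taken from the paper.

```python
import numpy as np

def circulant(shift: int, L: int) -> np.ndarray:
    """L x L circulant permutation matrix: the identity cyclically shifted."""
    return np.roll(np.eye(L, dtype=int), shift, axis=1)

# Hypothetical 2 x 3 array of shift exponents and circulant size L.
shifts = [[0, 1, 2],
          [0, 2, 4]]
L = 5
H = np.block([[circulant(s, L) for s in row] for row in shifts])

def has_4_cycle(H: np.ndarray) -> bool:
    """A Tanner graph has a 4-cycle iff two rows of H share >= 2 ones."""
    m = H.shape[0]
    return any((H[i] & H[j]).sum() >= 2
               for i in range(m) for j in range(i + 1, m))
```

    Because each block is a permutation matrix, every row of H has exactly one 1 per block column, and a suitable choice of shifts guarantees that no two rows overlap in two positions.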

  10. [Myopia: frequency of lattice degeneration and axial length].

    Science.gov (United States)

    Martín Sánchez, M D; Roldán Pallarés, M

    2001-05-01

    To evaluate the relationship between lattice retinal degeneration and axial length of the eye in different grades of myopia. A sample of 200 eyes from 124 myopic patients was collected by chance. The average age was 34.8 years (20-50 years) and the myopia was between 0.5 and 20 diopters (D). The eyes were grouped according to the degree of the refractive defect; the mean axial length of each group (A-scan) and the frequency of lattice retinal degeneration were recorded, and the relationship between these variables was studied. The possible influence of age on our results was also considered. For the statistical analysis, the SAS 6.07 program was used, with the variance analysis for quantitative variables and the chi(2) test for qualitative variables at a 5% significance level. A multivariable linear regression model was also adjusted. The highest frequency of lattice retinal degeneration occurred in those myopia patients having more than 15 D, and also in the group of myopia patients between 3 and 6 D, but this did not show statistical significance when compared with the other myopic groups. If the axial length is assessed, a greater frequency of lattice retinal degeneration is also found when the axial length is 25-27 mm and 29-30 mm, which correspond, respectively, to myopias between 3-10 D and more than 15 D. When the multivariable linear regression model was adjusted, axial length was associated with the existence of lattice retinal degeneration (beta 0.41 mm; p=0.08) after adjusting for the number of diopters (beta 0.38 mm). A greater frequency of lattice retinal degeneration was found for myopias with an axial eye length between 29-30 mm (more than 15 D) and 25-27 mm (between 3-10 D).

  11. Investigation of Navier-Stokes Code Verification and Design Optimization

    Science.gov (United States)

    Vaidyanathan, Rajkumar

    2004-01-01

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. In this dissertation, the finite volume (FV) formulation is focused on. The interplay between these concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-ε turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design, whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi-objective optimization

  12. Optimized Min-Sum Decoding Algorithm for Low Density Parity Check Codes

    OpenAIRE

    Mohammad Rakibul Islam; Dewan Siam Shafiullah; Muhammad Mostafa Amir Faisal; Imran Rahman

    2011-01-01

Low Density Parity Check (LDPC) codes approach Shannon-limit performance for the binary field and long code lengths. However, the performance of binary LDPC codes degrades when the codeword length is small. An optimized min-sum algorithm for LDPC codes is proposed in this paper. In this algorithm, unlike other decoding methods, an optimization factor is introduced at both the check nodes and the bit nodes of the min-sum algorithm. The optimization factor is obtained before the decoding program, and the sam...
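The scaled check-node update at the heart of such optimized min-sum decoders can be sketched as follows. This is an illustrative normalized min-sum update with an assumed scaling factor `alpha`, not the specific optimization factor derived in the paper:

```python
def check_node_update(llrs, alpha=0.8):
    """Scaled min-sum check-node update: the message on each edge is
    alpha * (product of signs) * (minimum magnitude) over all *other*
    incoming log-likelihood ratios."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1.0
        for v in others:
            sign *= 1.0 if v >= 0 else -1.0
        out.append(alpha * sign * min(abs(v) for v in others))
    return out

# three incoming LLRs on a toy check node
msgs = check_node_update([2.0, -1.5, 0.5], alpha=0.8)
```

The factor `alpha < 1` compensates for the min-sum approximation overestimating message magnitudes relative to full sum-product decoding.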

  13. Balanced and sparse Tamo-Barg codes

    KAUST Repository

    Halbawi, Wael; Duursma, Iwan; Dau, Hoang; Hassibi, Babak

    2017-01-01

    We construct balanced and sparse generator matrices for Tamo and Barg's Locally Recoverable Codes (LRCs). More specifically, for a cyclic Tamo-Barg code of length n, dimension k and locality r, we show how to deterministically construct a generator matrix where the number of nonzeros in any two columns differs by at most one, and where the weight of every row is d + r - 1, where d is the minimum distance of the code. Since LRCs are designed mainly for distributed storage systems, the results presented in this work provide a computationally balanced and efficient encoding scheme for these codes. The balanced property ensures that the computational effort exerted by any storage node is essentially the same, whilst the sparse property ensures that this effort is minimal. The work presented in this paper extends a similar result previously established for Reed-Solomon (RS) codes, where it is now known that any cyclic RS code possesses a generator matrix that is balanced as described, but is sparsest, meaning that each row has d nonzeros.
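The two properties established in the paper can be checked mechanically. The sketch below verifies the balanced condition (any two column weights differ by at most one) and reports row weights on a small hypothetical binary generator matrix; it does not construct an actual Tamo-Barg code:

```python
def is_balanced(G):
    # balanced: the number of nonzeros in any two columns differs by at most one
    col_w = [sum(row[j] for row in G) for j in range(len(G[0]))]
    return max(col_w) - min(col_w) <= 1

def row_weights(G):
    # sparse, per the paper: every row weight should equal d + r - 1
    return [sum(row) for row in G]

# hypothetical 2x4 binary generator matrix, constant row weight 3
G = [[1, 1, 1, 0],
     [0, 1, 1, 1]]
```

For a matrix produced by the paper's construction, `is_balanced` would return True and `row_weights` would be constant at d + r - 1.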

  14. The amendment of the Labour Code

    Directory of Open Access Journals (Sweden)

    Jana Mervartová

    2012-01-01

Full Text Available The amendment of the Labour Code, No. 365/2011 Coll., effective as from 1st January 2012, brings some fundamental changes to labour law. The amendment regulates the relation between the Labour Code and the Civil Code, and it also newly formulates the principles of labour law relations. The basic period for fixed-term contracts of employment is extended, and the frequency of their conclusion is limited. The length of the trial period and the amount of redundancy payment are graduated. An earlier legislative regulation under which an employee may be temporarily assigned to work for a different employer has been restored. The number of hours permitted under an agreement to perform work is increased. The monetary compensation under a competitive clause is reduced. Other changes are made in the area of collective labour law. The authoress of the article draws attention to the most important changes. She compares the new provisions of the Labour Code with the former legal regulation and evaluates their advantages and disadvantages. The main objective of the changes is to make labour law relations more flexible, which should motivate employers to create new job openings. The amended provisions are aimed at reducing employers' expenses as part of the reform of public finances. Further changes to the Labour Code are expected in connection with the forthcoming new Civil Code.

  15. Balanced and sparse Tamo-Barg codes

    KAUST Repository

    Halbawi, Wael

    2017-08-29

We construct balanced and sparse generator matrices for Tamo and Barg's Locally Recoverable Codes (LRCs). More specifically, for a cyclic Tamo-Barg code of length n, dimension k and locality r, we show how to deterministically construct a generator matrix where the number of nonzeros in any two columns differs by at most one, and where the weight of every row is d + r - 1, where d is the minimum distance of the code. Since LRCs are designed mainly for distributed storage systems, the results presented in this work provide a computationally balanced and efficient encoding scheme for these codes. The balanced property ensures that the computational effort exerted by any storage node is essentially the same, whilst the sparse property ensures that this effort is minimal. The work presented in this paper extends a similar result previously established for Reed-Solomon (RS) codes, where it is now known that any cyclic RS code possesses a generator matrix that is balanced as described, but is sparsest, meaning that each row has d nonzeros.

  16. Cognitive Variability

    Science.gov (United States)

    Siegler, Robert S.

    2007-01-01

    Children's thinking is highly variable at every level of analysis, from neural and associative levels to the level of strategies, theories, and other aspects of high-level cognition. This variability exists within people as well as between them; individual children often rely on different strategies or representations on closely related problems…

  17. Synthesizing Certified Code

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  18. Code of Ethics

    Science.gov (United States)

    Division for Early Childhood, Council for Exceptional Children, 2009

    2009-01-01

    The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…

  19. Interleaved Product LDPC Codes

    OpenAIRE

    Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco

    2011-01-01

Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low-weight codewords, which gives a further advantage in the error floor region.
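A row-column block interleaver of the kind commonly paired with product codes can be sketched as follows; the dimensions here are illustrative, not taken from the paper:

```python
def interleave(data, rows, cols):
    """Write the symbols row by row into a rows x cols block,
    then read them out column by column."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    # the inverse permutation: swap the roles of rows and columns
    return interleave(data, cols, rows)

scrambled = interleave(list(range(6)), 2, 3)   # [0, 3, 1, 4, 2, 5]
```

Spreading adjacent symbols apart in this way breaks up error bursts so that each component decoder sees errors that look independent.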

  20. Insurance billing and coding.

    Science.gov (United States)

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  1. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  2. Scrum Code Camps

    DEFF Research Database (Denmark)

    Pries-Heje, Lene; Pries-Heje, Jan; Dalgaard, Bente

    2013-01-01

    is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...

  3. RFQ simulation code

    International Nuclear Information System (INIS)

    Lysenko, W.P.

    1984-04-01

    We have developed the RFQLIB simulation system to provide a means to systematically generate the new versions of radio-frequency quadrupole (RFQ) linac simulation codes that are required by the constantly changing needs of a research environment. This integrated system simplifies keeping track of the various versions of the simulation code and makes it practical to maintain complete and up-to-date documentation. In this scheme, there is a certain standard version of the simulation code that forms a library upon which new versions are built. To generate a new version of the simulation code, the routines to be modified or added are appended to a standard command file, which contains the commands to compile the new routines and link them to the routines in the library. The library itself is rarely changed. Whenever the library is modified, however, this modification is seen by all versions of the simulation code, which actually exist as different versions of the command file. All code is written according to the rules of structured programming. Modularity is enforced by not using COMMON statements, simplifying the relation of the data flow to a hierarchy diagram. Simulation results are similar to those of the PARMTEQ code, as expected, because of the similar physical model. Different capabilities, such as those for generating beams matched in detail to the structure, are available in the new code for help in testing new ideas in designing RFQ linacs

  4. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  5. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2013-03-26

    ... Energy Conservation Code. International Existing Building Code. International Fire Code. International... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...

  6. Rapid installation of numerical models in multiple parent codes

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-10-01

A set of "model interface guidelines", called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.
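The division of labour MIG describes, where the model declares its input and storage requirements in a standardized way and the parent code owns database management, might be sketched like this. The class names and the toy one-dimensional elastic update are hypothetical, purely to illustrate the interface idea:

```python
class ElasticModel:
    """A hypothetical MIG-style model package: it declares what it
    needs, and never manages storage itself."""
    # standardized declaration of user inputs and per-cell state
    inputs = {"youngs_modulus": 1.0}
    state_vars = ["stress"]

    def update(self, inputs, state, strain_increment):
        # toy 1-D elastic stress update (illustrative physics only)
        state["stress"] += inputs["youngs_modulus"] * strain_increment
        return state

class ParentCode:
    """Any parent code can host the model without editing its
    subroutines: it allocates and saves inputs and field variables
    on the model's behalf."""
    def __init__(self, model):
        self.model = model
        self.inputs = dict(model.inputs)
        self.state = {v: 0.0 for v in model.state_vars}

    def step(self, strain_increment):
        self.state = self.model.update(self.inputs, self.state, strain_increment)

host = ParentCode(ElasticModel())
host.step(0.5)
```

The same `ElasticModel` could be dropped into a different `ParentCode` unchanged, which is the portability property the guidelines aim at.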

  7. On the progress towards probabilistic basis for deterministic codes

    International Nuclear Information System (INIS)

    Ellyin, F.

    1975-01-01

Fundamental arguments for a probabilistic basis of codes are presented. A class of code formats is outlined in which explicit statistical measures of uncertainty of design variables are incorporated. The format looks very much like present codes (deterministic) except for having a probabilistic background. An example is provided whereby the design factors are plotted against the safety index, the probability of failure, and the risk of mortality. The safety level of the present codes is also indicated. A decision regarding the new probabilistically based code parameters thus could be made with full knowledge of the implied consequences.
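Under the usual first-order reliability assumption (not spelled out in the record itself), the safety index and the probability of failure mentioned above are linked by Pf = Φ(−β), with Φ the standard normal CDF. A minimal sketch:

```python
import math

def failure_probability(beta):
    """First-order relation between the safety index beta and the
    probability of failure: Pf = Phi(-beta), using the error function
    to evaluate the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

pf = failure_probability(3.0)   # roughly 1.3e-3 for beta = 3
```

Plotting design factors against beta and Pf, as the paper describes, amounts to tabulating this relation over the design variables' uncertainty measures.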

  8. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to the incorrect estimation of the consequences of accidents and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. ''agreement is within 10%''. A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
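One simple quantitative measure of the kind the record argues for is an RMS relative error between prediction and experiment, which lets competing codes be ranked in order of merit. A sketch with made-up numbers (this is not the specific method of the paper):

```python
def rms_relative_error(pred, meas):
    """Root-mean-square relative deviation of predictions from measurements."""
    return (sum(((p - m) / m) ** 2 for p, m in zip(pred, meas)) / len(meas)) ** 0.5

measured = [1.0, 2.0, 4.0]          # hypothetical experimental data
codes = {
    "code_A": [1.1, 2.1, 3.9],      # hypothetical code predictions
    "code_B": [1.5, 2.5, 5.0],
}
# rank codes by merit: smallest RMS relative error first
ranked = sorted(codes, key=lambda name: rms_relative_error(codes[name], measured))
```

Unlike "agreement is within 10%", the metric gives a single number per code, so the ranking is unambiguous.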

  9. Fracture flow code

    International Nuclear Information System (INIS)

    Dershowitz, W; Herbert, A.; Long, J.

    1989-03-01

The hydrology of the SCV site will be modelled utilizing discrete fracture flow models. These models are complex, and cannot be fully certified by comparison to analytical solutions. The best approach for verification of these codes is therefore cross-verification between different codes. This is complicated by the variation in assumptions and solution techniques utilized in different codes. Cross-verification procedures are defined which allow comparison of the codes developed by Harwell Laboratory, Lawrence Berkeley Laboratory, and Golder Associates Inc. Six cross-verification datasets are defined for deterministic and stochastic verification of geometric and flow features of the codes. Additional datasets for verification of transport features will be documented in a future report. (13 figs., 7 tabs., 10 refs.) (authors)

  10. New features in the design code TLIE

    International Nuclear Information System (INIS)

    van Zeijts, J.

    1993-01-01

We present features recently installed in the arbitrary-order accelerator design code TLIE. The code uses the MAD input language, and implements programmable extensions modeled after the C language that make it a powerful tool in a wide range of applications: from basic beamline design to high-precision, high-order design and even control room applications. The basic quantities important in accelerator design are easily accessible from inside the control language. Entities like parameters in elements (strength, current), transfer maps (either in Taylor series or in Lie algebraic form), lines, and beams (either as sets of particles or as distributions) are among the types of variables available. These variables can be set, used as arguments in subroutines, or just typed out. The code is easily extensible with new datatypes

  11. Midupper arm circumference and weight-for-length z scores have different associations with body composition

    DEFF Research Database (Denmark)

    Grijalva-Eternod, Carlos S; Wells, Jonathan Ck; Girma, Tsinuel

    2015-01-01

understood. OBJECTIVE: We investigated the association between these 2 anthropometric indexes and body composition to help understand why they identify different children as wasted. DESIGN: We analyzed weight, length, MUAC, fat-mass (FM), and fat-free mass (FFM) data from 2470 measurements from 595 healthy Ethiopian infants obtained at birth and at 1.5, 2.5, 3.5, 4.5, and 6 mo of age. We derived WLZs by using 2006 WHO growth standards. We derived length-adjusted FM and FFM values as unexplained residuals after regressing each FM and FFM against length. We used a correlation analysis to assess associations between length, FFM, and FM (adjusted and nonadjusted for length) and the MUAC and WLZ and a multivariable regression analysis to assess the independent variability of length and length-adjusted FM and FFM with either the MUAC or the WLZ as the outcome. RESULTS: At all ages, length showed consistently...

  12. Huffman coding in advanced audio coding standard

    Science.gov (United States)

    Brzuchalski, Grzegorz

    2012-05-01

This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand on hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible in this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
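The noiseless coding stage itself follows the classic Huffman construction: repeatedly merge the two least frequent subtrees, prefixing their codewords with 0 and 1. A minimal software sketch (the hardware architectures in the article are of course far more involved, and AAC uses predefined codebooks rather than per-stream trees):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix-free Huffman code table from symbol frequencies."""
    freq = Counter(symbols)
    # heap entries: [frequency, unique tiebreak, partial code table]
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        # merge the two least frequent subtrees, extending their codewords
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, [f1 + f2, tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

table = huffman_code("aaaabbc")
stream = "".join(table[s] for s in "aaaabbc")
```

Frequent symbols get short codewords, which is what makes the output binary stream short.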

  13. Application of Displacement Height and Surface Roughness Length to Determination Boundary Layer Development Length over Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Xiangju Cheng

    2014-12-01

Full Text Available One of the most uncertain parameters in stepped spillway design is the length (from the crest) of boundary layer development. The normal velocity profiles responding to the steps as bed roughness are investigated in the developing non-aerated flow region. A detailed analysis of the logarithmic vertical velocity profiles on stepped spillways is conducted through experimental data to verify the computational code, and through numerical experiments to expand the data available. To determine the development length, the hydraulic roughness and displacement thickness, along with the shear velocity, are needed. This includes determining the displacement height d and surface roughness length z0 and the relationship of d and z0 to the step geometry. The results show that the hydraulic roughness height ks is the primary factor on which d and z0 depend. For different step heights, step widths, discharges and intake Froude numbers, the relations d/ks = 0.22–0.27, z0/ks = 0.06–0.1 and d/z0 = 2.2–4 give a good estimate. Using the computational code and numerical experiments, air inception will occur over stepped spillway flow as long as the Bauer-defined boundary layer thickness is between 0.72 and 0.79.

  14. Methods and computer codes for probabilistic sensitivity and uncertainty analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1985-01-01

This paper describes the methods and applications experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of the most important input variables of a code that has many (tens or hundreds) input variables with uncertainties, and to do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other, e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has first been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables
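Screening in the spirit of SCREEN can be illustrated with a small designed experiment whose main effects rank the input variables by importance, without judgment or exhaustive one-at-a-time studies. The toy simulator below stands in for an expensive safety-analysis code; the design and effect estimator are generic, not SCREEN's actual algorithm:

```python
from itertools import product

def simulator(x1, x2, x3):
    # hypothetical stand-in for an expensive safety-analysis code run
    return 4.0 * x1 + 0.5 * x2 + 0.0 * x3

# two-level full factorial design over the three (scaled) inputs
design = list(product([-1.0, 1.0], repeat=3))
runs = [simulator(*pt) for pt in design]

def main_effect(i):
    """Difference between the mean response at the high and low
    levels of input i, averaged over the other inputs."""
    hi = [y for pt, y in zip(design, runs) if pt[i] > 0]
    lo = [y for pt, y in zip(design, runs) if pt[i] < 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [abs(main_effect(i)) for i in range(3)]
ranking = sorted(range(3), key=lambda i: -effects[i])   # most important first
```

The ranked subset of inputs would then feed the response-surface propagation step that PROSA-2 performs.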

  15. Computer Security: is your code sane?

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

How many of us write code? Software? Programs? Scripts? How many of us are properly trained in this and how well do we do it? Do we write functional, clean and correct code, without flaws, bugs and vulnerabilities*? In other words: are our codes sane?   Figuring out weaknesses is not that easy (see our quiz in an earlier Bulletin article). Therefore, in order to improve the sanity of your code, prevent common pitfalls, and avoid the bugs and vulnerabilities that can crash your code, or – worse – that can be misused and exploited by attackers, the CERN Computer Security team has reviewed its recommendations for checking the security compliance of your code. “Static Code Analysers” are stand-alone programs that can be run on top of your software stack, regardless of whether it uses Java, C/C++, Perl, PHP, Python, etc. These analysers identify weaknesses and inconsistencies including: employing undeclared variables; expressions resu...
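A toy version of one such static check, flagging names that are read but never assigned (the "undeclared variables" weakness mentioned above), can be written against Python's own syntax tree. Real analysers are far more thorough; this only shows the principle:

```python
import ast
import builtins

def undeclared_names(source):
    """Toy static check: names loaded somewhere in the source that are
    never assigned and are not builtins."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            else:
                used.add(node.id)
    return used - assigned - set(dir(builtins))

# the typo "cuont" is exactly the kind of flaw an analyser flags
flaws = undeclared_names("total = 0\nprint(total + cuont)")
```

Because the check runs on the parse tree, it finds the flaw without ever executing the code, which is what "static" means here.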

  16. MARS CODE MANUAL VOLUME III - Programmer's Manual

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Hwang, Moon Kyu; Jeong, Jae Jun; Kim, Kyung Doo; Bae, Sung Won; Lee, Young Jin; Lee, Won Jae

    2010-02-01

The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF by that of RELAP5. This programmer's manual provides complete information on the overall code structure and the input/output functions of MARS. In addition, brief descriptions of each subroutine and the major variables used in MARS are also included in this report, so that this report should be very useful for code maintenance. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such the layout is very similar to that of RELAP5. This similarity to the RELAP5 input is intentional, as this input scheme allows minimal modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  17. Telomere length in normal and neoplastic canine tissues.

    Science.gov (United States)

    Cadile, Casey D; Kitchell, Barbara E; Newman, Rebecca G; Biller, Barbara J; Hetler, Elizabeth R

    2007-12-01

    To determine the mean telomere restriction fragment (TRF) length in normal and neoplastic canine tissues. 57 solid-tissue tumor specimens collected from client-owned dogs, 40 samples of normal tissue collected from 12 clinically normal dogs, and blood samples collected from 4 healthy blood donor dogs. Tumor specimens were collected from client-owned dogs during diagnostic or therapeutic procedures at the University of Illinois Veterinary Medical Teaching Hospital, whereas 40 normal tissue samples were collected from 12 control dogs. Telomere restriction fragment length was determined by use of an assay kit. A histologic diagnosis was provided for each tumor by personnel at the Veterinary Diagnostic Laboratory at the University of Illinois. Mean of the mean TRF length for 44 normal samples was 19.0 kilobases (kb; range, 15.4 to 21.4 kb), and the mean of the mean TRF length for 57 malignant tumors was 19.0 kb (range, 12.9 to 23.5 kb). Although the mean of the mean TRF length for tumors and normal tissues was identical, tumor samples had more variability in TRF length. Telomerase, which represents the main mechanism by which cancer cells achieve immortality, is an attractive therapeutic target. The ability to measure telomere length is crucial to monitoring the efficacy of telomerase inhibition. In contrast to many other mammalian species, the length of canine telomeres and the rate of telomeric DNA loss are similar to those reported in humans, making dogs a compelling choice for use in the study of human anti-telomerase strategies.

  18. In vivo myograph measurement of muscle contraction at optimal length

    Directory of Open Access Journals (Sweden)

    Ahmed Aminul

    2007-01-01

Full Text Available Abstract Background Current devices for measuring muscle contraction in vivo have limited accuracy in establishing and re-establishing the optimum muscle length. They vary in the reproducibility with which muscle contraction at this length can be determined, and often do not maintain precise conditions during the examination. Consequently, for clinical testing only semi-quantitative methods have been used. Methods We present a newly developed myograph, an accurate measuring device for muscle contraction, consisting of three elements. Firstly, an element for aligning the axle of the device with the physiological axis of muscle contraction; secondly, an element to accurately position and reposition the extremity of the muscle; and thirdly, an element for the progressive pre-stretching and isometric locking of the target muscle. Thus it is possible to examine individual in vivo muscles in every pre-stretched, specified position, to maintain constant muscle-length conditions, and to accurately re-establish the conditions of the measurement process at later sessions. Results In a sequence of experiments, the force of contraction of the muscle at differing stretching lengths was recorded and the forces were determined. The optimum muscle length for maximal force of contraction was established. In a subsequent sequence of experiments with smaller graduations around this optimal stretching length, an increasingly accurate optimum muscle length for maximal force of contraction was determined. This optimum length was also accurately re-established at later sessions. Conclusion We have introduced a new technical solution for valid, reproducible in vivo force measurements on every possible point of the stretching curve. Thus it should be possible to study muscle contraction in vivo to the same level of accuracy as is achieved in tests with in vitro organ preparations.

  19. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  20. Report number codes

    International Nuclear Information System (INIS)

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name

  1. Pulse length assessment of compact ignition tokamak designs

    International Nuclear Information System (INIS)

    Stotler, D.P.; Pomphrey, N.

    1989-07-01

A time-dependent zero-dimensional code has been developed to assess the pulse length and auxiliary heating requirements of Compact Ignition Tokamak (CIT) designs. By taking a global approach to the calculation, parametric studies can be easily performed. The accuracy of the procedure is tested by comparing with the Tokamak Simulation Code which uses theory-based thermal diffusivities. A series of runs is carried out at various levels of energy confinement for each of three possible CIT configurations. It is found that for cases of interest, ignition or an energy multiplication factor Q ≳ 7 can be attained within the first half of the planned five-second flattop with 10–40 MW of auxiliary heating. These results are supported by analytic calculations. 18 refs., 7 figs., 2 tabs
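A zero-dimensional global approach of this kind reduces, at its simplest, to integrating an energy balance such as dW/dt = P − W/τ_E for the plasma stored energy. A minimal sketch with illustrative numbers (not the CIT parameters, and omitting the fusion self-heating and profile physics a real assessment includes):

```python
def stored_energy(p_heat, tau_e, dt=0.001, t_end=5.0):
    """Forward-Euler integration of the global energy balance
    dW/dt = P - W/tau_E, starting from W = 0."""
    w, t = 0.0, 0.0
    while t < t_end:
        w += dt * (p_heat - w / tau_e)
        t += dt
    return w

# 20 MW of heating with a 0.5 s confinement time approaches
# the steady state W = P * tau_E = 10 MJ over the flattop
w_final = stored_energy(p_heat=20.0, tau_e=0.5)
```

Scanning `tau_e` in such a model is what "a series of runs at various levels of energy confinement" amounts to in a global calculation.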

  2. Variable & Recode Definitions - SEER Documentation

    Science.gov (United States)

    Resources that define variables and provide documentation for reporting using SEER and related datasets. Choose from SEER coding and staging manuals plus instructions for recoding behavior, site, stage, cause of death, insurance, and several additional topics. Also guidance on months survived, calculating Hispanic mortality, and site-specific surgery.

  3. Cryptography cracking codes

    CERN Document Server

    2014-01-01

    While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption-essentially the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real world uses, how codes can be broken, and the use of technology in this oft-overlooked field.

  4. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each...... instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...

  5. Transport theory and codes

    International Nuclear Information System (INIS)

    Clancy, B.E.

    1986-01-01

    This chapter begins with a neutron transport equation which includes the one dimensional plane geometry problems, the one dimensional spherical geometry problems, and numerical solutions. The section on the ANISN code and its look-alikes covers problems which can be solved; eigenvalue problems; outer iteration loop; inner iteration loop; and finite difference solution procedures. The input and output data for ANISN is also discussed. Two dimensional problems such as the DOT code are given. Finally, an overview of the Monte-Carlo methods and codes are elaborated on

  6. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables

  7. RADTRAN II: revised computer code to analyze transportation of radioactive material

    International Nuclear Information System (INIS)

    Taylor, J.M.; Daniel, S.L.

    1982-10-01

    A revised and updated version of the RADTRAN computer code is presented. This code has the capability to predict the radiological impacts associated with specific schemes of radioactive material shipments and mode specific transport variables

  8. Otolith Length-Fish Length Relationships of Eleven US Arctic Fish Species and Their Application to Ice Seal Diet Studies

    Science.gov (United States)

    Walker, K. L.; Norcross, B.

    2016-02-01

    The Arctic ecosystem has moved into the spotlight of scientific research in recent years due to increased climate change and oil and gas exploration. Arctic fishes and Arctic marine mammals represent key parts of this ecosystem, with fish being a common part of ice seal diets in the Arctic. Determining sizes of fish consumed by ice seals is difficult because otoliths are often the only part left of the fish after digestion. Otolith length is known to be positively related to fish length. By developing species-specific otolith-body morphometric relationships for Arctic marine fishes, fish length can be determined for fish prey found in seal stomachs. Fish were collected during ice free months in the Beaufort and Chukchi seas 2009 - 2014, and the most prevalent species captured were chosen for analysis. Otoliths from eleven fish species from seven families were measured. All species had strong linear relationships between otolith length and fish total length. Nine species had coefficient of determination values over 0.75, indicating that most of the variability in the otolith to fish length relationship was explained by the linear regression. These relationships will be applied to otoliths found in stomachs of three species of ice seals (spotted Phoca largha, ringed Pusa hispida, and bearded Erignathus barbatus) and used to estimate fish total length at time of consumption. Fish lengths can in turn be used to calculate fish weight, enabling further investigation into ice seal energetic demands. This application will aid in understanding how ice seals interact with fish communities in the US Arctic and directly contribute to diet comparisons among and within ice seal species. A better understanding of predator-prey interactions in the US Arctic will aid in predicting how ice seal and fish species will adapt to a changing Arctic.
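The species-specific otolith-length to fish-length relationship described above is an ordinary least-squares linear regression. The sketch below shows the fit and a prediction step; the otolith and fish measurements are made up for illustration and are not the study's data.

```python
def fit_linear(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical otolith lengths (mm) and fish total lengths (mm):
otolith = [1.2, 1.8, 2.3, 2.9, 3.4, 4.0]
fish = [45, 62, 80, 97, 113, 132]
a, b, r2 = fit_linear(otolith, fish)

# Estimate fish total length from an otolith recovered from a seal stomach:
estimated = a + b * 2.5
```

A coefficient of determination above 0.75, as reported for nine of the eleven species, means most of the fish-length variability is captured by this straight-line fit.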

  9. Utility of telomere length measurements for age determination of humpback whales

    Directory of Open Access Journals (Sweden)

    Morten Tange Olsen

    2014-12-01

    Full Text Available This study examines the applicability of telomere length measurements by quantitative PCR as a tool for minimally invasive age determination of free-ranging cetaceans. We analysed telomere length in skin samples from 28 North Atlantic humpback whales (Megaptera novaeangliae, ranging from 0 to 26 years of age. The results suggested a significant correlation between telomere length and age in humpback whales. However, telomere length was highly variable among individuals of similar age, suggesting that telomere length measured by quantitative PCR is an imprecise determinant of age in humpback whales. The observed variation in individual telomere length was found to be a function of both experimental and biological variability, with the latter perhaps reflecting patterns of inheritance, resource allocation trade-offs, and stochasticity of the marine environment.

  10. Short initial length quench on CICC of ITER TF coils

    International Nuclear Information System (INIS)

    Nicollet, S.; Ciazynski, D.; Duchateau, J.-L.; Lacroix, B.; Bessette, D.; Rodriguez-Mateos, F.; Coatanea-Gouachet, M.; Gauthier, F.

    2014-01-01

    Previous quench studies performed for the International Thermonuclear Experimental Reactor (ITER) Toroidal Field (TF) Coils have led to the identification of two extreme families of quench: first, 'severe' quenches over long initial lengths in high magnetic field; and second, smooth quenches over short initial lengths in the low-field region. Detailed analyses and results on smooth quench propagation and detectability on one TF Cable In Conduit Conductor (CICC) with a lower propagation velocity are presented here. The influence of the initial quench energy is shown, and results of computations either with a Fast Discharge (FD) of the magnet or without one (failure of the voltage quench detection system) are reported. The influence of the central spiral of the conductor on the propagation velocity is also detailed. In the case of a regularly triggered FD, the hot spot temperature criterion of 150 K (with helium and jacket) is fulfilled for an initial quench length of 1 m, whereas this criterion is exceeded (Tmax ≈ 200 K) for an extremely short length of 5 cm. These analyses were carried out using both the Supermagnet™ and Venecia codes, and the comparisons of the results are also discussed

  11. Short initial length quench on CICC of ITER TF coils

    Energy Technology Data Exchange (ETDEWEB)

    Nicollet, S.; Ciazynski, D.; Duchateau, J.-L.; Lacroix, B. [CEA, IRFM, F-13108 Saint-Paul-lez-Durance (France); Bessette, D.; Rodriguez-Mateos, F. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Coatanea-Gouachet, M. [ELC Engineering, 350 chemin du Verladet, F-13290 Les Milles (France); Gauthier, F. [Soditech Ingenierie, 4 bis allée des Gabians, ZI La Frayère, 06150 Cannes (France)

    2014-01-29

    Previous quench studies performed for the International Thermonuclear Experimental Reactor (ITER) Toroidal Field (TF) Coils have led to the identification of two extreme families of quench: first, 'severe' quenches over long initial lengths in high magnetic field; and second, smooth quenches over short initial lengths in the low-field region. Detailed analyses and results on smooth quench propagation and detectability on one TF Cable In Conduit Conductor (CICC) with a lower propagation velocity are presented here. The influence of the initial quench energy is shown, and results of computations either with a Fast Discharge (FD) of the magnet or without one (failure of the voltage quench detection system) are reported. The influence of the central spiral of the conductor on the propagation velocity is also detailed. In the case of a regularly triggered FD, the hot spot temperature criterion of 150 K (with helium and jacket) is fulfilled for an initial quench length of 1 m, whereas this criterion is exceeded (Tmax ≈ 200 K) for an extremely short length of 5 cm. These analyses were carried out using both the Supermagnet™ and Venecia codes, and the comparisons of the results are also discussed.

  12. Development of new two-dimensional spectral/spatial code based on dynamic cyclic shift code for OCDMA system

    Science.gov (United States)

    Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria

    2017-07-01

    In this paper, a new two-dimensional spectral/spatial code family, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for both the spectral and the spatial coding. The proposed system can fully eliminate multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise and thermal noise are used to analyze the code performance. In comparison with existing two-dimensional (2D) codes, such as 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW) and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that the proposed codes have the best performance. By keeping the same code length and increasing the spatial code length, the performance of the 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.
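The basic operation behind cyclic-shift OCDMA constructions is generating a code family as the cyclic shifts of one base sequence with good autocorrelation. The sketch below is a generic one-dimensional illustration of that operation, not the 2D-DCS design itself; the base sequence and its length are chosen for illustration.

```python
def cyclic_shifts(base):
    """All cyclic shifts of a binary base sequence: one codeword per user."""
    n = len(base)
    return [base[i:] + base[:i] for i in range(n)]

def correlation(a, b):
    """Number of chip positions where both codewords carry a pulse."""
    return sum(x & y for x, y in zip(a, b))

# Weight-3 base sequence whose nonzero cyclic autocorrelations are at most 1,
# so distinct users interfere in at most one chip position:
base = [1, 0, 0, 1, 0, 1, 0, 0, 0]
family = cyclic_shifts(base)
max_cross = max(correlation(family[i], family[j])
                for i in range(len(family))
                for j in range(len(family)) if i != j)
```

Keeping the pairwise cross-correlation at or below 1 is what bounds the multiple access interference seen by any single receiver.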

  13. Analysis of Iterated Hard Decision Decoding of Product Codes with Reed-Solomon Component Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2007-01-01

    Products of Reed-Solomon codes are important in applications because they offer a combination of large blocks, low decoding complexity, and good performance. A recent result on random graphs can be used to show that with high probability a large number of errors can be corrected by iterating...... minimum distance decoding. We present an analysis related to density evolution which gives the exact asymptotic value of the decoding threshold and also provides a closed form approximation to the distribution of errors in each step of the decoding of finite length codes....
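The row/column iteration the abstract analyzes can be shown on a toy product code. The paper's component codes are Reed-Solomon; to keep the sketch short, the single-error-correcting Hamming(7,4) code stands in as the component, so this illustrates the iterated hard-decision structure, not the RS decoders themselves.

```python
def hamming74_encode(d):
    """Encode 4 data bits as the codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

def hamming74_correct(c):
    """Correct at most one bit error; the syndrome points at the error position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1
    return c

def product_encode(data4x4):
    """Encode rows, then columns; linearity makes every row/column a codeword."""
    rows = [hamming74_encode(r) for r in data4x4]                 # 4 x 7
    cols = [hamming74_encode([rows[i][j] for i in range(4)])
            for j in range(7)]                                    # 7 columns
    return [[cols[j][i] for j in range(7)] for i in range(7)]     # 7 x 7

def iterate_decode(block, passes=2):
    """Alternate hard-decision decoding of all rows and all columns."""
    for _ in range(passes):
        block = [hamming74_correct(r) for r in block]             # row pass
        cols = [hamming74_correct(c) for c in zip(*block)]        # column pass
        block = [list(row) for row in zip(*cols)]
    return block

data = [[1, 0, 1, 1], [0, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 1]]
clean = product_encode(data)
noisy = [r[:] for r in clean]
for i, j in [(0, 0), (1, 3), (2, 6)]:   # one error per affected row and column
    noisy[i][j] ^= 1
decoded = iterate_decode(noisy)
```

With at most one error per row, a single row pass already succeeds; denser error patterns are what make the iteration, and the paper's density-evolution style analysis of its threshold, interesting.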

  14. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity...... in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase...... the number of dimensions seen by the network using a linear mapping. Receivers can tradeoff computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof....

  15. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2018-01-01

    coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements

  16. SASSYS LMFBR systems code

    International Nuclear Information System (INIS)

    Dunn, F.E.; Prohammer, F.G.; Weber, D.P.

    1983-01-01

    The SASSYS LMFBR systems analysis code is being developed mainly to analyze the behavior of the shut-down heat-removal system and the consequences of failures in the system, although it is also capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to sodium boiling in the core and possible melting of clad and fuel. The code includes a detailed SAS4A multi-channel core treatment plus a general thermal-hydraulic treatment of the primary and intermediate heat-transport loops and the steam generators. The code can handle any LMFBR design, loop or pool, with an arbitrary arrangement of components. The code is fast running: usually faster than real time

  17. OCA Code Enforcement

    Data.gov (United States)

    Montgomery County of Maryland — The Office of the County Attorney (OCA) processes Code Violation Citations issued by County agencies. The citations can be viewed by issued department, issued date...

  18. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code which is capable of determining structural loads on a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  19. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.

  20. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs
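The design-selection step the abstract describes, evaluating total system cost over the two design choices and picking an optimum, amounts to a two-parameter sweep. The sketch below shows that structure only; the cost model and the grid values are illustrative stand-ins, not the actual accelerator costing.

```python
def total_cost(rise_time_us, aspect_ratio):
    """Stand-in cost model: core cost falls with rise time and grows with
    aspect ratio; pulsed-power cost behaves the opposite way."""
    core_cost = 40.0 / rise_time_us + 2.0 * aspect_ratio
    pulser_cost = 1.5 * rise_time_us + 30.0 / aspect_ratio
    return core_cost + pulser_cost

# Sweep the two design choices (injector-voltage rise time, ferrite-core
# aspect ratio) over a grid and select the cheapest combination:
grid = [(rt, ar) for rt in (0.5, 1.0, 2.0, 4.0) for ar in (1.0, 2.0, 3.0)]
best = min(grid, key=lambda p: total_cost(*p))
```

Plotting `total_cost` over the same grid reproduces the abstract's workflow of choosing an optimum design against various criteria rather than a single hard-coded objective.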

  1. VT ZIP Code Areas

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...

  2. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  3. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    2001-01-01

    The description of reactor lattice codes is carried out on the example of the WIMSD-5B code. The WIMS code, in its various versions, is the most widely recognised lattice code. It is used in all parts of the world for calculations of research and power reactors. The version WIMSD-5B is distributed free of charge by the NEA Data Bank. The description of its main features given in the present lecture follows the aspects defined previously for lattice calculations in the lecture on Reactor Lattice Transport Calculations. The spatial models are described, and the approach to the energy treatment is given. Finally, the specific algorithm applied in fuel depletion calculations is outlined. (author)

  4. Leukocyte Telomere Length and Cognitive Function in Older Adults

    Directory of Open Access Journals (Sweden)

    Emily Frith

    2018-04-01

    Full Text Available We evaluated the specific association between leukocyte telomere length and cognitive function among a national sample of the broader U.S. older adult population. Data from the 1999-2002 National Health and Nutrition Examination Survey (NHANES) were used to identify 1,722 adults, between 60 and 85 years of age, with complete data on the selected study variables. DNA was extracted from whole blood for the LTL assay, which uses quantitative polymerase chain reaction to measure telomere length relative to standard reference DNA (T/S ratio). Average telomere length was recorded, with two to three assays performed to control for individual variability. The DSST (Digit Symbol Substitution Test) was used to assess participant executive cognitive functioning tasks of pairing and free recall. Individuals were excluded if they had been diagnosed with coronary artery disease, congestive heart failure, heart attack or stroke at the baseline assessment. Longer leukocyte telomere length was associated with higher cognitive performance, independent of gender, race-ethnicity, physical activity status, body mass index and other covariates. In this sample, there was a strong association between LTL and cognition; for every 1 T/S-ratio increase in LTL, there was a corresponding 9.9-unit increase in the DSST score (β = 9.9; 95% CI: 5.6-14.2)

  5. Critical Care Coding for Neurologists.

    Science.gov (United States)

    Nuwer, Marc R; Vespa, Paul M

    2015-10-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  6. Lattice Index Coding

    OpenAIRE

    Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele

    2014-01-01

    The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...

  7. Towards advanced code simulators

    International Nuclear Information System (INIS)

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  8. Cracking the Gender Codes

    DEFF Research Database (Denmark)

    Rennison, Betina Wolfgang

    2016-01-01

    extensive work to raise the proportion of women. This has helped slightly, but women remain underrepresented at the corporate top. Why is this so? What can be done to solve it? This article presents five different types of answers relating to five discursive codes: nature, talent, business, exclusion...... in leadership management, we must become more aware and take advantage of this complexity. We must crack the codes in order to crack the curve....

  9. The impact of precise robotic lesion length measurement on stent length selection: ramifications for stent savings.

    Science.gov (United States)

    Campbell, Paul T; Kruse, Kevin R; Kroll, Christopher R; Patterson, Janet Y; Esposito, Michele J

    2015-09-01

    Coronary stent deployment outcomes can be negatively impacted by inaccurate lesion measurement and inappropriate stent length selection (SLS). We compared visual estimate of these parameters to those provided by the CorPath 200® Robotic PCI System. Sixty consecutive patients who underwent coronary stent placement utilizing the CorPath System were evaluated. The treating physician assessed orthogonal images and provided visual estimates of lesion length and SLS. The robotic system was then used for the same measures. SLS was considered to be accurate when visual estimate and robotic measures were in agreement. Visual estimate SLSs were considered to be "short" or "long" if they were below or above the robotic-selected stents, respectively. Only 35% (21/60) of visually estimated lesions resulted in accurate SLS, whereas 33% (20/60) and 32% (19/60) of the visually estimated SLSs were long and short, respectively. In 5 cases (8.3%), 1 less stent was placed based on the robotic lesion measurement being shorter than the visual estimate. Visual estimate assessment of lesion length and SLS is highly variable with 65% of the cases being inaccurately measured when compared to objective measures obtained from the robotic system. The 32% of the cases where lesions were visually estimated to be short represents cases that often require the use of extra stents after the full lesion is not covered by 1 stent [longitudinal geographic miss (LGM)]. Further, these data showed that the use of the robotic system prevented the use of extra stents in 8.3% of the cases. Measurement of lesions with robotic PCI may reduce measurement errors, need for extra stents, and LGM. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  11. KENO-V code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16-group Hansen-Roach cross sections and P1 scattering, was one of the first multigroup Monte Carlo codes, and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218-group set is distributed with the code) and has a general PN scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system keff, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes

  12. Code, standard and specifications

    International Nuclear Information System (INIS)

    Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail

    2008-01-01

    Radiography, like other techniques, requires standards. These standards are widely used, and the methods for applying them are well established; radiography testing is therefore practical only when it is based on documented regulations. These regulations or guidelines are documented in codes, standards and specifications. In Malaysia, a level-one or basic radiographer may perform radiography work based on instructions given by a level-two or level-three radiographer. These instructions are produced based on the guidelines set out in the documents, and the level-two radiographer must follow the specifications given in the standard when writing them. This makes clear that radiography is a type of work in which everything must follow the rules. For codes, radiography follows the code of the American Society of Mechanical Engineers (ASME); the only code in Malaysia at this time is the rule published by the Atomic Energy Licensing Board (AELB), known as the Code of Practice for Radiation Protection in Industrial Radiography. With this code in place, all radiography must follow the regulated rules and standards.

  13. Variable-Period Undulators for Synchrotron Radiation

    Energy Technology Data Exchange (ETDEWEB)

    Shenoy, Gopal; Lewellen, John; Shu, Deming; Vinokurov, Nikolai

    2005-02-22

    A new and improved undulator design is provided that enables a variable period length for the production of synchrotron radiation from both medium-energy and high-energy storage rings. The variable period length is achieved using a staggered array of pole pieces made up of high permeability material, permanent magnet material, or an electromagnetic structure. The pole pieces are separated by a variable width space. The sum of the variable width space and the pole width would therefore define the period of the undulator. Features and advantages of the invention include broad photon energy tunability, constant power operation and constant brilliance operation.

  14. Variable-Period Undulators For Synchrotron Radiation

    Science.gov (United States)

    Shenoy, Gopal; Lewellen, John; Shu, Deming; Vinokurov, Nikolai

    2005-02-22

    A new and improved undulator design is provided that enables a variable period length for the production of synchrotron radiation from both medium-energy and high-energy storage rings. The variable period length is achieved using a staggered array of pole pieces made up of high permeability material, permanent magnet material, or an electromagnetic structure. The pole pieces are separated by a variable width space. The sum of the variable width space and the pole width would therefore define the period of the undulator. Features and advantages of the invention include broad photon energy tunability, constant power operation and constant brilliance operation.
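The abstract defines the undulator period as the pole width plus the variable spacing; changing that period shifts the emitted photon energy through the standard on-axis undulator equation, which is the source of the broad tunability claimed. The sketch below combines the two; the pole width, spacing, beam energy and deflection parameter are illustrative numbers, not values from the patent.

```python
def undulator_period_mm(pole_width_mm, space_mm):
    """Period as defined in the abstract: pole width plus variable spacing."""
    return pole_width_mm + space_mm

def first_harmonic_wavelength_m(period_m, gamma, k):
    """On-axis first harmonic: lambda = (lambda_u / (2 gamma^2)) * (1 + K^2 / 2)."""
    return (period_m / (2.0 * gamma ** 2)) * (1.0 + k ** 2 / 2.0)

# Illustrative numbers: 10 mm poles with a 23 mm space, a ~7 GeV beam
# (gamma ~ 13700), and deflection parameter K = 1.25:
p_mm = undulator_period_mm(10.0, 23.0)
lam = first_harmonic_wavelength_m(p_mm / 1000.0, gamma=13700.0, k=1.25)
```

Because the spacing is mechanically adjustable, sweeping `space_mm` sweeps `lam` continuously, which is how a single device covers a broad photon-energy range.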

  15. Calibration Methods for Reliability-Based Design Codes

    DEFF Research Database (Denmark)

    Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard

    2004-01-01

    The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...

  16. The Use of Color-Coded Genograms in Family Therapy.

    Science.gov (United States)

    Lewis, Karen Gail

    1989-01-01

    Describes a variable color-coding system which has been added to the standard family genogram in which characteristics or issues associated with a particular presenting problem or for a particular family are arbitrarily assigned a color. Presents advantages of color-coding, followed by clinical examples. (Author/ABL)

  17. Coding, Organization and Feedback Variables in Motor Skills.

    Science.gov (United States)

    1982-04-01

    results from a same-different task involving physical (i.e., AA) and name (i.e., Aa) matches. Beller (1971) postulated two separate effects of priming. The ... data revealed facilitatory effects in matching for both primed physical and name matches. Beller attributed the effects of physical matches to stimulus ... matches, these necessarily rely upon information stored in long-term memory, and Beller argued that advance information activated the stimulus

  18. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), coding tree contributes to excellent compression performance. However, coding tree brings extremely high computational complexity. Innovative works for improving coding tree to further reduce encoding time are stated in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...
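The coding-tree idea the abstract refers to can be illustrated with a toy quadtree. The variance threshold used below is a stand-in split criterion, not the mechanism proposed in the paper; real HEVC encoders decide CU splits by rate-distortion cost, and fast methods like the one above prune that search early:

```python
# Toy sketch of quadtree CU partitioning (NOT the paper's mechanism):
# recursively split a square block into four quadrants while a simple
# activity measure (variance) exceeds a threshold, stopping at the
# minimum CU size. The variance test stands in for an "early
# termination" rule in a rate-distortion search.

def variance(block):
    vals = [v for row in block for v in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def partition(block, size, thresh=100.0, min_cu=8):
    """Return list of (x, y, size) leaf CUs for a size x size block."""
    def rec(x, y, s):
        sub = [row[x:x + s] for row in block[y:y + s]]
        if s <= min_cu or variance(sub) <= thresh:
            return [(x, y, s)]
        h = s // 2
        leaves = []
        for dy in (0, h):
            for dx in (0, h):
                leaves += rec(x + dx, y + dy, h)
        return leaves
    return rec(0, 0, size)

# A flat block is never split; a textured one splits down to min_cu.
flat = [[0] * 16 for _ in range(16)]
assert partition(flat, 16) == [(0, 0, 16)]
```

Skipping the recursion for smooth regions is exactly where the encoding-time savings in fast CU decision schemes come from.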

  19. The EGS5 Code System

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, Hideo; Namito, Yoshihito; /KEK, Tsukuba; Bielajew, Alex F.; Wilderman, Scott J.; U., Michigan; Nelson, Walter R.; /SLAC

    2005-12-20

    , a deliberate attempt was made to present example problems in order to help the user ''get started'', and we follow that spirit in this report. A series of elementary tutorial user codes are presented in Chapter 3, with more sophisticated sample user codes described in Chapter 4. Novice EGS users will find it helpful to read through the initial sections of the EGS5 User Manual (provided in Appendix B of this report), proceeding then to work through the tutorials in Chapter 3. The User Manuals and other materials found in the appendices contain detailed flow charts, variable lists, and subprogram descriptions of EGS5 and PEGS. Included are step-by-step instructions for developing basic EGS5 user codes and for accessing all of the physics options available in EGS5 and PEGS. Once acquainted with the basic structure of EGS5, users should find the appendices the most frequently consulted sections of this report.

  20. Variable Permanent Magnet Quadrupole

    International Nuclear Information System (INIS)

    Mihara, T.; Iwashita, Y.; Kyoto U.; Kumada, M.; NIRS, Chiba; Spencer, C.M.; SLAC

    2007-01-01

    A permanent magnet quadrupole (PMQ) is one of the candidates for the final focus lens in a linear collider. An over 120 T/m strong variable permanent magnet quadrupole is achieved by the introduction of saturated iron and a 'double ring structure'. A fabricated PMQ achieved 24 T integrated gradient with 20 mm bore diameter, 100 mm magnet diameter and 20 cm pole length. The strength of the PMQ is adjustable in 1.4 T steps, due to its 'double ring structure': the PMQ is split into two nested rings; the outer ring is sliced along the beam line into four parts and is rotated to change the strength. This paper describes the variable PMQ from fabrication to recent adjustments

  1. Sensitivity analysis of FRAPCON-1 computer code to some parameters

    International Nuclear Information System (INIS)

    Chia, C.T.; Silva, C.F. da.

    1987-05-01

    A sensitivity study of the FRAPCON-1 code was performed for the following input data: number of axial nodes, number of time steps, and axial power shape. Their influence on the code response for the fuel centerline temperature, stored energy, internal gas pressure, clad hoop strain and gap width was analyzed. The number of axial nodes has little influence, but care must be taken in the choice of the axial power profile and the time-step length. (Author) [pt

  2. [Renal length measured by ultrasound in adult mexican population].

    Science.gov (United States)

    Oyuela-Carrasco, J; Rodríguez-Castellanos, F; Kimura, E; Delgado-Hernández, R; Herrera-Félix, J P

    2009-01-01

    Renal length estimation by ultrasound is an important parameter in the clinical evaluation of kidney disease and of healthy donors. Changes in renal volume may be a sign of kidney disease. Correct interpretation of renal length requires knowledge of its normal limits, which have not been described for Latin American populations. The aim was to describe normal renal length (RL) by ultrasonography in a group of Mexican adults. RL was measured by ultrasound in 153 healthy Mexican adults stratified by age, and the association of RL with several anthropometric variables was described. A total of 77 males and 76 females were scanned. The average age of the group was 44.12 +/- 15.44 years. The mean weight, body mass index (BMI) and height were 68.87 +/- 11.69 kg, 26.77 +/- 3.82 kg/m2 and 160 +/- 8.62 cm, respectively. Divided by gender, height was 166 +/- 6.15 cm for males and 154.7 +/- 5.97 cm for females (p = 0.000). Left renal length (LRL) in the whole group was 105.8 +/- 7.56 mm and right renal length (RRL) was 104.3 +/- 6.45 mm (p = 0.000). The LRL was 107.16 +/- 6.97 mm for males and 104.6 +/- 7.96 mm for females. The average RRL was 105.74 +/- 5.74 mm for males and 102.99 +/- 6.85 mm for females (p = 0.008). We noted that RL decreased with age and that the rate of decline accelerates after 60 years of age. Both lengths correlated significantly and positively with weight, BMI and height. RL was significantly larger in males than in females for both kidneys (p = 0.036) in this Mexican population. Renal length declines after 60 years of age and especially after 70 years.

  3. Complex variables

    CERN Document Server

    Fisher, Stephen D

    1999-01-01

    The most important topics in the theory and application of complex variables receive a thorough, coherent treatment in this introductory text. Intended for undergraduates or graduate students in science, mathematics, and engineering, this volume features hundreds of solved examples, exercises, and applications designed to foster a complete understanding of complex variables as well as an appreciation of their mathematical beauty and elegance. Prerequisites are minimal; a three-semester course in calculus will suffice to prepare students for discussions of these topics: the complex plane, basic

  4. A restructuring of CF package for MIDAS computer code

    International Nuclear Information System (INIS)

    Park, S. H.; Kim, K. R.; Kim, D. H.; Cho, S. W.

    2004-01-01

    The CF package, which evaluates user-specified 'control functions' and applies them to define or control various aspects of the computation, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. To do this, the data-transfer methods of the current MELCOR code were modified and adopted into the CF package. The FORTRAN77 data structure of the current MELCOR code makes the meaning of the variables difficult to grasp and wastes memory; the difficulty is greater for the CF package because its data consist of location information for other packages' data. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to efficient memory treatment and an easily understood code. The restructuring of the CF package addressed in this paper includes module development and subroutine modification, and covers MELGEN, which generates the data file, as well as MELCOR, which performs the calculation. Verification was done by comparing the results of the modified code with those from the existing code. As the trends are similar, the same approach could be extended to the entire code package. The code restructuring is expected to accelerate the code's domestication, thanks to a direct understanding of each variable and an easy implementation of modified or newly developed models

  5. A restructuring of RN1 package for MIDAS computer code

    International Nuclear Information System (INIS)

    Park, S. H.; Kim, D. H.; Kim, K. R.

    2003-01-01

    The RN1 package, one of the two fission-product packages in MELCOR, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. To do this, the data-transfer methods of the current MELCOR code were modified and adopted into the RN1 package. The FORTRAN77 data structure of the current MELCOR code makes the meaning of the variables difficult to grasp and wastes memory. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to efficient memory treatment and an easily understood code. The restructuring of the RN1 package addressed in this paper includes module development and subroutine modification, and covers MELGEN, which generates the data file, as well as MELCOR, which performs the calculation. Verification was done by comparing the results of the modified code with those from the existing code. As the trends are similar, the same approach could be extended to the entire code package. The code restructuring is expected to accelerate the code's domestication, thanks to a direct understanding of each variable and an easy implementation of modified or newly developed models

  6. A restructuring of COR package for MIDAS computer code

    International Nuclear Information System (INIS)

    Park, S.H.; Kim, K.R.; Kim, D.H.

    2004-01-01

    The COR package, which calculates the thermal response of the core and the lower plenum internal structures and models the relocation of the core and lower plenum structural materials, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. To do this, the data-transfer methods of the current MELCOR code were modified and adopted into the COR package. The FORTRAN77 data structure of the current MELCOR code makes the meaning of the variables difficult to grasp and wastes memory. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to efficient memory treatment and an easily understood code. The restructuring of the COR package addressed in this paper includes module development and subroutine modification. The verification has been done by comparing the results of the modified code with those of the existing code. As the trends are similar, the same approach could be extended to the entire code package. The code restructuring is expected to accelerate the code's domestication, thanks to a direct understanding of each variable and an easy implementation of the modified or newly developed models. (author)

  7. A restructuring of RN2 package for MIDAS computer code

    International Nuclear Information System (INIS)

    Park, S. H.; Kim, D. H.

    2003-01-01

    The RN2 package, one of the two fission-product packages in MELCOR, has been restructured for the MIDAS computer code. MIDAS is being developed as an integrated severe accident analysis code with a user-friendly graphical user interface and a modernized data structure. To do this, the data-transfer methods of the current MELCOR code were modified and adopted into the RN2 package. The FORTRAN77 data structure of the current MELCOR code makes the meaning of the variables difficult to grasp and wastes memory. New features of FORTRAN90 make it possible to allocate storage dynamically and to use user-defined data types, which leads to efficient memory treatment and an easily understood code. The restructuring of the RN2 package addressed in this paper includes module development and subroutine modification, and covers MELGEN, which generates the data file, as well as MELCOR, which performs the calculation. Validation was done by comparing the results of the modified code with those from the existing code. As the trends are similar, the same approach could be extended to the entire code package. The code restructuring is expected to accelerate the code's domestication, thanks to a direct understanding of each variable and an easy implementation of modified or newly developed models

  8. Upgrades to the WIMS-ANL code

    International Nuclear Information System (INIS)

    Woodruff, W. L.

    1998-01-01

    The dusty old source code in WIMS-D4M has been completely rewritten to conform more closely with current FORTRAN coding practices. The revised code contains many improvements in appearance, error checking and in control of the output. The output is now tabulated to fit the typical 80 column window or terminal screen. The Segev method for resonance integral interpolation is now an option. Most of the dimension limitations have been removed and replaced with variable dimensions within a compile-time fixed container. The library is no longer restricted to the 69 energy group structure, and two new libraries have been generated for use with the code. The new libraries are both based on ENDF/B-VI data with one having the original 69 energy group structure and the second with a 172 group structure. The common source code can be used with PCs using both Windows 95 and NT, with a Linux based operating system and with UNIX based workstations. Comparisons of this version of the code to earlier evaluations with ENDF/B-V are provided, as well as comparisons with the new libraries

  9. Upgrades to the WIMS-ANL code

    International Nuclear Information System (INIS)

    Woodruff, W.L.; Leopando, L.S.

    1998-01-01

    The dusty old source code in WIMS-D4M has been completely rewritten to conform more closely with current FORTRAN coding practices. The revised code contains many improvements in appearance, error checking and in control of the output. The output is now tabulated to fit the typical 80 column window or terminal screen. The Segev method for resonance integral interpolation is now an option. Most of the dimension limitations have been removed and replaced with variable dimensions within a compile-time fixed container. The library is no longer restricted to the 69 energy group structure, and two new libraries have been generated for use with the code. The new libraries are both based on ENDF/B-VI data with one having the original 69 energy group structure and the second with a 172 group structure. The common source code can be used with PCs using both Windows 95 and NT, with a Linux based operating system and with UNIX based workstations. Comparisons of this version of the code to earlier evaluations with ENDF/B-V are provided, as well as comparisons with the new libraries. (author)

  10. Nuclear code abstracts (1975 edition)

    International Nuclear Information System (INIS)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-02-01

    Nuclear Code Abstracts is compiled in the Nuclear Code Committee to exchange information of the nuclear code developments among members of the committee. Enlarging the collection, the present one includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those in the library. (auth.)

  11. Variable stars

    International Nuclear Information System (INIS)

    Feast, M.W.; Wenzel, W.; Fernie, J.D.; Percy, J.R.; Smak, J.; Gascoigne, S.C.B.; Grindley, J.E.; Lovell, B.; Sawyer Hogg, H.B.; Baker, N.; Fitch, W.S.; Rosino, L.; Gursky, H.

    1976-01-01

    A critical review of variable stars is presented. A fairly complete summary of major developments and discoveries during the period 1973-1975 is given. The broad developments and new trends are outlined. Essential problems for future research are identified. (B.R.H.)

  12. Input/output manual of light water reactor fuel performance code FEMAXI-7 and its related codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa [Japan Atomic Energy Agency, Nuclear Safety Research Center, Tokai, Ibaraki (Japan); Saitou, Hiroaki [ITOCHU Techno-Solutions Corp., Tokyo (Japan)

    2012-07-15

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which has been fully disclosed in the code model description published recently as JAEA-Data/Code 2010-035. The present manual, which is the counterpart of this description, gives detailed explanations of operation method of FEMAXI-7 code and its related codes, methods of Input/Output, methods of source code modification, features of subroutine modules, and internal variables in a specific manner in order to facilitate users to perform a fuel analysis with FEMAXI-7. This report includes some descriptions which are modified from the original contents of JAEA-Data/Code 2010-035. A CD-ROM is attached as an appendix. (author)

  13. Input/output manual of light water reactor fuel performance code FEMAXI-7 and its related codes

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa; Saitou, Hiroaki

    2012-07-01

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which has been fully disclosed in the code model description published recently as JAEA-Data/Code 2010-035. The present manual, which is the counterpart of this description, gives detailed explanations of operation method of FEMAXI-7 code and its related codes, methods of Input/Output, methods of source code modification, features of subroutine modules, and internal variables in a specific manner in order to facilitate users to perform a fuel analysis with FEMAXI-7. This report includes some descriptions which are modified from the original contents of JAEA-Data/Code 2010-035. A CD-ROM is attached as an appendix. (author)

  14. ACE - Manufacturer Identification Code (MID)

    Data.gov (United States)

    Department of Homeland Security — The ACE Manufacturer Identification Code (MID) application is used to track and control identifications codes for manufacturers. A manufacturer is identified on an...

  15. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
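As a concrete instance of the linear codes the book introduces, here is a minimal sketch of the Hamming (7,4) code, which corrects any single flipped bit. The bit layout is the conventional one, with parity bits at positions 1, 2 and 4 (this is a textbook construction, not code from the book):

```python
# Minimal Hamming (7,4) encoder/decoder. Positions are 1-indexed in
# the comments: parity bits sit at 1, 2, 4; data bits at 3, 5, 6, 7.

def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-indexed position of the error
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[3] ^= 1                          # flip one bit in transit
assert hamming74_decode(word) == [1, 0, 1, 1]
```

The syndrome directly encodes the error position, which is what makes decoding this code so cheap.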

  16. Optical coding theory with Prime

    CERN Document Server

    Kwong, Wing C

    2013-01-01

    Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory-until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct
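The basic prime-code construction the book builds on can be sketched in a few lines. The property checked below, that two distinct codewords of a family overlap in exactly one chip position, is what gives these codes the low cross-correlation optical CDMA needs. This is the textbook construction, not code from the book:

```python
# Basic prime code over GF(p) for a prime p: codeword i places one
# pulse in each of p "blocks" of length p, at offset (i*j mod p) in
# block j. Each codeword has length p**2 and weight p, and two
# distinct codewords coincide only in block j = 0.

def prime_code(i, p):
    """0/1 chip sequence of length p*p for prime-sequence number i."""
    chips = [0] * (p * p)
    for j in range(p):
        chips[j * p + (i * j) % p] = 1
    return chips

p = 5
codes = [prime_code(i, p) for i in range(p)]
assert all(sum(c) == p for c in codes)                 # weight p
overlap = sum(a & b for a, b in zip(codes[1], codes[3]))
assert overlap == 1                                    # minimal overlap
```

The overlap bound follows because (i1 - i2) * j ≡ 0 (mod p) with i1 ≠ i2 forces j = 0 when p is prime.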

  17. The Aster code

    International Nuclear Information System (INIS)

    Delbecq, J.M.

    1999-01-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  18. Fundamentals of the DIGES code

    Energy Technology Data Exchange (ETDEWEB)

    Simos, N.; Philippacopoulos, A.J.

    1994-08-01

    Recently the authors have completed the development of the DIGES code (Direct GEneration of Spectra) for the US Nuclear Regulatory Commission. This paper presents the fundamental theoretical aspects of the code. The basic modeling involves a representation of typical building-foundation configurations as multi-degree-of-freedom dynamic systems which are subjected to dynamic inputs in the form of applied forces or pressures at the superstructure or in the form of ground motions. Both the deterministic and the probabilistic aspects of DIGES are described. Alternate ways of defining the seismic input for the estimation of in-structure spectra, and their consequences for realistically appraising the variability of the structural response, are discussed in detail. These include definitions of the seismic input by ground acceleration time histories, ground response spectra, Fourier amplitude spectra or power spectral densities. Conversions of one of these forms to another, required by certain analysis techniques, have been shown to lead in certain cases to controversial results. Further considerations include the definition of the seismic input as the excitation which is directly applied at the foundation of a structure or as the ground motion of the site of interest at a given point. In the latter case, issues related to transferring this motion to the foundation through convolution/deconvolution and, more generally, through kinematic interaction approaches are considered.
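One of the input conversions mentioned in the abstract, from a ground-acceleration time history to a Fourier amplitude spectrum, can be sketched as follows. The plain DFT, the amplitude scaling, and the 2 Hz test signal are illustrative choices, not taken from DIGES (a real implementation would use an FFT):

```python
# Fourier amplitude spectrum of a sampled ground-acceleration record,
# computed with a plain DFT for self-containment.
import cmath
import math

def fourier_amplitudes(accel, dt):
    """Return (frequencies, amplitudes) of a sampled time history."""
    n = len(accel)
    freqs, amps = [], []
    for k in range(n // 2):
        s = sum(a * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, a in enumerate(accel))
        freqs.append(k / (n * dt))
        amps.append(abs(s) * dt)   # scale so amplitude has units of g*s
    return freqs, amps

# A pure 2 Hz sinusoid, sampled at 100 Hz for 2 s, peaks at the 2 Hz bin.
dt, n = 0.01, 200
accel = [math.sin(2 * math.pi * 2.0 * i * dt) for i in range(n)]
freqs, amps = fourier_amplitudes(accel, dt)
assert abs(freqs[amps.index(max(amps))] - 2.0) < 1e-6
```

Going the other way (e.g. from a target response spectrum back to a compatible time history) is the ill-posed direction that the abstract flags as a source of controversial results.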

  19. Speech coding code- excited linear prediction

    CERN Document Server

    Bäckström, Tom

    2017-01-01

    This book provides a scientific understanding of the most central techniques used in speech coding, both for advanced students and for professionals with a background in speech, audio, and/or digital signal processing. It provides a clear connection between the whys, hows, and whats, thus enabling a clear view of the necessity, purpose, and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: What do we want to achieve, and especially why is this goal important? Resource/Information: What information is available, and how can it be useful? Resource/Platform: What kinds of platforms are we working with, and what are their capabilities and restrictions? This includes computational, memory, and acoustic properties, and the transmission capacity of the devices used. The book goes on to address Solutions: Which solutions have been proposed, and how can they be used to reach the stated goals, and ...
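The "linear prediction" in code-excited linear prediction can be sketched with the classical autocorrelation method and the Levinson-Durbin recursion: fit coefficients a_k so that x[n] ≈ Σ a_k·x[n-k]. The first-order test signal below is illustrative, and the excitation-codebook search that completes CELP is omitted (real speech coders use prediction orders around 10-16):

```python
# Autocorrelation-method LPC via the Levinson-Durbin recursion.

def autocorr(x, lag):
    return sum(x[n] * x[n - lag] for n in range(lag, len(x)))

def levinson_durbin(x, order):
    """Return LPC coefficients [a1, ..., a_order] for signal x."""
    r = [autocorr(x, k) for k in range(order + 1)]
    a = [0.0] * (order + 1)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / err                  # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= (1 - k * k)             # remaining prediction error
    return a[1:]

# x[n] = 0.9 * x[n-1] is predicted almost perfectly at order 1.
x = [0.9 ** n for n in range(200)]
coeffs = levinson_durbin(x, 1)
assert abs(coeffs[0] - 0.9) < 0.01
```

In a full CELP coder, these coefficients shape a synthesis filter, and the encoder then searches a codebook for the excitation that best reproduces the residual.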

  20. A PCR-based protocol to accurately size C9orf72 intermediate-length alleles.

    Science.gov (United States)

    Biasiotto, Giorgio; Archetti, Silvana; Di Lorenzo, Diego; Merola, Francesca; Paiardi, Giulia; Borroni, Barbara; Alberici, Antonella; Padovani, Alessandro; Filosto, Massimiliano; Bonvicini, Cristian; Caimi, Luigi; Zanella, Isabella

    2017-04-01

    Although large expansions of the non-coding GGGGCC repeat in C9orf72 gene are clearly defined as pathogenic for Amyotrophic Lateral Sclerosis (ALS) and Frontotemporal Lobar Degeneration (FTLD), intermediate-length expansions have also been associated with those and other neurodegenerative diseases. Intermediate-length allele sizing is complicated by intrinsic properties of current PCR-based methodologies, in that somatic mosaicism could be suspected. We designed a protocol that allows the exact sizing of intermediate-length alleles, as well as the identification of large expansions. Copyright © 2016 Elsevier Ltd. All rights reserved.
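The sizing problem itself, counting consecutive GGGGCC units in a sequenced allele, can be illustrated with a toy counter. This is not the paper's PCR-based protocol, and the flanking sequences in the example are invented:

```python
# Toy repeat sizer: length, in units, of the longest run of a repeat
# motif (GGGGCC for C9orf72) within a sequence string.

def count_ggggcc_repeats(seq, unit="GGGGCC"):
    """Longest run of consecutive `unit` copies in seq, in repeat units."""
    best = run = pos = 0
    while pos + len(unit) <= len(seq):
        if seq[pos:pos + len(unit)] == unit:
            run += 1
            pos += len(unit)
            best = max(best, run)
        else:
            run = 0
            pos += 1
    return best

# Invented read with an 8-repeat intermediate-length allele.
read = "TTGAA" + "GGGGCC" * 8 + "CCTAG"
assert count_ggggcc_repeats(read) == 8
```

The hard part the protocol addresses is upstream of this step: PCR artifacts over GC-rich repeats can mimic somatic mosaicism, so the raw reads themselves must be trustworthy before any such counting is meaningful.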