Quantum-capacity-approaching codes for the detected-jump channel
International Nuclear Information System (INIS)
Grassl, Markus; Wei Zhaohui; Ji Zhengfeng; Zeng Bei
2010-01-01
The quantum-channel capacity gives the ultimate limit for the rate at which quantum data can be reliably transmitted through a noisy quantum channel. Degradable quantum channels are among the few channels whose quantum capacities are known. Given the quantum capacity of a degradable channel, it remains challenging to find a practical coding scheme which approaches capacity. Here we discuss code designs for the detected-jump channel, a degradable channel with practical relevance describing the physics of spontaneous decay of atoms with detected photon emission. We show that this channel can be used to simulate a binary classical channel with both erasures and bit flips. The capacity of the simulated classical channel gives a lower bound on the quantum capacity of the detected-jump channel. When the jump probability is small, it almost equals the quantum capacity. Hence using a classical capacity-approaching code for the simulated classical channel yields a quantum code which approaches the quantum capacity of the detected-jump channel.
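The lower bound described above can be made concrete. For a binary channel that erases a bit with probability e and flips it with probability p (illustrative parameter definitions, not necessarily the authors' exact construction), the capacity evaluates to C = (1 - e)(1 - H2(p/(1 - e))), which reduces to the familiar BEC capacity 1 - e when p = 0:

```python
from math import log2

def h2(x):
    """Binary entropy function H2(x) in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def bsec_capacity(e, p):
    """Capacity (bits/use) of a binary symmetric erasure channel:
    a bit is erased with probability e, flipped with probability p,
    and received correctly with probability 1 - e - p."""
    if e >= 1.0:
        return 0.0
    return (1 - e) * (1 - h2(p / (1 - e)))

# With no bit flips the bound reduces to the BEC capacity 1 - e.
print(bsec_capacity(0.1, 0.0))   # -> 0.9
```

When the jump probability is small, p is small relative to 1 - e, so the entropy penalty term is nearly zero and the bound stays close to the erasure-only capacity, matching the abstract's observation.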
Djordjevic, Ivan; Vasic, Bane
2010-01-01
This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.
Directory of Open Access Journals (Sweden)
Simoens Frederik
2006-01-01
This paper concerns channel tracking in a multiantenna context for correlated flat-fading channels obeying a Gauss-Markov model. It is known that data-aided tracking of fast-fading channels requires a lot of pilot symbols in order to achieve sufficient accuracy, and hence decreases the spectral efficiency. To overcome this problem, we design a code-aided estimation scheme which exploits information from both the pilot symbols and the unknown coded data symbols. The algorithm is derived based on a factor graph representation of the system and application of the sum-product algorithm. The sum-product algorithm reveals how soft information from the decoder should be exploited for the purpose of estimation and how the information bits can be detected. Simulation results illustrate the effectiveness of our approach.
Protograph LDPC Codes Over Burst Erasure Channels
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes, with an iterative decoding threshold that approaches the capacity of the binary erasure channel. The other class is designed for short block sizes and is based on maximizing the minimum stopping set size. For high code rates and short blocks, the second class outperforms the first.
Worst configurations (instantons) for compressed sensing over reals: a channel coding approach
International Nuclear Information System (INIS)
Chertkov, Michael; Chilappagari, Shashi K.; Vasic, Bane
2010-01-01
We consider the Linear Programming (LP) solution of a Compressed Sensing (CS) problem over reals, also known as the Basis Pursuit (BasP) algorithm. The BasP allows interpretation as a channel-coding problem, and it guarantees error-free reconstruction over reals for a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how the BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which the BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error patterns, so-called instantons, degrading performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. The BasP fails when its output differs from the actual error pattern. We design a CS-Instanton Search Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that the BasP fails on the instanton, while its action on any modification of the CS-instanton decreasing a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error vector, the CS-ISA converges to an instanton in a small finite number of steps. The performance of the CS-ISA is tested on the example of a randomly generated 512 × 120 matrix, which outputs the shortest instanton (error-vector) pattern, of length 11.
Optimal Codes for the Burst Erasure Channel
Hamkins, Jon
2010-01-01
Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic has concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block-interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctable burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block-interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length; the inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure channels.
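The principle behind block interleaving can be illustrated with a toy sketch (hypothetical depth and codeword length, not parameters from the article): each row is a single-parity-check codeword, transmission is column-by-column, so a burst of up to `depth` consecutive erasures hits each codeword at most once and is repairable from the parity.

```python
# Block-interleaved single-parity-check (SPC) code recovering a burst erasure.
depth, k = 4, 7                      # interleaver depth, data bits per codeword
data = [[(i * j + j) % 2 for j in range(k)] for i in range(depth)]
code = [row + [sum(row) % 2] for row in data]          # append even parity
n = k + 1
stream = [code[i][j] for j in range(n) for i in range(depth)]  # interleave

burst_start = 9
received = list(stream)
for t in range(burst_start, burst_start + depth):      # erase a burst
    received[t] = None

# De-interleave; each codeword now has at most one erasure.
rows = [[received[j * depth + i] for j in range(n)] for i in range(depth)]
for row in rows:                                       # per-codeword repair
    if None in row:
        miss = row.index(None)
        row[miss] = sum(b for b in row if b is not None) % 2
decoded = [row[:k] for row in rows]
print(decoded == data)   # -> True
```

A burst of length `depth` touches `depth` consecutive stream positions, which land in `depth` distinct rows, so even-parity repair always succeeds; this is the sense in which the interleaved MDS construction is near-optimal for pure burst erasures.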
LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources
Directory of Open Access Journals (Sweden)
Javier Garcia-Frias
2005-05-01
We propose a coding scheme based on the use of systematic linear codes with a low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
Optimal super dense coding over memory channels
Shadman, Zahra; Kampermann, Hermann; Macchiavello, Chiara; Bruß, Dagmar
2011-01-01
We study the super dense coding capacity in the presence of quantum channels with correlated noise. We investigate both the cases of unitary and non-unitary encoding. Pauli channels for arbitrary dimensions are treated explicitly. The super dense coding capacity for some special channels and resource states is derived for unitary encoding. We also provide an example of a memory channel where non-unitary encoding leads to an improvement in the super dense coding capacity.
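For unitary encoding over a noiseless channel, the dense-coding capacity of a bipartite resource state is known to be C = log2 d + S(ρ_B) − S(ρ_AB) (the Hiroshima formula). The sketch below (assuming a qubit Bell pair as the resource) evaluates it numerically and recovers the textbook value of two classical bits per transmitted qubit:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), in bits."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]            # drop numerically-zero eigenvalues
    return float(-np.sum(vals * np.log2(vals)))

def dense_coding_capacity(rho_ab, d):
    """C = log2(d) + S(rho_B) - S(rho_AB) for unitary encoding on
    subsystem A of a d x d bipartite resource state (noiseless channel)."""
    rho = rho_ab.reshape(d, d, d, d)     # indices (a, b, a', b')
    rho_b = np.trace(rho, axis1=0, axis2=2)   # partial trace over A
    return np.log2(d) + von_neumann_entropy(rho_b) - von_neumann_entropy(rho_ab)

# Maximally entangled Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_bell = np.outer(phi, phi)
print(dense_coding_capacity(rho_bell, 2))   # -> 2.0 (two bits per qubit)
```

Memory (correlated-noise) channels as studied in the paper modify this expression; the snippet only illustrates the noiseless unitary-encoding baseline.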
Joint source-channel coding using variable length codes
Balakirsky, V.B.
2001-01-01
We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant
Protograph LDPC Codes for the Erasure Channel
Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes; simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed.
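The copy-and-permute operation can be sketched as lifting a small base (protograph) matrix: each 1 becomes an N x N circulant permutation and each 0 becomes the zero block. The base matrix and shift values below are illustrative, not a code from the presentation:

```python
import numpy as np

def lift_protograph(base, shifts, N):
    """Replace each nonzero entry of the base matrix with an N x N circulant
    permutation (identity cyclically shifted by shifts[i][j]), and each
    zero entry with the N x N zero block."""
    I = np.eye(N, dtype=int)
    m, n = base.shape
    H = np.zeros((m * N, n * N), dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                H[i*N:(i+1)*N, j*N:(j+1)*N] = np.roll(I, shifts[i][j], axis=1)
    return H

base = np.array([[1, 1, 1, 0],       # toy protograph, not from the slides
                 [0, 1, 1, 1]])
shifts = [[1, 2, 0, 0],
          [0, 3, 1, 4]]
H = lift_protograph(base, shifts, N=5)
print(H.shape)                        # -> (10, 20)
# Node degrees of the protograph are preserved in the derived graph:
print(list(H.sum(axis=1)))            # every row has weight 3
```

Because each circulant contributes exactly one 1 per row and column, the lifted graph inherits the degree profile of the protograph, which is what makes threshold analysis on the small graph carry over to the large one.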
Iterative List Decoding of Concatenated Source-Channel Codes
Directory of Open Access Journals (Sweden)
Hedayat Ahmadreza
2005-01-01
Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect is magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable-length codes for joint source-channel decoding. In this paper, we improve the performance of residual-redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.
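The role of the outer CRC in list decoding can be sketched as follows: the decoder produces a ranked list of candidate sequences and accepts the first one whose checksum verifies. The snippet uses a binary CRC-32 as a stand-in for the paper's nonbinary CRC, and the candidate list is hypothetical:

```python
import zlib

def encode(payload: bytes) -> bytes:
    """Append a CRC-32 checksum to the payload (stand-in for the outer CRC)."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def list_decode(candidates):
    """Given candidate payload+CRC sequences ranked from most to least likely
    (e.g., by a soft-input VLC list decoder), return the first payload whose
    CRC verifies, or None if no candidate is valid."""
    for cand in candidates:
        payload, crc = cand[:-4], cand[-4:]
        if zlib.crc32(payload).to_bytes(4, "big") == crc:
            return payload
    return None

tx = encode(b"entropy-coded packet")
corrupted = bytearray(tx)
corrupted[3] ^= 0x40                              # a residual channel error
ranked_list = [bytes(corrupted), tx]              # best-metric candidate is wrong
print(list_decode(ranked_list))   # -> b'entropy-coded packet'
```

The CRC thus converts a soft ranked list into a hard decision, rejecting the most likely but erroneous candidate in favor of the first consistent one.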
Adaptive RAC codes employing statistical channel evaluation ...
African Journals Online (AJOL)
An adaptive encoding technique using row and column array (RAC) codes employing a different number of parity columns that depends on the channel state is proposed in this paper. The trellises of the proposed adaptive codes and a statistical channel evaluation technique employing these trellises are designed and ...
Decoding LDPC Convolutional Codes on Markov Channels
Directory of Open Access Journals (Sweden)
Kashyap Manohar
2008-01-01
This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.
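A minimal simulation of the Gilbert-Elliott model referenced above may help fix ideas; the transition and error probabilities here are illustrative assumptions, not values from the paper:

```python
import random

def gilbert_elliott(n_bits, p_gb=0.05, p_bg=0.3, e_good=0.001, e_bad=0.2, seed=0):
    """Simulate a two-state Markov (Gilbert-Elliott) bit-flip channel.
    p_gb / p_bg: transition probabilities good->bad and bad->good;
    e_good / e_bad: bit-error probability within each state.
    Returns a 0/1 error-indicator sequence (errors cluster in bad-state bursts)."""
    rng = random.Random(seed)
    state = "good"
    errors = []
    for _ in range(n_bits):
        e = e_good if state == "good" else e_bad
        errors.append(1 if rng.random() < e else 0)
        flip = p_gb if state == "good" else p_bg
        if rng.random() < flip:
            state = "bad" if state == "good" else "good"
    return errors

errs = gilbert_elliott(10000)
print(sum(errs) / len(errs))   # empirical error rate; errors arrive in bursts
```

Joint decoding/state estimation exploits exactly this burstiness: inferring the hidden state sequence lets the decoder weight bad-state bits less than good-state bits.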
Ripple Design of LT Codes for BIAWGN Channels
DEFF Research Database (Denmark)
Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip
2014-01-01
This paper presents a novel framework which enables the design of rateless codes for binary input additive white Gaussian noise (BIAWGN) channels, using the ripple-based approach known from work on the binary erasure channel (BEC). We reveal that several aspects of the analytical results from...
Bidirectional Fano Algorithm for Lattice Coded MIMO Channels
Al-Quwaiee, Hessa
2013-01-01
channel model. Channel codes based on lattices are preferred due to three facts: lattice codes have a simple structure, they can achieve the limits of the channel, and they can be decoded efficiently using lattice decoders, which can be considered
Channel coding techniques for wireless communications
Deergha Rao, K
2015-01-01
The book discusses modern channel coding techniques for wireless communications such as turbo codes, low-density parity check (LDPC) codes, space–time (ST) coding, RS (or Reed–Solomon) codes and convolutional codes. Many illustrative examples are included in each chapter for easy understanding of the coding techniques. The text is integrated with MATLAB-based programs to enhance the understanding of the subject's underlying theories. It includes current topics of increasing importance such as turbo codes, LDPC codes, Luby transform (LT) codes, Raptor codes, and ST coding in detail, in addition to traditional codes such as cyclic codes, BCH (or Bose–Chaudhuri–Hocquenghem) and RS codes and convolutional codes. Multiple-input multiple-output (MIMO) communication is a multiple-antenna technology, which is an effective method for high-speed or high-reliability wireless communications. PC-based MATLAB m-files for the illustrative examples are provided on the book page on Springer.com for free download.
Joint source/channel coding of scalable video over noisy channels
Energy Technology Data Exchange (ETDEWEB)
Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences University of California Berkeley, California94720 (United States)
1997-01-01
We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality, and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is in the form of a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.
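The partition idea can be sketched with a toy exhaustive search. The distortion-rate and residual-loss models below are illustrative assumptions, not the codec of the paper; the point is only that splitting the budget beats spending it all on source bits or all on channel protection:

```python
# Toy joint source/channel bit allocation: split a fixed budget R between
# source bits (lower quantization distortion) and channel bits (lower loss
# probability), minimizing expected distortion.
D_MAX = 1.0                              # distortion when the packet is lost

def source_distortion(r_s):
    return 2.0 ** (-0.05 * r_s)          # assumed rate-distortion decay

def loss_prob(r_c):
    return 0.5 * 2.0 ** (-0.1 * r_c)     # assumed residual packet-loss model

def expected_distortion(r_s, r_c):
    p = loss_prob(r_c)
    return (1 - p) * source_distortion(r_s) + p * D_MAX

R = 200
best = min(range(R + 1), key=lambda r_s: expected_distortion(r_s, R - r_s))
print(best, expected_distortion(best, R - best))   # interior split wins
```

The same search generalizes directly to a channel state given as a probability distribution: replace `loss_prob` with its expectation over states.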
Radio frequency channel coding made easy
Faruque, Saleh
2016-01-01
This book introduces Radio Frequency Channel Coding to a broad audience. The author blends theory and practice to bring readers up-to-date in key concepts, underlying principles and practical applications of wireless communications. The presentation is designed to be easily accessible, minimizing mathematics and maximizing visuals.
New Channel Coding Methods for Satellite Communication
Directory of Open Access Journals (Sweden)
J. Sebesta
2010-04-01
This paper deals with new progressive channel coding methods for short message transmission via a satellite transponder using a predetermined frame length. The key benefits of this contribution are the modification and implementation of a new turbo code and the utilization of unique features, with application of methods for bit-error-rate estimation and an algorithm for output message reconstruction. The mentioned methods allow error-free communication with a very low Eb/N0 ratio; they have been adopted for satellite communication, but they can be applied to other systems working with a very low Eb/N0 ratio.
LDPC Code Design for Nonuniform Power-Line Channels
Directory of Open Access Journals (Sweden)
Sanaei Ali
2007-01-01
We investigate low-density parity-check code design for discrete multitone channels over power lines. Discrete multitone channels are well modeled as nonuniform channels, that is, different bits experience different channel parameters. We propose a coding system for discrete multitone channels that allows a single code to be used over a nonuniform channel. The number of code parameters for the proposed system is much greater than in a conventional channel; therefore, search-based optimization methods are impractical. We first formulate the problem of optimizing the rate of an irregular low-density parity-check code, with guaranteed convergence over a general nonuniform channel, as an iterative linear program, which is significantly more efficient than search-based methods. Then we use this technique for a typical power-line channel. The methodology of this paper is directly applicable to all decoding algorithms for which a density evolution analysis is possible.
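For the simpler case of a uniform binary erasure channel, the density-evolution analysis mentioned above reduces to one scalar recursion. The sketch below finds the decoding threshold of a (3,6)-regular LDPC ensemble by bisection; this is a standard textbook computation, not the power-line optimization of the paper:

```python
def de_converges(eps, dv=3, dc=6, iters=5000, tol=1e-9):
    """Density evolution for a (dv, dc)-regular LDPC code on the BEC:
    x_{l+1} = eps * (1 - (1 - x_l)**(dc-1))**(dv-1).
    Returns True if the erasure fraction x converges to (near) zero."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

lo, hi = 0.0, 1.0
for _ in range(40):                 # bisection for the threshold eps*
    mid = (lo + hi) / 2
    if de_converges(mid):
        lo = mid
    else:
        hi = mid
print(round(lo, 4))                 # close to the known (3,6) threshold 0.4294
```

On a nonuniform channel each bit class gets its own erasure parameter, which is what blows up the number of code parameters and motivates the paper's linear-programming formulation instead of this kind of search.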
Image content authentication based on channel coding
Zhang, Fan; Xu, Lei
2008-03-01
Content authentication determines whether an image has been tampered with and, if necessary, locates the malicious alterations made to the image. Authentication of a still image or video is motivated by the recipient's interest, and its principle is that a receiver must be able to reliably identify the source of the document. Several techniques and concepts based on data hiding or steganography have been designed as means for image authentication. This paper presents a color image authentication algorithm based on convolutional coding. The high bits of the color digital image are encoded with convolutional codes for tamper detection and localization, while the authentication messages are hidden in the low bits of the image to keep the authentication invisible. All communication channels are subject to errors introduced by additive Gaussian noise in their environment. Data perturbations cannot be eliminated, but their effect can be minimized by the use of Forward Error Correction (FEC) techniques in the transmitted data stream and decoders in the receiving system that detect and correct bits in error. The message of each pixel is convolutionally encoded; after parity checking and block interleaving, the redundant bits are embedded in the image offset. Tampering can thus be detected and restored without accessing the original image.
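A minimal sketch of the kind of convolutional encoder involved (the standard rate-1/2, constraint-length-3 code with generators (7, 5) in octal; the abstract does not specify which generators the algorithm uses):

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3,
    generators g0 = 111 (7 octal) and g1 = 101 (5 octal)."""
    s1 = s2 = 0                     # shift-register state
    out = []
    for u in bits:
        out.append(u ^ s1 ^ s2)     # g0 = 1 + D + D^2
        out.append(u ^ s2)          # g1 = 1 + D^2
        s1, s2 = u, s1              # shift the register
    return out

print(conv_encode([1, 0, 1]))   # -> [1, 1, 1, 0, 0, 0]
```

Applied per pixel as the abstract describes, the parity stream produced this way is what gets interleaved and embedded in the low-order image bits.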
Gallager error-correcting codes for binary asymmetric channels
International Nuclear Information System (INIS)
Neri, I; Skantzos, N S; Bollé, D
2008-01-01
We derive critical noise levels for Gallager codes on asymmetric channels as a function of the input bias and the temperature. Using a statistical mechanics approach we study the space of codewords and the entropy in the various decoding regimes. We further discuss the relation of the convergence of the message passing algorithm with the endogenous property and complexity, characterizing solutions of recursive equations of distributions for cavity fields
Channel coding in the space station data system network
Healy, T.
1982-01-01
A detailed discussion of the use of channel coding for error correction, privacy/secrecy, channel separation, and synchronization is presented. Channel coding, in one form or another, is an established and common element in data systems. No analysis and design of a major new system would fail to consider ways in which channel coding could make the system more effective. The presence of channel coding on TDRS, Shuttle, the Advanced Communication Technology Satellite Program system, the JSC-proposed Space Operations Center, and the proposed 30/20 GHz Satellite Communication System strongly supports the requirement for the utilization of coding for the communications channel. The designers of the space station data system have to consider the use of channel coding.
Subchannel analysis code development for CANDU fuel channel
International Nuclear Information System (INIS)
Park, J. H.; Suk, H. C.; Jun, J. S.; Oh, D. J.; Hwang, D. H.; Yoo, Y. J.
1998-07-01
Since several subchannel codes such as COBRA and TORC exist for PWR fuel channels but none exists for a CANDU fuel channel in our country, a subchannel analysis code for a CANDU fuel channel was developed for the prediction of flow conditions in the subchannels, for the accurate assessment of the thermal margin, and for evaluating the effect of appendages and of the radial/axial power profile of fuel bundles on flow conditions, CHF, and so on. In order to develop the subchannel analysis code, the subchannel analysis methodology and its applicability to a CANDU fuel channel were reviewed from the CANDU fuel channel point of view. Several thermalhydraulic and numerical models for subchannel analysis of a CANDU fuel channel were developed. Experimental data for the CANDU fuel channel were collected, analyzed, and used to validate the subchannel analysis code developed in this work. (author). 11 refs., 3 tabs., 50 figs
Telemetry advances in data compression and channel coding
Miller, Warner H.; Morakis, James C.; Yeh, Pen-Shu
1990-01-01
This paper addresses the dependence of telecommunication channel coding, forward error correction coding, and source data compression coding on integrated-circuit technology. Emphasis is placed on real-time, high-speed Reed-Solomon (RS) decoding using full-custom VLSI technology. Performance curves for NASA's standard channel coder and a proposed standard lossless data compression coder are presented.
Energy-Efficient Channel Coding Strategy for Underwater Acoustic Networks
Directory of Open Access Journals (Sweden)
Grasielli Barreto
2017-03-01
Underwater acoustic networks (UAN) allow for efficiently exploiting and monitoring the sub-aquatic environment. These networks are characterized by long propagation delays, error-prone channels and half-duplex communication. In this paper, we address the problem of energy-efficient communication through the use of optimized channel coding parameters. We consider a two-layer encoding scheme employing forward error correction (FEC) codes and fountain codes (FC) for UAN scenarios without feedback channels. We model and evaluate the energy consumption of different channel coding schemes for a K-distributed multipath channel. The parameters of the FEC encoding layer are optimized by selecting the optimal error correction capability and the code block size. The results show the best parameter choice as a function of the link distance and received signal-to-noise ratio.
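A minimal LT (fountain) code sketch for the feedback-free setting described above, using the ideal soliton degree distribution and peeling decoding. The block length and seed are illustrative, not the optimized scheme of the paper:

```python
import random

random.seed(7)
k = 16
data = [random.randint(0, 1) for _ in range(k)]

def soliton_degree(k):
    """Sample from the ideal soliton distribution: P(1)=1/k, P(d)=1/(d(d-1))."""
    r, cum, d = random.random(), 1.0 / k, 1
    while r >= cum and d < k:
        d += 1
        cum += 1.0 / (d * (d - 1))
    return d

recovered = [None] * k
buffered = []            # undecoded encoded symbols: (live neighbor set, value)
n_symbols = 0
while any(v is None for v in recovered):
    # Encoder: XOR a random subset of source bits, subset size ~ soliton.
    nbrs = set(random.sample(range(k), soliton_degree(k)))
    val = 0
    for i in nbrs:
        val ^= data[i]
    n_symbols += 1
    buffered.append((nbrs, val))
    # Decoder: peel degree-1 symbols until no further progress.
    progress = True
    while progress:
        progress, remaining = False, []
        for nbrs, val in buffered:
            live = set()
            for i in nbrs:                 # strip already-recovered neighbors
                if recovered[i] is None:
                    live.add(i)
                else:
                    val ^= recovered[i]
            if len(live) == 1:
                recovered[live.pop()] = val
                progress = True
            elif live:
                remaining.append((live, val))
        buffered = remaining

print(recovered == data, n_symbols)   # succeeds after k plus some overhead
```

Because the encoder never needs acknowledgements, the receiver simply collects symbols until decoding completes, which is what makes fountain codes attractive for half-duplex underwater links without feedback.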
Space-Time Trellis Coded 8PSK Schemes for Rapid Rayleigh Fading Channels
Directory of Open Access Journals (Sweden)
Salam A. Zummo
2002-05-01
This paper presents the design of 8PSK space-time (ST) trellis codes suitable for rapid fading channels. The proposed codes utilize the design criteria of ST codes over rapid fading channels. Two different approaches have been used. The first approach maximizes the symbol-wise Hamming distance (HD) between signals leaving from or entering the same encoder state. In the second approach, set partitioning based on maximizing the sum of squared Euclidean distances (SSED) between the ST signals is performed; then, the branch-wise HD is maximized. The proposed codes were simulated over independent and correlated Rayleigh fading channels. Coding gains of up to 4 dB have been observed over other ST trellis codes of the same complexity.
Ripple design of LT codes for AWGN channel
DEFF Research Database (Denmark)
Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip
2012-01-01
In this paper, we present an analytical framework for designing LT codes in additive white Gaussian noise (AWGN) channels. We show that some of the analytical results from binary erasure channels (BEC) also hold in AWGN channels with slight modifications. This enables us to apply a ripple-based design...
Typical performance of regular low-density parity-check codes over general symmetric channels
International Nuclear Information System (INIS)
Tanaka, Toshiyuki; Saad, David
2003-01-01
Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models.
Turbo coding, turbo equalisation and space-time coding for transmission over fading channels
Hanzo, L; Yeap, B
2002-01-01
Against the backdrop of the emerging 3G wireless personal communications standards and broadband access network standard proposals, this volume covers a range of coding and transmission aspects for transmission over fading wireless channels. It presents the most important classic channel coding issues and also the exciting advances of the last decade, such as turbo coding, turbo equalisation and space-time coding. It endeavours to be the first book with explicit emphasis on channel coding for transmission over wireless channels. Divided into 4 parts: Part 1 - explains the necessary background for novices. It aims to be both an easy-reading textbook and a deep research monograph. Part 2 - provides detailed coverage of turbo convolutional and turbo block coding, considering the known decoding algorithms and their performance over Gaussian as well as narrowband and wideband fading channels. Part 3 - comprehensively discusses both space-time block and space-time trellis coding for the first time in the literature. Par...
A finite range coupled channel Born approximation code
International Nuclear Information System (INIS)
Nagel, P.; Koshel, R.D.
1978-01-01
The computer code OUKID calculates differential cross sections for direct transfer nuclear reactions in which multistep processes, arising from strongly coupled inelastic states in both the target and residual nuclei, are possible. The code is designed for heavy-ion reactions where full finite-range and recoil effects are important. Distorted wave functions for the elastic and inelastic scattering are calculated by solving sets of coupled differential equations using a matrix Numerov integration procedure. These wave functions are then expanded into bases of spherical Bessel functions by the plane-wave expansion method. This approach allows the six-dimensional integrals for the transition amplitude to be reduced to products of two one-dimensional integrals. Thus, the inelastic scattering is treated in a coupled-channel formalism while the transfer process is treated in a finite-range Born approximation formalism. (Auth.)
Whether and Where to Code in the Wireless Relay Channel
DEFF Research Database (Denmark)
Shi, Xiaomeng; Médard, Muriel; Roetter, Daniel Enrique Lucani
2013-01-01
The throughput benefits of random linear network codes have been studied extensively for wirelined and wireless erasure networks. It is often assumed that all nodes within a network perform coding operations. In energy-constrained systems, however, coding subgraphs should be chosen to control the number of coding nodes while maintaining throughput. In this paper, we explore the strategic use of network coding in the wireless packet erasure relay channel according to both throughput and energy metrics. In the relay channel, a single source communicates to a single sink through the aid of a half-duplex relay. The fluid flow model is used to describe the case where both the source and the relay are coding, and Markov chain models are proposed to describe packet evolution if only the source or only the relay is coding. In addition to transmission energy, we take into account coding and reception
Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems
Directory of Open Access Journals (Sweden)
Hakan A. Çırpan
2002-05-01
Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite-state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
Bilayer Protograph Codes for Half-Duplex Relay Channels
Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria
2013-01-01
Direct-to-Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Using this additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made on the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality and are not easily adapted without extensive re-optimization for various channel conditions. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel, whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization.
Performance analysis of LDPC codes on OOK terahertz wireless channels
International Nuclear Information System (INIS)
Liu Chun; Wang Chang; Cao Jun-Cheng
2016-01-01
Atmospheric absorption, scattering, and scintillation are the major causes of degradation in terahertz (THz) wireless communications. An error control coding scheme based on low-density parity-check (LDPC) codes with a soft-decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal transmitted through the atmospheric channel. The THz wave propagation characteristics and the atmospheric channel model are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their great potential for future ultra-high-speed (beyond Gbps) THz communications. (paper)
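The abstract's LDPC scheme is not reproduced here, but the benefit of soft-decision over hard-decision decoding on an OOK/AWGN-like channel can be illustrated with a toy repetition code (all parameters, including the noise scaling, are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def ook_ber(snr_db, n_bits=20000, reps=3, soft=True):
    """Monte Carlo BER of OOK with a (reps, 1) repetition code.
    Soft decision sums the received samples before thresholding;
    hard decision thresholds each sample, then majority-votes."""
    bits = rng.integers(0, 2, n_bits)
    amp = 1.0
    sigma = amp / (2 * 10 ** (snr_db / 20))      # illustrative noise scaling
    rx = np.repeat(bits, reps) * amp + rng.normal(0, sigma, n_bits * reps)
    rx = rx.reshape(n_bits, reps)
    if soft:
        dec = (rx.sum(axis=1) > reps * amp / 2).astype(int)
    else:
        dec = ((rx > amp / 2).sum(axis=1) * 2 > reps).astype(int)
    return float(np.mean(dec != bits))

b_soft = ook_ber(2, soft=True)
b_hard = ook_ber(2, soft=False)
```

At the same SNR the soft-decision BER is lower, the same qualitative effect that motivates soft-decision LDPC decoding over the atmospheric channel.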
Directory of Open Access Journals (Sweden)
Valérian Mannoni
2004-09-01
Full Text Available This paper deals with optimized channel coding for OFDM transmissions (COFDM) over frequency-selective channels using irregular low-density parity-check (LDPC) codes. Firstly, we introduce a new characterization of LDPC code irregularity called the "irregularity profile." Then, using this parameterization, we derive a new criterion based on the minimization of the transmission bit error probability to design an irregular LDPC code suited to the frequency selectivity of the channel. The optimization of this criterion is done using the Gaussian approximation technique. Simulations illustrate the good performance of our approach for different transmission channels.
LDPC coded OFDM over the atmospheric turbulence channel.
Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A
2007-05-14
Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10^-5, the coding gain improvement of the LDPC coded single-side band unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).
Medical reliable network using concatenated channel codes through GSM network.
Ahmed, Emtithal; Kohno, Ryuji
2013-01-01
Although the 4th generation (4G) of the global mobile communication network, i.e. Long Term Evolution (LTE), coexisting with the 3rd generation (3G), has successfully started, the 2nd generation (2G), i.e. the Global System for Mobile communication (GSM), still plays an important role in many developing countries. Where no other reliable network infrastructure exists, GSM can be applied to tele-monitoring applications in which high mobility and low cost are necessary. The core objective of this paper is to introduce the design of a more reliable and dependable Medical Network Channel Code (MNCC) system operating over the GSM network. The MNCC design is based on a simple concatenated channel code: the cascade of an inner code (GSM) and an extra outer code (convolutional code), which protects medical data more robustly against channel errors than other data carried on the existing GSM network. The MNCC system provides a bit error rate (BER) suitable for medical tele-monitoring of physiological signals, namely 10^-5 or less. The performance of the MNCC has been investigated and verified using computer simulations under different channel conditions, such as additive white Gaussian noise (AWGN), Rayleigh fading, and burst noise. In general, the MNCC system provides better performance than plain GSM.
Improved Iterative Decoding of Network-Channel Codes for Multiple-Access Relay Channel.
Majumder, Saikat; Verma, Shrish
2015-01-01
Cooperative communication using relay nodes is one of the most effective means of exploiting spatial diversity for low-cost nodes in a wireless network. In cooperative communication, users, besides communicating their own information, also relay the information of other users. In this paper we investigate a scheme where cooperation is achieved using a common relay node which performs network coding to provide spatial diversity for two information nodes transmitting to a base station. We propose a scheme which uses a Reed-Solomon error-correcting code for encoding the information bits at the user nodes and a convolutional code as the network code, instead of XOR-based network coding. Based on this encoder, we propose iterative soft decoding of the joint network-channel code by treating it as a concatenated Reed-Solomon convolutional code. Simulation results show significant improvement in performance compared to an existing scheme based on compound codes.
Channel modeling, signal processing and coding for perpendicular magnetic recording
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by
Joint opportunistic scheduling and network coding for bidirectional relay channel
Shaqfeh, Mohammad
2013-07-01
In this paper, we consider a two-way communication system in which two users communicate with each other through an intermediate relay over block-fading channels. We investigate the optimal opportunistic scheduling scheme in order to maximize the long-term average transmission rate in the system assuming symmetric information flow between the two users. Based on the channel state information, the scheduler decides that either one of the users transmits to the relay, or the relay transmits to a single user or broadcasts to both users a combined version of the two users' transmitted information by using linear network coding. We obtain the optimal scheduling scheme by using the Lagrangian dual problem. Furthermore, in order to characterize the gains of network coding and opportunistic scheduling, we compare the achievable rate of the system versus suboptimal schemes in which the gains of network coding and opportunistic scheduling are partially exploited. © 2013 IEEE.
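The linear network coding step described above reduces, in its simplest binary form, to the relay broadcasting the XOR of the two users' packets; each user then cancels its own contribution. A minimal sketch:

```python
import numpy as np

def relay_network_code(a, b):
    """Relay broadcasts the bitwise XOR of the two users' packets;
    each user recovers the other's packet by XORing with its own."""
    return np.bitwise_xor(a, b)

a = np.array([1, 0, 1, 1], dtype=np.uint8)   # user A's packet
b = np.array([0, 0, 1, 0], dtype=np.uint8)   # user B's packet
x = relay_network_code(a, b)                  # single broadcast slot

recovered_b_at_user_a = np.bitwise_xor(x, a)  # A cancels its own packet
recovered_a_at_user_b = np.bitwise_xor(x, b)  # B cancels its own packet
```

One broadcast slot thus replaces two unicast slots, which is where the network coding gain in the scheduling problem comes from.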
Multi-rate control over AWGN channels via analog joint source-channel coding
Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak
2017-01-01
We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
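The Shannon-Kotel'nikov idea can be sketched with a toy Archimedean-spiral 1:2 analog map: a scalar source sample is stretched onto a spiral (bandwidth expansion), and the decoder searches for the nearest spiral point. The constant c and the grid-search decoder below are illustrative choices, not the paper's actual maps:

```python
import numpy as np

def sk_encode(s, c=2.0):
    """Map a scalar s in [-1, 1] to two channel uses along an
    Archimedean spiral; the sign of s selects one of two mirrored arms."""
    t = c * abs(s) * np.pi
    arm = 1.0 if s >= 0 else -1.0
    return arm * t * np.array([np.cos(t), np.sin(t)]) / np.pi

def sk_decode(y, c=2.0):
    """ML-style decoding: pick the source value whose spiral point
    is closest to the received pair (brute-force grid search)."""
    grid = np.linspace(-1, 1, 4001)
    pts = np.array([sk_encode(s, c) for s in grid])
    return float(grid[np.argmin(np.sum((pts - y) ** 2, axis=1))])

dec = sk_decode(sk_encode(0.3))               # noiseless round trip
noisy = sk_decode(sk_encode(-0.5) + np.array([0.01, -0.01]))
```

Stretching the source onto a longer curve trades a larger (but graceful) small-noise error slope for much better use of the two channel dimensions than plain repetition.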
Channel estimation for physical layer network coding systems
Gao, Feifei; Wang, Gongpu
2014-01-01
This SpringerBrief presents channel estimation strategies for physical layer network coding (PLNC) systems. Along with a review of PLNC architectures, this brief examines new challenges brought by the special structure of bi-directional two-hop transmissions, which differ from traditional point-to-point systems and unidirectional relay systems. The authors discuss channel estimation strategies over typical fading scenarios, including frequency-flat fading, frequency-selective fading and time-selective fading, as well as future research directions. Chapters explore the performance
Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey
Directory of Open Access Journals (Sweden)
Pierre Siohan
2005-05-01
Full Text Available Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper surveys recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. The survey concludes with performance illustrations using real image and video decoding systems.
Statistical mechanics analysis of LDPC coding in MIMO Gaussian channels
Energy Technology Data Exchange (ETDEWEB)
Alamino, Roberto C; Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)
2007-10-12
Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under low-density parity-check (LDPC) network coding and joint decoding. The saddle point equations for the replica symmetric solution are found in particular realizations of this channel, including a small and large number of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and symmetric and asymmetric interference. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement/deterioration achieved with respect to the single transmitter/receiver result, for the various cases.
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
Directory of Open Access Journals (Sweden)
Marinkovic Slavica
2006-01-01
Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure- and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as a multiple-hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
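The pseudoinverse receiver mentioned above can be illustrated on a toy quantization-free frame expansion: with a 2x-oversampled frame, an erased coefficient can simply be discarded and the signal recovered from the remaining rows (the random 4x2 frame here is a stand-in for an OFB):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy frame expansion: n = 4 coefficients for k = 2 samples (2x oversampling).
k, n = 2, 4
F = rng.normal(size=(n, k))          # analysis frame (stand-in for an OFB)
x = np.array([0.7, -1.2])            # message signal
y = F @ x                            # transmitted frame coefficients

erased = [1]                         # coefficient lost on the channel
keep = [i for i in range(n) if i not in erased]

# Pseudoinverse receiver on the surviving coefficients
x_hat = np.linalg.pinv(F[keep]) @ y[keep]
```

Because the surviving 3x2 submatrix still has full column rank, the least-squares reconstruction is exact; with quantization, the same receiver returns the least-squares estimate instead.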
Multiple Description Coding for Closed Loop Systems over Erasure Channels
DEFF Research Database (Denmark)
Østergaard, Jan; Quevedo, Daniel
2013-01-01
In this paper, we consider robust source coding in closed-loop systems. In particular, we consider a (possibly) unstable LTI system, which is to be stabilized via a network. The network has random delays and erasures on the data-rate-limited (digital) forward channel between the encoder (controller) and the decoder (plant). The feedback channel from the decoder to the encoder is assumed noiseless. Since the forward channel is digital, we need to employ quantization. We combine two techniques to enhance the reliability of the system. First, in order to guarantee that the system remains stable during packet ... by showing that the system can be cast as a Markov jump linear system.
Directory of Open Access Journals (Sweden)
Du Bing
2010-01-01
Full Text Available A recently developed theory suggests that network coding is a generalization of source coding and channel coding and thus yields a significant performance improvement in terms of throughput and spatial diversity. This paper proposes a cooperative design of a parity-check network coding scheme in the context of a two-source multiple-access relay channel (MARC) model, a common compact model in hierarchical wireless sensor networks (WSNs). The scheme uses low-density parity-check (LDPC) codes as the surrogate to build up a layered structure which encapsulates the multiple constituent LDPC codes in the source and relay nodes. Specifically, the relay node decodes the messages from the two sources, which are used to generate extra parity-check bits by a random network coding procedure to fill up the rate gap between the source-relay and source-destination transmissions. Then, we derive the key algebraic relationships among the multidimensional LDPC constituent codes as one of the constraints for code profile optimization. These extra check bits are sent to the destination to realize cooperative diversity as well as to approach the MARC decode-and-forward (DF) capacity.
Bidirectional Fano Algorithm for Lattice Coded MIMO Channels
Al-Quwaiee, Hessa
2013-05-08
Recently, lattices - mathematical representations of infinite discrete points in Euclidean space - have become an effective way to describe and analyze communication systems, especially those that can be modeled as a linear Gaussian vector channel. Channel codes based on lattices are preferred for three reasons: lattice codes have a simple structure, they can achieve the limits of the channel, and they can be decoded efficiently using lattice decoders, which can be viewed as a Closest Lattice Point Search (CLPS). Since lattice codes were introduced to the Multiple-Input Multiple-Output (MIMO) channel, the Sphere Decoder (SD) has been an efficient way to implement lattice decoders. The sphere decoder offers optimal performance at the expense of high decoding complexity, especially for low signal-to-noise ratios (SNR) and for high-dimensional systems. On the other hand, linear and non-linear receivers such as Minimum Mean Square Error (MMSE) and MMSE Decision-Feedback Equalization (DFE) provide the lowest decoding complexity but unfortunately poor performance. Several studies have been conducted in recent years to address the problem of designing low-complexity decoders for the MIMO channel that can achieve near-optimal performance. It was found that sequential decoders using backward tree search can bridge the gap between SD and MMSE. The sequential decoder provides an interesting performance-complexity trade-off via a bias term. Yet, the sequential decoder still suffers from high complexity for mid-to-high SNR values. In this work, we propose a new Bidirectional Fano sequential Decoder (BFD) algorithm in order to reduce the mid-to-high SNR complexity. Our algorithm first constructs a unidirectional sequential decoder based on forward search using the QL decomposition. After that, the BFD incorporates two searches, forward and backward, working simultaneously until they merge and find the closest lattice point to the
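The CLPS problem that sphere and Fano decoders accelerate can be stated with a brute-force reference implementation (exponential in dimension, for illustration only):

```python
import numpy as np
from itertools import product

def clps_brute(G, y, radius=3):
    """Brute-force Closest Lattice Point Search: enumerate integer vectors
    u in a small box and return the one whose lattice point G @ u is
    nearest to y. SD and Fano decoders prune this search tree instead of
    enumerating it exhaustively."""
    best, best_d = None, np.inf
    for u in product(range(-radius, radius + 1), repeat=G.shape[1]):
        p = G @ np.array(u)
        d = float(np.sum((p - y) ** 2))
        if d < best_d:
            best, best_d = np.array(u), d
    return best

# Toy 2-D lattice generator and a slightly perturbed lattice point
G = np.array([[1.0, 0.5],
              [0.0, 0.8]])
u_true = np.array([2, -1])
y = G @ u_true + 0.05
u_hat = clps_brute(G, y)
```

The tree-search decoders explore the same integer hypotheses layer by layer (one coordinate of u per tree level), which is what makes forward/backward bidirectional search possible.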
CONIFERS: a neutronics code for reactors with channels
International Nuclear Information System (INIS)
Davis, R.S.
1977-04-01
CONIFERS is a neutronics code for nuclear reactors whose fuel is in channels that are separated from each other by several neutron mean-free-path lengths of moderator. It can treat accurately situations in which the usual homogenized-cell diffusion equation becomes inaccurate, but is more economical than other advanced methods such as response-matrix and source-sink formalisms. CONIFERS uses exact solutions of the neutron diffusion equation within each cell. It allows for the breakdown of this equation near a channel by means of data that almost any cell code can supply. It uses the results of these cell analyses in a reactor equations set that is as readily solvable as the familiar finite-difference equations set. CONIFERS can model almost any configuration of channels and other structures in two or three dimensions. It can use any number of energy groups and any reactivity scales, including scales based on control operations. It is also flexible from a programming point of view, and has convenient input and output provisions. (author)
Circular codes revisited: a statistical approach.
Gonzalez, D L; Giannerini, S; Rosa, R
2011-04-21
In 1996 Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has provoked great interest and underwent a rapid development. In this paper we discuss some theoretical issues related to the synchronization properties of coding sequences and circular codes with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach in order to try to answer different questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C^3 maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability but, still, there exists a great variability among sequences. Second, we focus on such code and explore the role played by the proportion of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes with reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.
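The reading-frame retrieval property discussed above can be illustrated with a toy experiment: for each of the three frames, count the fraction of codons falling in a given codon set. The four-codon set below is an arbitrary illustration, not the 20-codon Arquès-Michel code:

```python
def frame_coverage(seq, code):
    """For each of the three reading frames, return the fraction of
    codons that belong to the given codon set; for a sequence written
    in a circular code, the correct frame stands out."""
    cov = []
    for f in range(3):
        codons = [seq[i:i + 3] for i in range(f, len(seq) - 2, 3)]
        cov.append(sum(c in code for c in codons) / len(codons))
    return cov

code = {"AAC", "GTC", "CTG", "GAT"}        # toy codon set, not the Arques-Michel code
seq = "AACGTCCTGGATAACGTC" + "A"           # codons drawn from the set, frame 0
cov = frame_coverage(seq, code)
```

Here frame 0 achieves full coverage while the shifted frames do not, which is exactly the synchronization signal a circular code provides.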
Towards Holography via Quantum Source-Channel Codes
Pastawski, Fernando; Eisert, Jens; Wilming, Henrik
2017-07-01
While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.
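The entropic quantity used above, the conditional mutual information I(A:C|B) = S(AB) + S(BC) - S(B) - S(ABC), can be computed directly from a density matrix. A small sketch for three qubits (the GHZ state is used only as a check case):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], computed from the eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def partial_trace_qubit(rho3, keep):
    """Partial trace of a 3-qubit density matrix onto the qubits in `keep`."""
    t = rho3.reshape([2] * 6)                 # row and column qubit axes
    for q in sorted(set(range(3)) - set(keep), reverse=True):
        t = np.trace(t, axis1=q, axis2=q + t.ndim // 2)
    d = 2 ** len(keep)
    return t.reshape(d, d)

# GHZ state (|000> + |111>)/sqrt(2): I(A:C|B) equals 1 bit.
psi = np.zeros(8)
psi[0] = psi[7] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)
cmi = (von_neumann_entropy(partial_trace_qubit(rho, [0, 1]))
       + von_neumann_entropy(partial_trace_qubit(rho, [1, 2]))
       - von_neumann_entropy(partial_trace_qubit(rho, [1]))
       - von_neumann_entropy(rho))
```

For the Gibbs states studied in the paper, small conditional mutual information for suitable regions is what guarantees the existence of an approximate recovery map after erasure.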
H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints
Directory of Open Access Journals (Sweden)
Ghandi MM
2006-01-01
Full Text Available This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC) or hierarchical quadrature amplitude modulation (HQAM) can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.
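The HQAM idea, that the quadrant (high-priority) bits are better protected than the in-quadrant (low-priority) bits, can be sketched with a hierarchical 16-QAM mapper; the priority parameter d below is an illustrative value:

```python
def hqam16_map(bits, d=0.6):
    """Hierarchical 16-QAM: bits = (hp_i, hp_q, lp_i, lp_q). The
    high-priority pair selects the quadrant; the low-priority pair the
    point within it. d < 1 pulls in-quadrant points together, so the
    quadrant decision (HP bits) tolerates more noise."""
    hi, hq, li, lq = bits
    re = (2 * hi - 1) * (1 + (2 * li - 1) * d)
    im = (2 * hq - 1) * (1 + (2 * lq - 1) * d)
    return complex(re, im)

def hqam16_hp_demap(y):
    """HP bits depend only on the quadrant of the received symbol."""
    return (int(y.real > 0), int(y.imag > 0))

s = hqam16_map((1, 0, 0, 1))
hp = hqam16_hp_demap(s)
```

Shrinking d increases HP protection at the cost of LP reliability, which is why HQAM alone fixes the data ratio between layers and is combined with FEC in the paper.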
On Predictive Coding for Erasure Channels Using a Kalman Framework
DEFF Research Database (Denmark)
Arildsen, Thomas; Murthi, Manohar; Andersen, Søren Vang
2009-01-01
We present a new design method for robust low-delay coding of autoregressive sources for transmission across erasure channels. It is a fundamental rethinking of existing concepts. It considers the encoder a mechanism that produces signal measurements from which the decoder estimates the original signal. The method is based on linear predictive coding and Kalman estimation at the decoder. We employ a novel encoder state-space representation with a linear quantization noise model. The encoder is represented by the Kalman measurement at the decoder. The presented method designs the encoder and decoder offline through an iterative algorithm based on closed-form minimization of the trace of the decoder state error covariance. The design method is shown to provide considerable performance gains, when the transmitted quantized prediction errors are subject to loss, in terms of signal-to-noise ratio.
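A minimal decoder-side sketch of the idea, Kalman estimation with quantization noise modeled as measurement noise and erasures handled by skipping the measurement update, assuming a scalar AR(1) source and illustrative parameters (this is not the paper's optimized encoder/decoder design):

```python
import numpy as np

rng = np.random.default_rng(2)
a, q_var, r_var = 0.9, 1.0, 0.1   # AR(1) coefficient, process and quantization-noise variances

def decode_with_kalman(meas):
    """Decoder-side Kalman filter for an AR(1) source; `meas` holds the
    received (noisy/quantized) samples, with None marking an erasure,
    in which case only the time update is applied."""
    x_hat, P, out = 0.0, 1.0, []
    for z in meas:
        x_hat, P = a * x_hat, a * a * P + q_var        # time update (prediction)
        if z is not None:                              # measurement update
            K = P / (P + r_var)
            x_hat, P = x_hat + K * (z - x_hat), (1 - K) * P
        out.append(x_hat)
    return out

# Simulate the source and a lossy channel: every 5th sample erased.
n = 200
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(0, np.sqrt(q_var))
z = x + rng.normal(0, np.sqrt(r_var), n)               # quantization modeled as additive noise
meas = [zk if k % 5 else None for k, zk in enumerate(z)]
est = decode_with_kalman(meas)
```

The decoder keeps tracking through erasures via the model-based prediction, which is the mechanism the offline design optimizes.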
A Microfluidic Approach for Studying Piezo Channels.
Maneshi, M M; Gottlieb, P A; Hua, S Z
2017-01-01
Microfluidics is an interdisciplinary field intersecting many areas of engineering. A major goal in microfluidics is to combine physics, chemistry, biology, and biotechnology with practical applications for designing devices that use low volumes of fluid to achieve high-throughput screening. Microfluidic approaches allow the study of cell growth and differentiation under a variety of conditions, including control of the fluid flow that generates shear stress. Recently, Piezo1 channels were shown to respond to fluid shear stress and to be crucial for vascular development. This channel is therefore ideal for studying fluid shear stress applied to cells using microfluidic devices. We have developed an approach that allows us to analyze the role of Piezo channels in any given cell and serves as a high-throughput screen for drug discovery. We show that this approach can provide detailed information about inhibitors of Piezo channels. Copyright © 2017 Elsevier Inc. All rights reserved.
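For flow-chamber experiments of the kind described, the wall shear stress in a shallow rectangular channel is commonly estimated with the parallel-plate approximation tau = 6*mu*Q/(w*h^2), valid for w >> h; the dimensions and flow rate below are illustrative, not from this chapter:

```python
def wall_shear_stress(Q, mu, w, h):
    """Wall shear stress (Pa) in a shallow rectangular microchannel,
    parallel-plate approximation (requires w >> h):
        tau = 6 * mu * Q / (w * h**2)
    Q in m^3/s, mu in Pa*s, w and h in meters."""
    return 6 * mu * Q / (w * h ** 2)

# Example: water-like medium, 10 uL/min through a 1 mm x 100 um channel
Q = 10e-9 / 60        # 10 uL/min in m^3/s
mu = 1e-3             # Pa*s (water at ~20 C)
tau = wall_shear_stress(Q, mu, w=1e-3, h=100e-6)   # -> 0.1 Pa (1 dyn/cm^2)
```

Sweeping Q in such a device is what lets a microfluidic screen apply a controlled range of shear stresses to Piezo1-expressing cells.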
Djordjevic, Ivan B
2007-08-06
We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement over the uncoded case is found.
Network Coded Cooperation Over Time-Varying Channels
DEFF Research Database (Denmark)
Khamfroush, Hana; Roetter, Daniel Enrique Lucani; Barros, João
2014-01-01
In this paper, we investigate the optimal design of cooperative network-coded strategies for a three-node wireless network with time-varying, half-duplex erasure channels. To this end, we formulate the problem of minimizing the total cost of transmitting M packets from source to two receivers as a Markov Decision Process (MDP). The actions of the MDP model include the source and the type of transmission to be used in a given time slot given perfect knowledge of the system state. The cost of packet transmission is defined such that it can incorporate the difference between broadcast and unicast transmissions, e.g., in terms of the rate of packet transmission or the energy consumption. A comprehensive analysis of the MDP solution is carried out under different network conditions to extract optimal rules of packet transmission. Inspired by the extracted rules, we propose two near-optimal heuristics.
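The MDP formulation above can be illustrated on the smallest instance: one packet, two receivers, i.i.d. erasures, and a per-slot choice between one broadcast and two unicast actions. The costs and erasure probability below are assumed for illustration only; the paper's model is richer (M packets, network coding, time-varying channels, feedback).

```python
import itertools

e = 0.5               # per-link erasure probability (assumed)
c_b, c_u = 1.2, 1.0   # cost of a broadcast / unicast slot (assumed)

# State (n1, n2): 1 if that receiver still needs the packet, 0 otherwise.
states = list(itertools.product([0, 1], repeat=2))

def expected(V, s, action):
    """Expected total cost of taking `action` in state s, then acting optimally."""
    if action == "b":                        # broadcast: both links erase independently
        n1, n2 = s
        nxt = [((a1 & n1, a2 & n2), p1 * p2)
               for a1, p1 in ((1, e), (0, 1 - e))   # a = 1 means erased
               for a2, p2 in ((1, e), (0, 1 - e))]
        return c_b + sum(p * V[t] for t, p in nxt)
    i = int(action)                          # unicast to receiver i
    nxt = []
    for a, p in ((1, e), (0, 1 - e)):
        t = list(s); t[i] = t[i] & a
        nxt.append((tuple(t), p))
    return c_u + sum(p * V[t] for t, p in nxt)

V = {s: 0.0 for s in states}
for _ in range(300):                         # value iteration to the fixed point
    V = {s: 0.0 if s == (0, 0) else
            min(expected(V, s, a) for a in ("b", "0", "1"))
         for s in states}
```

In this toy instance broadcast is optimal in state (1, 1) despite costing more per slot, which mirrors the kind of rule the paper extracts from the full MDP solution.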
Transmission over UWB channels with OFDM system using LDPC coding
Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech
2009-06-01
A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concerns Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the transmission system with OFDM modulation was combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system were decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, was also used in the presented system. It is placed after the SPA (Sum-Product Algorithm) decoder and is conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially lower FER. The system was tested with two types of LDPC codes, depending on the type of parity check matrices: randomly generated and constructed deterministically, optimized for a practical decoder architecture implemented in an FPGA device.
Rotated Walsh-Hadamard Spreading with Robust Channel Estimation for a Coded MC-CDMA System
Directory of Open Access Journals (Sweden)
Raulefs Ronald
2004-01-01
We investigate rotated Walsh-Hadamard spreading matrices for a broadband MC-CDMA system with robust channel estimation in the synchronous downlink. The similarities between rotated spreading and signal space diversity are outlined. In a multiuser MC-CDMA system, possible performance improvements are based on the chosen detector, the channel code, and its Hamming distance. By applying rotated spreading instead of a standard Walsh-Hadamard spreading code, a higher throughput can be achieved. As combining the channel code and the spreading code forms a concatenated code, the overall minimum Hamming distance of the concatenated code increases. This asymptotically results in an improvement of the bit error rate at high signal-to-noise ratio. Higher convolutional channel code rates are mostly generated by puncturing good low-rate channel codes. The overall Hamming distance decreases significantly for the punctured channel codes. Higher channel code rates are favorable for MC-CDMA, as MC-CDMA utilizes diversity more efficiently than pure OFDMA. The application of rotated spreading in an MC-CDMA system allows diversity to be exploited even further. We demonstrate that the rotated spreading gain is still present with a robust pilot-aided channel estimator. In a well-designed system, rotated spreading improves the performance of a maximum likelihood detector with robust channel estimation at the receiver by about 1 dB.
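As a sketch of the construction discussed above, one can build a Sylvester-type Walsh-Hadamard matrix and apply a pairwise rotation; because the rotation is orthogonal, the rotated matrix remains a valid orthogonal spreading matrix. The rotation angle and the chip pairing below are illustrative assumptions, not the paper's optimized choice.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

theta = np.pi / 8                 # rotation angle: an assumed example value
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

H = hadamard(8)
W = np.kron(np.eye(4), R) @ H     # rotate chip pairs (one possible pairing)
# The rotation is orthogonal, so W.T @ W = n * I still holds and despreading
# by W.T recovers the data exactly as with the unrotated matrix.
```

The point of the rotation is not orthogonality (which is preserved) but the signal-space diversity it introduces across chips, which the paper exploits through the detector and channel code.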
Approaches to simulate channel and fuel behaviour using CATHENA and ELOCA
International Nuclear Information System (INIS)
Sabourin, G.; Huynh, H.M.
1996-01-01
This paper documents a new approach where the detailed fuel and channel thermalhydraulic calculations are performed by an integrated code. The thermalhydraulic code CATHENA is coupled with the fuel code ELOCA. The scenario used in the simulations is a 100% pump suction break, because its power pulse is large and leads to high sheath temperatures. The results show that coupling the two codes at each time step can have an important effect on parameters such as the sheath, fuel and pressure tube temperatures. In summary, this demonstrates that this original approach can model more adequately the channel and fuel behaviour under postulated large LOCAs. (author)
Optimization of Coding of AR Sources for Transmission Across Channels with Loss
DEFF Research Database (Denmark)
Arildsen, Thomas
Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion or, alternatively, at the lowest possible distortion given a specified bit rate. Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source-coded) information at the maximal rate across a channel, given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources, which are sources that can … compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider because the proposed algorithm interacts with the source coding and exploits channel-related information typically …
HYTRAN: hydraulic transient code for investigating channel flow stability
International Nuclear Information System (INIS)
Kao, H.S.; Cardwell, W.R.; Morgan, C.D.
1976-01-01
HYTRAN is an analytical program used to investigate the possibility of hydraulic oscillations occurring in a reactor flow channel. The single channel studied is ordinarily the hot channel in the reactor core, which is parallel to other channels and is assumed to share a constant pressure drop with them. Since the channel of highest thermal state is studied, provision is made for two-phase flow that can cause a flow instability in the channel. HYTRAN uses the CHATA(1) program to establish a steady-state condition. A heat flux perturbation is then imposed on the channel, and the flow transient is calculated as a function of time.
A Network Coding Approach to Loss Tomography
DEFF Research Database (Denmark)
Sattari, Pegah; Markopoulou, Athina; Fragouli, Christina
2013-01-01
… network coding capabilities. We design a framework for estimating link loss rates which leverages network coding capabilities, and we show that it improves several aspects of tomography, including the identifiability of links, the tradeoff between estimation accuracy and bandwidth efficiency, and the complexity of probe path selection. We discuss the cases of inferring the loss rates of links in a tree topology or in a general topology. In the latter case, the benefits of our approach are even more pronounced compared to standard techniques, but we also face novel challenges, such as dealing with cycles …
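For context, the classical multicast (tree) loss-tomography estimator that network-coding approaches build on can be sketched for a two-receiver tree: with per-link success rates a (shared link) and b1, b2 (leaf links), the receiver success probabilities satisfy p1 = a*b1, p2 = a*b2 and p12 = a*b1*b2, so a = p1*p2/p12. The link rates below are assumed example values.

```python
import numpy as np

rng = np.random.default_rng(4)
a, b1, b2 = 0.9, 0.8, 0.7          # true per-link success rates (assumed)
N = 100_000                        # number of multicast probes

A = rng.random(N) < a              # probe survives the shared link
R1 = A & (rng.random(N) < b1)      # ... and then leaf link 1
R2 = A & (rng.random(N) < b2)      # ... and then leaf link 2

p1, p2, p12 = R1.mean(), R2.mean(), (R1 & R2).mean()
a_hat = p1 * p2 / p12              # shared-link estimate: a = p1*p2/p12
b1_hat = p12 / p2                  # leaf estimates follow from p12 = a*b1*b2
b2_hat = p12 / p1
```

The individual link rates are identifiable here only because the multicast probe correlates the two receivers through the shared link; the paper shows how network coding extends this kind of identifiability to general topologies.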
Risk Modelling for Passages in Approach Channel
Directory of Open Access Journals (Sweden)
Leszek Smolarek
2013-01-01
Methods of multivariate statistics, stochastic processes, and simulation are used to identify and assess the risk measures. This paper presents the use of generalized linear models and Markov models to study risks to ships along the approach channel. These models, combined with simulation testing, are used to determine the time required for continuous monitoring of endangered objects or the period at which the level of risk should be verified.
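A Markov model of the kind mentioned above can be sketched as a small absorbing chain; the states and transition probabilities below are hypothetical, purely to illustrate how a passage-risk measure is computed from such a model.

```python
import numpy as np

# Hypothetical 3-state passage model: 0 = normal transit, 1 = endangered, 2 = accident.
P = np.array([[0.95, 0.04, 0.01],
              [0.60, 0.30, 0.10],
              [0.00, 0.00, 1.00]])   # the accident state is absorbing

def risk_within(n_passages, start=0):
    """Probability that the accident state is reached within n passages."""
    p = np.zeros(3)
    p[start] = 1.0
    p = p @ np.linalg.matrix_power(P, n_passages)
    return float(p[2])
```

Evaluating risk_within for increasing horizons gives exactly the kind of curve used to decide how often the risk level along the channel should be re-verified.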
BCM-2.0 - The new version of computer code "Basic Channeling with Mathematica©"
Abdrashitov, S. V.; Bogdanov, O. V.; Korotchenko, K. B.; Pivovarov, Yu. L.; Rozhkova, E. I.; Tukhfatullin, T. A.; Eikhorn, Yu. L.
2017-07-01
A new symbolic-numerical code devoted to the investigation of channeling phenomena in the periodic potential of a crystal has been developed. The code has been written in the Wolfram Language, taking advantage of the analytical programming method. Different newly developed packages were successfully applied to simulate scattering, radiation, electron-positron pair production and other effects connected with the channeling of relativistic particles in aligned crystals. The results of the simulation have been validated against data from channeling experiments carried out at SAGA LS.
Impact of intra-flow network coding on the relay channel performance: an analytical study
Apavatjrut, Anya; Goursaud, Claire; Jaffrès-Runser, Katia; Gorce, Jean-Marie
2012-01-01
International audience; One of the most powerful ways to achieve transmission reliability over wireless links is to employ efficient coding techniques. This paper investigates the performance of a transmission over a relay channel where information is protected by two layers of coding. In the first layer, transmission reliability is ensured by fountain coding at the source. The second layer incorporates network coding at the relay node. Thus, fountain coded packets are re-encoded at the relay …
Combined Source-Channel Coding of Images under Power and Bandwidth Constraints
Directory of Open Access Journals (Sweden)
Marc Fossorier
2007-01-01
This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.
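The capacity-achieving SNR comparison above rests on the mutual information of the QPSK AWGN channel, which is straightforward to estimate by Monte Carlo; the SNR value below is an arbitrary example, not one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
snr_db = 10.0                                  # Es/N0 in dB (example value)
n0 = 10 ** (-snr_db / 10)                      # noise power for unit-energy symbols
const = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))   # unit-energy QPSK

N = 200_000
x = const[rng.integers(0, 4, N)]
noise = np.sqrt(n0 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))
y = x + noise

# I(X;Y) = log2(M) - E[ log2 sum_{x'} exp((|y-x|^2 - |y-x'|^2)/n0) ]
d_all = np.abs(y[:, None] - const[None, :]) ** 2
d_tx = np.abs(y - x) ** 2
C = 2.0 - np.mean(np.log2(np.sum(np.exp((d_tx[:, None] - d_all) / n0), axis=1)))
# At 10 dB this lands close to, but strictly below, the 2 bit/symbol QPSK limit.
```

Sweeping snr_db and finding where C crosses the source rate gives the capacity-achieving SNR against which the turbo/LDPC results are measured.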
An algebraic approach to graph codes
DEFF Research Database (Denmark)
Pinero, Fernando
This thesis consists of six chapters. The first chapter contains a short introduction to coding theory in which we explain the coding theory concepts we use. In the second chapter, we present the required theory for evaluation codes and also give an example of some fundamental codes in coding theory as evaluation codes. Chapter three consists of the introduction to graph based codes, such as Tanner codes and graph codes. In chapter four, we compute the dimension of some graph based codes with a result combining graph based codes and subfield subcodes. Moreover, some codes in chapter four …
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
Directory of Open Access Journals (Sweden)
W. Bastiaan Kleijn
2005-06-01
Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
Mimicking multi-channel scattering with single-channel approaches
Grishkevich, Sergey; Schneider, Philipp-Immanuel; Vanne, Yulian V.; Saenz, Alejandro
2009-01-01
The collision of two atoms is an intrinsic multi-channel (MC) problem, as becomes especially obvious in the presence of Feshbach resonances. Due to its complexity, however, single-channel (SC) approximations, which reproduce the long-range behavior of the open channel, are often applied in calculations. In this work the complete MC problem is solved numerically for the magnetic Feshbach resonances (MFRs) in collisions between generic ultracold ⁶Li and ⁸⁷Rb atoms in the ground state and in the …
Analysis of Coded FHSS Systems with Multiple Access Interference over Generalized Fading Channels
Directory of Open Access Journals (Sweden)
Salam A. Zummo
2009-02-01
We study the effect of interference on the performance of coded FHSS systems. This is achieved by modeling the physical channel in these systems as a block fading channel. In the derivation of the bit error probability over Nakagami fading channels, we use the exact statistics of the multiple access interference (MAI) in FHSS systems. Due to the mathematically intractable expression of the Rician distribution, we use the Gaussian approximation to derive the error probability of coded FHSS over Rician fading channels. The effect of pilot-aided channel estimation is studied for Rician fading channels using the Gaussian approximation. From this, the optimal hopping rate in coded FHSS is approximated. Results show that the performance loss due to interference increases as the hopping rate decreases.
The development and application of a sub-channel code in ocean environment
International Nuclear Information System (INIS)
Wu, Pan; Shan, Jianqiang; Xiang, Xiong; Zhang, Bo; Gou, Junli; Zhang, Bin
2016-01-01
Highlights: • A sub-channel code named ATHAS/OE is developed for nuclear reactors in an ocean environment. • ATHAS/OE is verified against another modified sub-channel code based on COBRA-IV. • ATHAS/OE is used to analyze the thermal hydraulics of a typical SMR under heaving and rolling motion. • Calculation results show that ocean conditions affect the thermal hydraulics of a reactor significantly. - Abstract: An upgraded version of the ATHAS sub-channel code, ATHAS/OE, is developed for the investigation of the thermal hydraulic behavior of a nuclear reactor core in an ocean environment, with consideration of heaving and rolling motion effects. The code is verified against another modified sub-channel code based on COBRA-IV and used to analyze the thermal hydraulic characteristics of a typical SMR under heaving and rolling motion conditions. The calculation results show that the heaving and rolling motion affect the thermal hydraulic behavior of a reactor significantly.
Jointly Decoded Raptor Codes: Analysis and Design for the BIAWGN Channel
Directory of Open Access Journals (Sweden)
Venkiah Auguste
2009-01-01
We are interested in the analysis and optimization of Raptor codes under a joint decoding framework, that is, when the precode and the fountain code exchange soft information iteratively. We develop an analytical asymptotic convergence analysis of the joint decoder, derive an optimization method for the design of efficient output degree distributions, and show that the new optimized distributions outperform the existing ones, both at long and moderate lengths. We also show that jointly decoded Raptor codes are robust to channel variation: they perform reasonably well over a wide range of channel capacities. This robustness property was already known for the erasure channel but not for the Gaussian channel. Finally, we discuss some finite length code design issues. Contrary to what is commonly believed, we show by simulations that by using a relatively low rate for the precode, we can greatly improve the error floor performance of the Raptor code.
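For background, the output degree distributions being optimized above are variants of the standard robust soliton distribution of LT/fountain codes, which can be generated as follows (the parameters c and delta are conventional example values, not the paper's optimized design):

```python
import numpy as np

def robust_soliton(k, c=0.1, delta=0.5):
    """Standard robust soliton output-degree distribution for an LT/fountain code."""
    s = c * np.log(k / delta) * np.sqrt(k)
    rho = np.zeros(k + 1)                 # ideal soliton part
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    tau = np.zeros(k + 1)                 # robust correction with a spike at k/s
    spike = int(k / s)
    for d in range(1, spike):
        tau[d] = s / (k * d)
    tau[spike] = s * np.log(s / delta) / k
    mu = rho + tau
    return mu / mu.sum()                  # normalize to a probability vector
```

Sampling output-node degrees from this vector reproduces the baseline that the paper's jointly-optimized distributions are compared against.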
An Evaluation of Automated Code Generation with the PetriCode Approach
DEFF Research Database (Denmark)
Simonsen, Kent Inge
2014-01-01
Automated code generation is an important element of model driven development methodologies. We have previously proposed an approach for code generation based on Coloured Petri Net models annotated with textual pragmatics for the network protocol domain. In this paper, we present and evaluate three important properties of our approach: platform independence, code integratability, and code readability. The evaluation shows that our approach can generate code for a wide range of platforms which is integratable and readable.
H∞ Channel Estimation for DS-CDMA Systems: A Partial Difference Equation Approach
Directory of Open Access Journals (Sweden)
Wei Wang
2013-01-01
In the communications literature, a number of different algorithms have been proposed for channel estimation problems in which the statistics of the channel noise and observation noise are exactly known. In practical systems, however, the channel parameters are often estimated using training sequences, which makes the statistics of the channel noise difficult to obtain. Moreover, the received signals are corrupted not only by ambient noise but also by multiple-access interference, so the statistics of the observation noise are also difficult to obtain. In this paper, we investigate the H∞ channel estimation problem for direct-sequence code-division multiple-access (DS-CDMA) communication systems with time-varying multipath fading channels. The channel estimator is designed by applying a partial difference equation approach together with innovation analysis theory. This method gives a sufficient and necessary condition for the existence of an H∞ channel estimator.
Approach to transverse equilibrium in axial channeling
International Nuclear Information System (INIS)
Fearick, R.W.
2000-01-01
Analytical treatments of channeling rely on the assumption of equilibrium on the transverse energy shell. The approach to equilibrium, and the nature of the equilibrium achieved, is examined using solutions of the equations of motion in the continuum multi-string model. The results show that the motion is chaotic in the absence of dissipative processes, and a complicated structure develops in phase space which prevents the development of the simple equilibrium usually assumed. The role of multiple scattering in smoothing out the equilibrium distribution is investigated.
Nonlinear demodulation and channel coding in EBPSK scheme.
Chen, Xianqing; Wu, Lenan
2012-01-01
The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF makes it more difficult to obtain the posterior probability for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach for nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, is used to obtain PPEs. The simulation results show that accurate posterior probabilities can be obtained with this method and that the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyze the effect of obtaining the posterior probability with different methods and different sampling rates. We show that the SVM method has more advantages under bad conditions and is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and for obtaining posterior probabilities for LDPC decoding.
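The idea of turning a demodulation statistic into a posterior probability for LDPC decoding can be sketched with a Platt-style sigmoid fit on a 1-D decision statistic. This logistic fit stands in for the paper's SVM classifier, and the toy Gaussian data below are assumed, not actual EBPSK filter outputs.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
# Toy 1-D decision statistics for bit 0 / bit 1 (assumed Gaussian clusters).
x = np.concatenate([rng.normal(0.0, 0.6, n), rng.normal(2.0, 0.6, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Fit a sigmoid p(bit=1 | s) = 1/(1 + exp(-(a*s + b))) by gradient descent on
# the logistic loss; its output is a soft input usable by an LDPC decoder.
a, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(a * x + b)))
    a -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

def posterior(s):
    """Posterior probability estimate that the transmitted bit was 1."""
    return 1.0 / (1.0 + np.exp(-(a * s + b)))
```

The calibrated posterior, rather than a hard threshold at the midpoint, is what allows the soft-decision LDPC decoder to realize the coding gain the abstract reports.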
How could the replica method improve accuracy of performance assessment of channel coding?
Energy Technology Data Exchange (ETDEWEB)
Kabashima, Yoshiyuki [Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 226-8502 (Japan)], E-mail: kaba@dis.titech.ac.jp
2009-12-01
We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff's bound over a code ensemble. We show that the resulting bound in the framework can be directly assessed by the replica method, which has been developed in statistical mechanics of disordered systems, whereas in Gallager's original methodology further replacement by another bound utilizing Jensen's inequality is necessary. Our approach associates a seemingly ad hoc restriction with respect to an adjustable parameter for optimizing the bound with a phase transition between two replica symmetric solutions, and can improve the accuracy of performance assessments of general code ensembles including low density parity check codes, although its mathematical justification is still open.
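For orientation, the Gallager random-coding bound that the paper re-derives via the replica method has, for a discrete memoryless channel with input distribution $Q$ and rate $R$ in nats, the standard form:

```latex
P_e \;\le\; \exp\!\bigl(-N\,E_r(R)\bigr),
\qquad
E_r(R) \;=\; \max_{0 \le \rho \le 1}\bigl[E_0(\rho) - \rho R\bigr],
\qquad
E_0(\rho) \;=\; -\ln \sum_{y}\Bigl[\sum_{x} Q(x)\,P(y \mid x)^{1/(1+\rho)}\Bigr]^{1+\rho}.
```

The adjustable parameter mentioned above is $\rho$; the replica approach evaluates the ensemble average of the underlying Chernoff-type bound directly, instead of first relaxing it with Jensen's inequality as in Gallager's original derivation.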
Development and application of sub-channel analysis code based on SCWR core
International Nuclear Information System (INIS)
Fu Shengwei; Xu Zhihong; Yang Yanhua
2011-01-01
The sub-channel analysis code SABER was developed for thermal-hydraulic analysis of supercritical water-cooled reactor (SCWR) fuel assemblies. An extended computational cell structure, new boundary conditions, a 3-dimensional heat conduction model and a water properties package were implemented in the SABER code, which can be used to simulate the thermal fuel assembly of an SCWR. To evaluate the applicability of the code, a steady state calculation of the fuel assembly was performed. The results indicate good applicability of the SABER code for simulating the counter-current flow and the heat exchange between coolant and moderator channels. (authors)
Compression and channel-coding algorithms for high-definition television signals
Alparone, Luciano; Benelli, Giuliano; Fabbri, A. F.
1990-09-01
In this paper, results of investigations about the effects of channel errors on the transmission of images compressed by means of techniques based on the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) are presented. Since compressed images are heavily degraded by noise in the transmission channel, more seriously in the case of VQ-coded images, theoretical studies and simulations are presented in order to define and evaluate this degradation. Some channel coding schemes are proposed in order to protect the information during transmission. Hamming codes of length 7, 15 and 31 have been used for DCT-compressed images, and more powerful codes such as the Golay (23,12) code for VQ-compressed images. Performances attainable with soft-decoding techniques are also evaluated; better quality images have been obtained than with classical hard-decoding techniques. All tests have been carried out to simulate the transmission of a digital image from an HDTV signal over an AWGN channel with PSK modulation.
Douik, Ahmed S.; Sorour, Sameh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2015-11-05
This paper considers the multicast decoding delay reduction problem for generalized instantly decodable network coding (G-IDNC) over persistent erasure channels with feedback imperfections. The feedback scenario discussed is the most general situation in which the sender does not always receive acknowledgments from the receivers after each transmission and the feedback communications are subject to loss. The decoding delay increment expressions are derived and employed to express the decoding delay reduction problem as a maximum weight clique problem in the G-IDNC graph. This paper provides a theoretical analysis of the expected decoding delay increase at each time instant. Problem formulations in simpler channel and feedback models are shown to be special cases of the proposed generalized formulation. Since finding the optimal solution to the problem is known to be NP-hard, a suboptimal greedy algorithm is designed and compared with blind approaches proposed in the literature. Through extensive simulations, the proposed algorithm is shown to outperform the blind methods in all situations and to achieve significant improvement, particularly for high time-correlated channels.
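The suboptimal greedy algorithm mentioned above can be sketched generically: repeatedly pick the heaviest remaining vertex that is adjacent to everything already chosen. The toy graph and weights below are assumed for illustration; in the paper, the vertices, adjacency, and weights come from the G-IDNC graph and the decoding-delay analysis.

```python
def greedy_weight_clique(vertices, weights, adj):
    """Greedy heuristic: repeatedly add the heaviest vertex adjacent to all chosen ones."""
    clique, cand = [], set(vertices)
    while cand:
        v = max(cand, key=lambda u: weights[u])       # heaviest remaining vertex
        clique.append(v)
        cand = {u for u in cand if u != v and v in adj[u]}  # keep only its neighbors
    return clique

# Toy instance (hypothetical vertices and weights, not a real G-IDNC graph):
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
clique = greedy_weight_clique([0, 1, 2, 3], {0: 5, 1: 4, 2: 3, 3: 10}, adj)
```

The greedy choice here picks the weight-10 vertex first even though the triangle {0, 1, 2} is larger, which illustrates why the heuristic is suboptimal yet cheap compared to exact maximum-weight-clique search, which is NP-hard.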
Use of color-coded sleeve shutters accelerates oscillograph channel selection
Bouchlas, T.; Bowden, F. W.
1967-01-01
Sleeve-type shutters mechanically adjust individual galvanometer light beams onto or away from selected channels on the oscillograph paper. In complex test setups, the sleeve-type shutters are color coded to separately identify each oscillograph channel. This technique could be used on any equipment using tubular galvanometer light sources.
Channel coding for underwater acoustic single-carrier CDMA communication system
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, turbo and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and min-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that UWA/SCCDMA based on RA, turbo and LDPC coding performs well, with a communication BER below 10^-6 in an underwater acoustic channel at low signal-to-noise ratio (SNR) from -12 dB to -10 dB, which is about 2 orders of magnitude lower than that of convolutional coding. The system based on turbo coding with the Log-MAP algorithm has the best performance.
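The Viterbi step listed above admits a compact sketch for the classic rate-1/2, constraint-length-3 convolutional code with octal generators (7,5); this is a generic textbook instance, not the paper's exact code or channel.

```python
G = [0b111, 0b101]   # generators (7,5) octal; rate 1/2, constraint length 3

def encode(bits):
    """Convolutional encoder starting from the all-zero state."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                       # shift register [b, b-1, b-2]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(rx):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = 10 ** 9
    pm = [0, INF, INF, INF]                          # path metrics, start in state 0
    paths = [[] for _ in range(4)]
    for i in range(0, len(rx), 2):
        r = rx[i:i + 2]
        new_pm, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if pm[s] >= INF:
                continue
            for b in (0, 1):                         # extend by one hypothesized bit
                reg = (b << 2) | s
                ns = reg >> 1
                exp = [bin(reg & g).count("1") & 1 for g in G]
                m = pm[s] + sum(e != q for e, q in zip(exp, r))
                if m < new_pm[ns]:
                    new_pm[ns], new_paths[ns] = m, paths[s] + [b]
        pm, paths = new_pm, new_paths
    return paths[pm.index(min(pm))]                  # survivor with the best metric

bits = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(bits)
rx[3] ^= 1                                           # inject one channel bit error
decoded = viterbi(rx)                                # Viterbi corrects the single error
```

Because this code has free distance 5, the maximum-likelihood survivor still matches the transmitted information sequence despite the injected error.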
Directory of Open Access Journals (Sweden)
Savitha H. M.
2010-09-01
A comparison of the performance of hard and soft-decision turbo coded Orthogonal Frequency Division Multiplexing systems with Quadrature Phase Shift Keying (QPSK) and 16-Quadrature Amplitude Modulation (16-QAM) is considered in the first section of this paper. The results show that the soft-decision method greatly outperforms the hard-decision method. The complexity of the demapper is reduced with the use of a simplified algorithm for 16-QAM demapping. In the later part of the paper, we consider the transmission of data over an additive white class A noise (AWAN) channel, using turbo coded QPSK and 16-QAM systems. We propose a novel turbo decoding scheme for the AWAN channel. We also compare the performance of turbo coded systems with QPSK and 16-QAM on the AWAN channel with two different channel values: one computed as per additive white Gaussian noise (AWGN) channel conditions and the other as per AWAN channel conditions. The results show that the use of the appropriate channel value in turbo decoding helps to combat the impulsive noise more effectively. The proposed model for the AWAN channel exhibits comparable bit error rate (BER) performance to the AWGN channel.
Low complexity source and channel coding for mm-wave hybrid fiber-wireless links
DEFF Research Database (Denmark)
Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan
2014-01-01
We report on the performance of channel and source coding applied to an experimentally realized hybrid fiber-wireless W-band link. Error control coding performance is presented for a wireless propagation distance of 3 m and 20 km fiber transmission. We report on peak signal-to-noise ratio performance …
Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels
Wang, Han
2010-01-01
Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…
Sub-channel/system coupled code development and its application to SCWR-FQT loop
International Nuclear Information System (INIS)
Liu, X.J.; Cheng, X.
2015-01-01
Highlights: • A coupled code is developed for SCWR accident simulation. • The feasibility of the code is shown by application to the SCWR-FQT loop. • Some measures are selected by sensitivity analysis. • The peak cladding temperature can be reduced effectively by the proposed measures. - Abstract: In the frame of the Super-Critical Reactor In Pipe Test Preparation (SCRIPT) project in China, one of the challenging tasks is to predict the transient performance of the SuperCritical Water Reactor-Fuel Qualification Test (SCWR-FQT) loop under accident conditions. Several thermal–hydraulic codes (a system code and a sub-channel code) are selected to perform the safety analysis. However, the system code cannot simulate the local behavior of the test bundle, and the sub-channel code is incapable of calculating the whole-system behavior of the test loop. Therefore, to combine the merits of both codes and minimize their shortcomings, a coupled sub-channel/system code is developed in this paper. Both the sub-channel code COBRA-SC and the system code ATHLET-SC are adapted to transient analysis of SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the newly developed coupled code, the whole-system behavior, including the safety-system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal–hydraulic parameters are predicted by the sub-channel code COBRA-SC. The codes are used to obtain the local thermal–hydraulic parameters in the SCWR-FQT fuel bundle under accident cases (e.g., a flow blockage during LOCA). Some measures to mitigate the accident consequences are proposed through a sensitivity study and trialed to demonstrate their effectiveness in the coupled simulation. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test loop, and that the peak cladding temperature caused by blockage in the fuel bundle can be reduced effectively by the safety measures.
Sub-channel/system coupled code development and its application to SCWR-FQT loop
Energy Technology Data Exchange (ETDEWEB)
Liu, X.J., E-mail: xiaojingliu@sjtu.edu.cn [School of Nuclear Science and Engineering, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240 (China); Cheng, X. [Institute of Fusion and Reactor Technology, Karlsruhe Institute of Technology, Vincenz-Prießnitz-Str. 3, 76131 Karlsruhe (Germany)
2015-04-15
Highlights: • A coupled code is developed for SCWR accident simulation. • The feasibility of the code is shown by application to the SCWR-FQT loop. • Some measures are selected by sensitivity analysis. • The peak cladding temperature can be reduced effectively by the proposed measures. - Abstract: In the frame of the Super-Critical Reactor In Pipe Test Preparation (SCRIPT) project in China, one of the challenging tasks is to predict the transient performance of the SuperCritical Water Reactor-Fuel Qualification Test (SCWR-FQT) loop under accident conditions. Several thermal–hydraulic codes (a system code and a sub-channel code) are selected to perform the safety analysis. However, the system code cannot simulate the local behavior of the test bundle, and the sub-channel code is incapable of calculating the whole-system behavior of the test loop. Therefore, to combine the merits of both codes and minimize their shortcomings, a coupled sub-channel/system code is developed in this paper. Both the sub-channel code COBRA-SC and the system code ATHLET-SC are adapted to transient analysis of SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the newly developed coupled code, the whole-system behavior, including the safety-system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal–hydraulic parameters are predicted by the sub-channel code COBRA-SC. The codes are used to obtain the local thermal–hydraulic parameters in the SCWR-FQT fuel bundle under accident cases (e.g., a flow blockage during LOCA). Some measures to mitigate the accident consequences are proposed through a sensitivity study and trialed to demonstrate their effectiveness in the coupled simulation. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test loop, and that the peak cladding temperature caused by blockage in the fuel bundle can be reduced effectively by the safety measures.
A Novel Criterion for Optimum MultilevelCoding Systems in Mobile Fading Channels
Institute of Scientific and Technical Information of China (English)
YUAN Dongfeng; WANG Chengxiang; YAO Qi; CAO Zhigang
2001-01-01
A novel criterion comprising a "capacity rule" and a "mapping rule" for the design of optimum MLC schemes over mobile fading channels is proposed. According to this theory, the performance of multilevel coding with multistage decoding (MLC/MSD) in mobile fading channels is investigated, in which BCH codes are chosen as component codes and three mapping strategies with 8ASK modulation are used. Numerical results indicate that when the code rates of the component codes in the MLC scheme are designed based on the "capacity rule", the performance of the system with block partitioning (BP) is optimum for Rayleigh fading channels, while the performance of the system with Ungerboeck partitioning (UP) is best for AWGN channels.
DELOCA, a code for simulation of CANDU fuel channel in thermal transients
International Nuclear Information System (INIS)
Mihalache, M.; Florea, Silviu; Ionescu, V.; Pavelescu, M.
2005-01-01
Full text: In certain LOCA scenarios in the CANDU fuel channel, ballooning of the pressure tube and contact with the calandria tube can occur. After the moment of contact, radial heat transfer from the cooling fluid to the moderator arises through the contact area. If the temperature of the channel walls increases, the contact area dries out, the heat transfer becomes inefficient, and the fuel channel could lose its integrity. The DELOCA code was developed to simulate the mechanical behaviour of the pressure tube during the pre-contact transient, and the mechanical and thermal behaviour of the pressure tube and calandria tube after the contact between the two tubes. The code contains several models: the creep of the Zr-2.5%Nb alloy, heat transfer by conduction through the cylindrical walls, channel failure criteria, and calculation of the heat transfer at the calandria tube - moderator interface. The code evaluates the contact and channel failure moments, and was systematically verified against the Contact1 and Cathena codes. This paper presents the results obtained at different rates of temperature increase. In addition, the contact moment for a RIH 5% postulated accident was calculated; the Cathena thermo-hydraulic code provided the input data. (authors)
DELOCA, a code for simulation of CANDU fuel channel in thermal transients
International Nuclear Information System (INIS)
Mihalache, M.; Florea, Silviu; Ionescu, V.; Pavelescu, M.
2005-01-01
In certain LOCA scenarios in the CANDU fuel channel, ballooning of the pressure tube and contact with the calandria tube can occur. After the moment of contact, radial heat transfer from the cooling fluid to the moderator arises through the contact area. If the temperature of the channel walls increases, the contact area dries out, the heat transfer becomes inefficient, and the fuel channel could lose its integrity. The DELOCA code was developed to simulate the mechanical behaviour of the pressure tube during the pre-contact transient, and the mechanical and thermal behaviour of the pressure tube and calandria tube after the contact between the two tubes. The code contains several models: the creep of the Zr-2.5%Nb alloy, heat transfer by conduction through the cylindrical walls, channel failure criteria, and calculation of the heat transfer at the calandria tube - moderator interface. The code evaluates the contact and channel failure moments, and was systematically verified against the Contact1 and Cathena codes. This paper presents the results obtained at different rates of temperature increase. In addition, the contact moment for a RIH 5% postulated accident was calculated; the Cathena thermo-hydraulic code provided the input data. (authors)
Phi Photoproduction in a Coupled-Channel Approach
Ozaki, S.; Nagahiro, H.; Hosaka, A.; Scholten, O.
2010-01-01
We investigate photoproduction of phi-mesons off protons within a coupled-channel effective-Lagrangian method which is based on the K-matrix approach. We take into account the pi N, rho N, eta N, K Lambda, K Sigma, K Lambda(1520) and phi N channels, and especially focus on the K Lambda(1520) channel. We
Energy efficient rateless codes for high speed data transfer over free space optical channels
Prakash, Geetha; Kulkarni, Muralidhar; Acharya, U. S.
2015-03-01
Terrestrial Free Space Optical (FSO) links transmit information using the atmosphere (free space) as the medium. In this paper, we investigate the use of Luby Transform (LT) codes as a means to mitigate the effects of data corruption induced by an imperfect channel, which usually takes the form of lost or corrupted packets. LT codes, a class of Fountain codes, can be used independently of the channel rate, and as many code words as required can be generated to recover all the message bits irrespective of the channel performance. Achieving error-free high data rates with limited energy resources is possible with FSO systems if error-correction codes with minimal power overhead are used. We also employ a combination of Binary Phase Shift Keying (BPSK), with provision for modification of the threshold, and optimized LT codes with belief-propagation decoding. These techniques provide additional protection even under strong turbulence regimes. Automatic Repeat Request (ARQ) is another method of improving link reliability, but its performance is limited by the number of retransmissions and the corresponding time delay. We show through theoretical computations and simulations that LT codes consume less energy per bit, and we validate the feasibility of using energy-efficient LT codes instead of ARQ for FSO links in optical wireless sensor networks within eye-safety limits.
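The LT encode/peel-decode cycle described above can be sketched in a few lines. The degree distribution and the seed-based packet generation below are simplifying assumptions for illustration; the robust soliton distribution of practical LT codes is omitted:

```python
import random

def lt_encode(message, num_packets, seed=0):
    """Generate LT-coded packets: each packet XORs a random subset of
    message blocks; the shared seed lets the receiver rebuild the subsets."""
    rng = random.Random(seed)
    k = len(message)
    packets = []
    for _ in range(num_packets):
        # Toy degree distribution (a stand-in for the robust soliton).
        degree = rng.choice([1, 1, 2, 2, 2, 3, 4])
        neighbors = frozenset(rng.sample(range(k), min(degree, k)))
        value = 0
        for i in neighbors:
            value ^= message[i]
        packets.append((neighbors, value))
    return packets

def lt_decode(packets, k):
    """Peeling decoder (belief propagation on the erasure channel):
    repeatedly resolve packets with a single undecided neighbor and
    substitute the recovered block into the remaining packets."""
    packets = [(set(n), v) for n, v in packets]
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for neighbors, value in packets:
            undecided = neighbors - set(recovered)
            if len(undecided) == 1:
                i = undecided.pop()
                v = value
                for j in neighbors - {i}:
                    v ^= recovered[j]
                recovered[i] = v
                progress = True
    return [recovered.get(i) for i in range(k)]
```

Because decoding succeeds whenever enough packets arrive, regardless of which ones were lost, the code is rateless, which is what makes it attractive for erasure-prone FSO links.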
Construction and Iterative Decoding of LDPC Codes Over Rings for Phase-Noisy Channels
Directory of Open Access Journals (Sweden)
William G. Cowley
2008-04-01
Full Text Available This paper presents the construction and iterative decoding of low-density parity-check (LDPC) codes for channels affected by phase noise. The LDPC code is based on integer rings and designed to converge under phase-noisy channels. We assume that phase variations are small over short blocks of adjacent symbols. A part of the constructed code is inherently built with this knowledge and hence able to withstand a phase rotation of 2π/M radians, where "M" is the number of phase symmetries in the signal set, that occur at different observation intervals. Another part of the code estimates the phase ambiguity present in every observation interval. The code makes use of simple blind or turbo phase estimators to provide phase estimates over every observation interval. We propose an iterative decoding schedule to apply the sum-product algorithm (SPA) on the factor graph of the code for its convergence. To illustrate the new method, we present the performance results of an LDPC code constructed over ℤ4 with quadrature phase shift keying (QPSK) modulated signals transmitted over a static channel, but affected by phase noise, which is modeled by the Wiener (random-walk) process. The results show that the code can withstand phase noise of 2° standard deviation per symbol with small loss.
Construction and Iterative Decoding of LDPC Codes Over Rings for Phase-Noisy Channels
Directory of Open Access Journals (Sweden)
Karuppasami Sridhar
2008-01-01
Full Text Available Abstract This paper presents the construction and iterative decoding of low-density parity-check (LDPC) codes for channels affected by phase noise. The LDPC code is based on integer rings and designed to converge under phase-noisy channels. We assume that phase variations are small over short blocks of adjacent symbols. A part of the constructed code is inherently built with this knowledge and hence able to withstand a phase rotation of 2π/M radians, where "M" is the number of phase symmetries in the signal set, that occur at different observation intervals. Another part of the code estimates the phase ambiguity present in every observation interval. The code makes use of simple blind or turbo phase estimators to provide phase estimates over every observation interval. We propose an iterative decoding schedule to apply the sum-product algorithm (SPA) on the factor graph of the code for its convergence. To illustrate the new method, we present the performance results of an LDPC code constructed over ℤ4 with quadrature phase shift keying (QPSK) modulated signals transmitted over a static channel, but affected by phase noise, which is modeled by the Wiener (random-walk) process. The results show that the code can withstand phase noise of 2° standard deviation per symbol with small loss.
An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem
Directory of Open Access Journals (Sweden)
Tu Zhenyu
2005-01-01
Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallelly and serially concatenated codes, and particularly parallel and serial turbo codes, where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provably optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
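To make the SF-ISF binning concrete, here is a minimal sketch using a (7,4) Hamming code as the linear channel code; this small code is an assumption for illustration (the paper's construction targets turbo codes). The encoder transmits only the 3-bit syndrome of the 7-bit source, and the decoder recovers the source from a side-information vector that differs from it in at most one bit:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i is the binary
# representation of i+1, so a nonzero syndrome directly names the error bit.
H = np.array([[(i >> 2) & 1 for i in range(1, 8)],
              [(i >> 1) & 1 for i in range(1, 8)],
              [i & 1 for i in range(1, 8)]])

def sf_compress(x):
    """Syndrome former: 7 source bits -> 3 syndrome bits."""
    return H.dot(x) % 2

def isf_decompress(syndrome, side_info):
    """Inverse syndrome former: recover x from its syndrome and a
    side-information vector y that differs from x in at most one bit."""
    err_syn = (syndrome + H.dot(side_info)) % 2          # syndrome of x ^ y
    pos = err_syn[0] * 4 + err_syn[1] * 2 + err_syn[2]   # error position + 1
    x_hat = side_info.copy()
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat
```

The rate is 3/7 bit per source bit, i.e. the syndrome plays the role of the bin index, and the side information selects the right member of the bin.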
Institute of Scientific and Technical Information of China (English)
YUAN Dongfeng; WANG Chengxiang; YAO Qi; CAO Zhigang
2001-01-01
Based on the "capacity rule", the performance of multilevel coding (MLC) schemes with different set-partitioning strategies and decoding methods in AWGN and Rayleigh fading channels is investigated, in which BCH codes are chosen as component codes and 8ASK modulation is used. Numerical results indicate that the MLC scheme with the UP strategy obtains optimal performance in AWGN channels, while BP is the best mapping strategy for Rayleigh fading channels. The BP strategy shows good robustness in both kinds of channels for realizing an optimum MLC system. Multistage decoding (MSD) is a sub-optimal decoding method of MLC for both channels. For the Ungerboeck partitioning (UP) and mixed partitioning (MP) strategies, MSD is strongly recommended for the MLC system, while for the BP strategy, PDL is suggested as a simple decoding method compared with MSD.
Variable-Length Coding with Stop-Feedback for the Common-Message Broadcast Channel
DEFF Research Database (Denmark)
Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe
2016-01-01
This paper investigates the maximum coding rate over a K-user discrete memoryless broadcast channel for the scenario where a common message is transmitted using variable-length stop-feedback codes. Specifically, upon decoding the common message, each decoder sends a stop signal to the encoder, which transmits continuously until it receives all K stop signals. We present nonasymptotic achievability and converse bounds for the maximum coding rate, which strengthen and generalize the bounds previously reported in Trillingsgaard et al. (2015) for the two-user case. An asymptotic analysis of these bounds reveals that, contrary to the point-to-point case, the second-order term in the asymptotic expansion of the maximum coding rate decays inversely proportionally to the square root of the average blocklength. This holds for certain nontrivial common-message broadcast channels, such as the binary...
Progressive transmission of images over fading channels using rate-compatible LDPC codes.
Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul
2006-12-01
In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.
Low Complexity Encoder of High Rate Irregular QC-LDPC Codes for Partial Response Channels
Directory of Open Access Journals (Sweden)
IMTAWIL, V.
2011-11-01
Full Text Available High-rate irregular QC-LDPC codes based on circulant permutation matrices, intended for efficient encoder implementation, are proposed in this article. The structure of the code is an approximate lower triangular matrix. In addition, we present two novel efficient encoding techniques for generating the redundant bits. The complexity of the encoder implementation depends on the number of parity bits of the code for the one-stage encoding, and on the length of the code for the two-stage encoding. The advantage of both encoding techniques is that few XOR gates are used in the encoder implementation. Simulation results on partial response channels also show that the BER performance of the proposed code has a gain over other QC-LDPC codes.
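The circulant-permutation building blocks that make such encoders cheap can be sketched as follows; the base (proto-) matrix and lift size used in the test are toy values, not the code parameters of the article:

```python
import numpy as np

def circulant(size, shift):
    """size x size circulant permutation matrix: the identity with every
    row cyclically shifted `shift` positions to the right."""
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def qc_ldpc_parity(base, z):
    """Expand a base matrix into a QC-LDPC parity-check matrix:
    entry -1 -> z x z zero block, entry s >= 0 -> circulant with shift s."""
    rows = []
    for row in base:
        blocks = [np.zeros((z, z), dtype=int) if s < 0 else circulant(z, s)
                  for s in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)
```

Because each block is fully described by a single shift value, encoding and storage reduce to cyclic shifts and XORs, which is the source of the low gate count mentioned above.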
LDPC code decoding adapted to the precoded partial response magnetic recording channels
International Nuclear Information System (INIS)
Lee, Jun; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo
2004-01-01
We propose a signal processing technique using an LDPC (low-density parity-check) code instead of the PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed by introducing a precoder admitting level detection at the receiver end and by modifying the likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems, and the proposed algorithm can improve the performance of conventional turbo-like systems
LDPC code decoding adapted to the precoded partial response magnetic recording channels
Energy Technology Data Exchange (ETDEWEB)
Lee, Jun E-mail: leejun28@sait.samsung.co.kr; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo
2004-05-01
We propose a signal processing technique using an LDPC (low-density parity-check) code instead of the PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed by introducing a precoder admitting level detection at the receiver end and by modifying the likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems, and the proposed algorithm can improve the performance of conventional turbo-like systems.
Error-Rate Bounds for Coded PPM on a Poisson Channel
Moision, Bruce; Hamkins, Jon
2009-01-01
Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
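As a companion to such analyses, the underlying channel is easy to simulate: in M-ary PPM on a Poisson channel, the pulsed slot draws Poisson(ns + nb) photon counts, the other M-1 slots draw Poisson(nb), and ML detection picks the largest count. The sketch below is a Monte-Carlo estimate of the uncoded symbol-error rate; the count means `ns` and `nb` are illustrative assumptions, not values from the cited work:

```python
import math
import random

def ppm_poisson_ser(M=16, ns=10.0, nb=0.2, trials=2000, seed=1):
    """Monte-Carlo symbol-error rate of uncoded M-ary PPM on a memoryless
    Poisson channel with ML (max-count) detection and random tie-breaking."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's method; adequate for the small means used here.
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    errors = 0
    for _ in range(trials):
        sent = rng.randrange(M)
        counts = [poisson(ns + nb) if s == sent else poisson(nb)
                  for s in range(M)]
        best = max(range(M), key=lambda s: (counts[s], rng.random()))
        if best != sent:
            errors += 1
    return errors / trials
```

Simulations like this are exactly what the derived bounds are meant to replace at high SNR, where the error events become too rare to sample efficiently.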
STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM
Directory of Open Access Journals (Sweden)
H. Prashantha Kumar
2012-03-01
Full Text Available The boundaries between block and convolutional codes have become blurred after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for convolutional codes have been applied to decoding certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known, and several of them have been used in applications ranging from deep space communication to error control in storage systems. The primary difficulty with applying the Viterbi or BCJR algorithms to the decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity: the decoding effort is fixed and grows with block length, and thus only short block length codes can be used. Therefore, an important practical question is whether a suboptimal, realizable soft-decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near-optimum decoding is used as motivation for the investigation of different soft-decision decoding methods for linear block codes which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft-decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10^-5 with respect to conventional hard-decision decoding are demonstrated for these codes.
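A minimal sketch of soft-decision stack decoding over the code tree of an (8, 4) extended-Hamming-type code follows. The particular parity equations and the per-bit bias metric below are illustrative assumptions, not the exact construction of the paper:

```python
import heapq
import itertools

def hamming84_codewords():
    """Codebook of an (8,4) extended-Hamming-type code: 4 data bits,
    3 parity bits, plus an overall parity bit (minimum distance 4)."""
    words = []
    for d0, d1, d2, d3 in itertools.product((0, 1), repeat=4):
        cw = [d0, d1, d2, d3,
              d0 ^ d1 ^ d2, d1 ^ d2 ^ d3, d0 ^ d1 ^ d3]
        cw.append(sum(cw) % 2)               # overall parity bit
        words.append(tuple(cw))
    return words

def stack_decode(received, codebook):
    """Stack decoding over the code tree: each level fixes one code bit,
    and only prefixes of valid codewords are extended.  The per-bit metric
    r*(1-2b) - |r| is 0 when bit b matches the hard decision on r and
    negative otherwise, keeping paths of different lengths comparable
    (a simple stand-in for the Fano bias)."""
    n = len(received)
    prefixes = {cw[:l] for cw in codebook for l in range(n + 1)}
    heap = [(0.0, ())]                       # max-heap via negated metric
    while heap:
        neg_metric, path = heapq.heappop(heap)
        if len(path) == n:
            return list(path)                # first full path to pop wins
        for bit in (0, 1):
            new = path + (bit,)
            if new in prefixes:
                r = received[len(path)]
                inc = r * (1 - 2 * bit) - abs(r)
                heapq.heappush(heap, (neg_metric - inc, new))
    return None
```

Since every metric increment is non-positive, the first completed path popped from the stack has the best metric among all codewords, while typically only a small fraction of the tree is ever visited.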
Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.
Rice, R. F.
1972-01-01
The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.
Channel coding study for ultra-low power wireless design of autonomous sensor works
Zhang, P.; Huang, Li; Willems, F.M.J.
2011-01-01
Ultra-low power wireless design is highly demanded for building up autonomous wireless sensor networks (WSNs) for many application areas. To keep certain quality of service with limited power budget, channel coding techniques can be applied to maintain the robustness and reliability of WSNs. In this
Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network
Directory of Open Access Journals (Sweden)
Kai Lin
2016-07-01
Full Text Available With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to their high degree of complexity and bandwidth bottlenecks, millimeter-wave sensor networks still face numerous problems. In this paper, we propose a novel content-based multi-channel network coding (CMNC) algorithm, which uses data fusion, multiple channels and network coding to improve data transmission. The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. Using the result of this classification, the CMNC algorithm also provides a channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared with other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods.
A multiobjective approach to the genetic code adaptability problem.
de Oliveira, Lariza Laura; de Oliveira, Paulo S L; Tinós, Renato
2015-02-19
The organization of the canonical code has intrigued researchers since it was first described. If we consider all codes mapping the 64 codons into 20 amino acids and one stop codon, there are more than 1.51×10^84 possible genetic codes. The main question related to the organization of the genetic code is why exactly the canonical code was selected among this huge number of possible genetic codes. Many researchers argue that the organization of the canonical code is a product of natural selection and that the code's robustness against mutations supports this hypothesis. In order to investigate the natural selection hypothesis, some researchers employ optimization algorithms to identify regions of the genetic code space where the best codes, according to a given evaluation function, can be found (the engineering approach). The optimization process uses only one objective to evaluate the codes, generally based on the robustness for an amino acid property. Only one objective is also employed in the statistical approach for the comparison of the canonical code with random codes. We propose a multiobjective approach where two or more objectives are considered simultaneously to evaluate the genetic codes. In order to test our hypothesis that the multiobjective approach is useful for the analysis of genetic code adaptability, we implemented a multiobjective optimization algorithm where two objectives are simultaneously optimized. Using as objectives the robustness against mutation with respect to the amino acid property polar requirement (objective 1) and the robustness with respect to hydropathy index or molecular volume (objective 2), we found solutions closer to the canonical genetic code in terms of robustness than the results using only one objective reported by other authors. Using more objectives, more optimal solutions are obtained and, as a consequence, more information can be used to investigate the adaptability of the genetic code. The multiobjective approach
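The single-objective building block of such studies, the robustness of a code against single-point mutations for one amino-acid property, together with the Pareto-dominance test used in multiobjective comparison, can be sketched as follows. The code dictionary and the property table passed in are placeholders, not the canonical code or the real polar-requirement values:

```python
import itertools

BASES = "UCAG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]

def robustness(code, prop):
    """Mean squared change of an amino-acid property over all single-point
    mutations of all codons (mutations to or from a stop codon, mapped to
    None, are skipped); lower values mean a more robust code."""
    total, count = 0.0, 0
    for codon in CODONS:
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mutant = codon[:pos] + b + codon[pos + 1:]
                a, m = code[codon], code[mutant]
                if a is None or m is None:
                    continue
                total += (prop[a] - prop[m]) ** 2
                count += 1
    return total / count

def dominates(obj_a, obj_b):
    """Pareto dominance: a dominates b if it is no worse in every
    objective and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(obj_a, obj_b)) and obj_a != obj_b
```

With two property tables, each candidate code maps to a pair of robustness values, and the nondominated codes form the Pareto front that the multiobjective search explores.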
Error Floor Analysis of Coded Slotted ALOHA over Packet Erasure Channels
DEFF Research Database (Denmark)
Ivanov, Mikhail; Graell i Amat, Alexandre; Brannstrom, F.
2014-01-01
We present a framework for the analysis of the error floor of coded slotted ALOHA (CSA) for finite frame lengths over the packet erasure channel. The error floor is caused by stopping sets in the corresponding bipartite graph, whose enumeration is, in general, not a trivial problem. We therefore identify the most dominant stopping sets for the distributions of practical interest. The derived analytical expressions allow us to accurately predict the error floor at low to moderate channel loads and characterize the unequal error protection inherent in CSA.
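The stopping-set behaviour behind the error floor is easy to reproduce with a small successive-interference-cancellation simulation; the frame parameters and the regular repetition degree below are illustrative assumptions:

```python
import random

def csa_simulate(num_users=50, num_slots=100, degree=3, seed=7):
    """Coded slotted ALOHA with successive interference cancellation:
    each user repeats its packet in `degree` random slots; the receiver
    repeatedly decodes slots holding a single remaining replica and
    cancels that user's other replicas.  Users left unresolved when no
    singleton slot remains form a stopping set.  Returns the fraction
    of resolved users."""
    rng = random.Random(seed)
    slots = [set() for _ in range(num_slots)]
    for u in range(num_users):
        for s in rng.sample(range(num_slots), degree):
            slots[s].add(u)
    resolved = set()
    progress = True
    while progress:
        progress = False
        for s in range(num_slots):
            if len(slots[s]) == 1:
                u = next(iter(slots[s]))
                resolved.add(u)
                for t in range(num_slots):
                    slots[t].discard(u)   # interference cancellation
                progress = True
    return len(resolved) / num_users
```

At low load almost every frame decodes completely, so the residual loss is dominated by the rare stopping-set configurations that the analytical enumeration above targets.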
The fuel and channel thermal/mechanical behaviour code FACTAR 2.0 (LOCA)
International Nuclear Information System (INIS)
Westbye, C.J.; Mackinnon, J.C.; Gu, B.W.
1996-01-01
The computer code FACTAR 2.0 (LOCA) models the thermal and mechanical response of components within a single CANDU fuel channel under loss-of-coolant accident conditions. This code version is the successor to the FACTAR 1.x code series, and features many modelling enhancements over its predecessor. In particular, the thermal hydraulic treatment has been extended to model reverse and bi-directional coolant flow, and the axial variation in coolant flow rate. Thermal radiation is calculated by a detailed surface-to-surface model, and the ability to represent a greater range of geometries (including experimental configurations employed in code validation) has been implemented. Details of these new code treatments are described in this paper. (author)
Improved virtual channel noise model for transform domain Wyner-Ziv video coding
DEFF Research Database (Denmark)
Huang, Xin; Forchhammer, Søren
2009-01-01
Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video coding is proposed, which utilizes cross-band correlation to estimate the Laplacian parameters more accurately. Experimental results show that the proposed noise model can improve the rate-distortion (RD) performance.
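The Laplacian virtual-channel model can be fitted per frequency band from the residuals between side-information and original transform coefficients. A minimal sketch follows; the cross-band weighting proposed in the paper is not reproduced here:

```python
def laplacian_alpha(residuals):
    """ML estimate of the Laplacian scale parameter for the virtual
    channel noise model f(x) = (alpha/2) * exp(-alpha * |x|):
    alpha = 1 / E|x|."""
    mean_abs = sum(abs(r) for r in residuals) / len(residuals)
    return float('inf') if mean_abs == 0 else 1.0 / mean_abs

def band_alphas(bands):
    """One noise parameter per transform band, fitted from the residuals
    between side-information coefficients and original coefficients."""
    return [laplacian_alpha(b) for b in bands]
```

The decoder then turns each alpha into soft input, since a larger alpha means the side information is more reliable in that band.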
Development of a computer code for thermohydraulic analysis of a heated channel in transients
International Nuclear Information System (INIS)
Jafari, J.; Kazeminejad, H.; Davilu, H.
2004-01-01
This paper discusses the thermohydraulic analysis of a heated channel of a nuclear reactor in transients using a computer code developed by the author. The geometry considered is a channel of a nuclear reactor with cylindrical or planar fuel rods. The coolant is water and flows over the outer surface of the fuel rod. To model the heat transfer in the fuel rod, the two-dimensional time-dependent conduction equation is solved by a combination of numerical methods: the orthogonal collocation method in the radial direction and the finite difference method in the axial direction. For coolant modelling, the single-phase time-dependent energy equation is used and solved by the finite difference method. The combination of the first module, which solves the conduction in the fuel rod, and a second one, which solves the energy balance in the coolant region, constitutes the computer code (Thyc-1) for thermohydraulic analysis of a heated channel in transients. The orthogonal collocation method maintains the accuracy and computing time of conventional finite difference methods, while the computer storage is reduced by a factor of two. The same problem was modelled with the RELAP5/M3 system code to assess the validity of the Thyc-1 code. The good agreement of the results qualifies the developed code.
Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih
2016-04-21
Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy, negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters producing distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can decode signals from different microfluidic channels with >90% accuracy through computation, even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to create integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
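The CDMA idea behind microfluidic CODES can be illustrated with a minimal sketch (the code set and decoding here are our own toy construction, not the authors' design): give each channel a mutually orthogonal Walsh-Hadamard code, let overlapping detection pulses sum at the single output, and separate them again by correlation.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = hadamard(4)              # rows: one +/-1 code per microfluidic channel
events = np.array([1, 0, 1, 0])  # channels 0 and 2 each detect a particle
signal = events @ codes          # overlapping pulses sum at the single output

# Correlating against each code separates the channels, because the rows of a
# Hadamard matrix are orthogonal (row_i . row_j = n if i == j, else 0).
scores = signal @ codes.T / codes.shape[1]
print(scores)  # -> [1. 0. 1. 0.]
```

Even when two channels fire simultaneously, the correlator recovers which ones did, which is the property the paper's >90% decoding accuracy relies on.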
Delay reduction in persistent erasure channels for generalized instantly decodable network coding
Sorour, Sameh; Aboutorab, Neda; Sadeghi, Parastoo; Karim, Mohammad Shahriar; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2013-01-01
In this paper, we consider the problem of minimizing the decoding delay of generalized instantly decodable network coding (G-IDNC) in persistent erasure channels (PECs). By persistent erasure channels, we mean erasure channels with memory, which are modeled as a Gilbert-Elliott two-state Markov model with good and bad channel states. In this scenario, the channel erasure dependence, represented by the transition probabilities of this channel model, is an important factor that could be exploited to reduce the decoding delay. We first formulate the G-IDNC minimum decoding delay problem in PECs as a maximum weight clique problem over the G-IDNC graph. Since finding the optimal solution of this formulation is NP-hard, we propose two heuristic algorithms to solve it and compare them using extensive simulations. Simulation results show that each of these heuristics outperforms the other in certain ranges of channel memory levels. They also show that the proposed heuristics significantly outperform both the optimal strict IDNC in the literature and the channel-unaware G-IDNC algorithms. © 2013 IEEE.
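The persistent erasure channel above can be simulated with a short sketch (parameter values are illustrative, not from the paper): a Gilbert-Elliott channel alternates between a good and a bad state via a two-state Markov chain, and erasures are far more likely in the bad state, which produces the bursty erasure patterns the heuristics exploit.

```python
import numpy as np

def gilbert_elliott_erasures(n, p_gb=0.05, p_bg=0.3,
                             e_good=0.01, e_bad=0.8, seed=0):
    """Simulate n channel uses; return a boolean erasure indicator per use."""
    rng = np.random.default_rng(seed)
    state = 0  # 0 = good, 1 = bad
    erased = np.empty(n, dtype=bool)
    for t in range(n):
        erased[t] = rng.random() < (e_bad if state else e_good)
        # Markov transition: good -> bad w.p. p_gb, bad -> good w.p. p_bg
        if state == 0 and rng.random() < p_gb:
            state = 1
        elif state == 1 and rng.random() < p_bg:
            state = 0
    return erased

er = gilbert_elliott_erasures(10_000)
print(f"overall erasure rate ~ {er.mean():.3f}")
```

The steady-state fraction of time in the bad state is p_gb / (p_gb + p_bg), so the long-run erasure rate interpolates between e_good and e_bad; the transition probabilities are exactly the channel-memory information the G-IDNC scheduler exploits.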
A decoupling approach to classical data transmission over quantum channels
DEFF Research Database (Denmark)
Dupont-Dupuis, Fréderic; Szehr, Oleg; Tomamichel, Marco
2014-01-01
While many coding problems can be solved this way, one of the most basic remains impervious to a direct application of this method: sending classical information through a quantum channel. We will show that this problem can, in fact, be solved using decoupling ideas, specifically by proving a dequantizing theorem, which...
Joint Network Coding and Opportunistic Scheduling for the Bidirectional Relay Channel
Shaqfeh, Mohammad; Alnuweiri, Hussein; Alouini, Mohamed-Slim; Zafar, Ammar
2013-01-01
In this paper, we consider a two-way communication system in which two users communicate with each other through an intermediate relay over block-fading channels. We investigate the optimal opportunistic scheduling scheme in order to maximize the long-term average transmission rate in the system assuming symmetric information flow between the two users. Based on the channel state information, the scheduler decides that either one of the users transmits to the relay, or the relay transmits to a single user or broadcasts to both users a combined version of the two users’ transmitted information by using linear network coding. We obtain the optimal scheduling scheme by using the Lagrangian dual problem. Furthermore, in order to characterize the gains of network coding and opportunistic scheduling, we compare the achievable rate of the system versus suboptimal schemes in which the gains of network coding and opportunistic scheduling are partially exploited.
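The linear network coding step at the relay can be sketched in a few lines (payloads here are illustrative): instead of forwarding each user's packet in its own slot, the relay broadcasts the XOR of the two packets, and each user strips out its own contribution to recover the other's, saving one transmission.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

pkt_user1 = b"HELLO"   # packet sent by user 1 to the relay
pkt_user2 = b"WORLD"   # packet sent by user 2 to the relay

coded = xor_bytes(pkt_user1, pkt_user2)   # single broadcast from the relay

# Each user XORs the broadcast with its own packet to obtain the other's.
assert xor_bytes(coded, pkt_user1) == pkt_user2
assert xor_bytes(coded, pkt_user2) == pkt_user1
```

This is why the scheduler's third option (broadcast a combined version) can outperform forwarding: one relay slot serves both users at once whenever both have fresh packets buffered.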
The coding theorem for a class of quantum channels with long-term memory
International Nuclear Information System (INIS)
Datta, Nilanjana; Dorlas, Tony C
2007-01-01
In this paper, we consider the transmission of classical information through a class of quantum channels with long-term memory, which are convex combinations of memoryless channels. Hence, the memory of such channels can be considered to be given by a Markov chain which is aperiodic but not irreducible. We prove the coding theorem and weak converse for this class of channels. The main techniques that we employ are a quantum version of Feinstein's fundamental lemma (Feinstein A 1954 IRE Trans. PGIT 4 2-22, Khinchin A I 1957 Mathematical Foundations of Information Theory: II. On the Fundamental Theorems of Information Theory (New York: Dover) chapter IV) and a generalization of Helstrom's theorem (Helstrom C W 1976 Quantum detection and estimation theory Mathematics in Science and Engineering vol 123 (London: Academic))
Space-Time Coded MC-CDMA: Blind Channel Estimation, Identifiability, and Receiver Design
Directory of Open Access Journals (Sweden)
Li Hongbin
2002-01-01
Integrating the strengths of multicarrier (MC) modulation and code division multiple access (CDMA), MC-CDMA systems are of great interest for future broadband transmissions. This paper considers the problem of channel identification and signal combining/detection schemes for MC-CDMA systems equipped with multiple transmit antennas and space-time (ST) coding. In particular, a subspace-based blind channel identification algorithm is presented. Identifiability conditions are examined and specified which guarantee unique and perfect (up to a scalar) channel estimation when knowledge of the noise subspace is available. Several popular single-user signal combining schemes, namely maximum ratio combining (MRC) and equal gain combining (EGC), which are often utilized in conventional single-transmit-antenna MC-CDMA systems, are extended to the ST-coded MC-CDMA (STC-MC-CDMA) system to perform joint combining and decoding. In addition, a linear multiuser minimum mean-squared error (MMSE) detection scheme is presented, which is shown to outperform MRC and EGC at some increased computational complexity. Numerical examples are presented to evaluate and compare the proposed channel identification and signal detection/combining techniques.
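The two single-user combiners the abstract extends can be contrasted in a small sketch (a generic diversity-combining illustration, not the paper's STC-MC-CDMA receiver): MRC weights each branch by its conjugate channel gain, while EGC only co-phases the branches with unit magnitude weights.

```python
import numpy as np

rng = np.random.default_rng(1)
n_branches = 4
# Rayleigh-faded branch gains and a known transmitted symbol
h = (rng.standard_normal(n_branches) + 1j * rng.standard_normal(n_branches)) / np.sqrt(2)
s = 1.0 + 0j
noise = 0.1 * (rng.standard_normal(n_branches) + 1j * rng.standard_normal(n_branches))
r = h * s + noise                 # received signal on each branch

# Maximum ratio combining: weight by conj(h), normalize by total branch power
mrc = np.sum(np.conj(h) * r) / np.sum(np.abs(h) ** 2)
# Equal gain combining: co-phase only, unit-magnitude weights
egc = np.sum(np.exp(-1j * np.angle(h)) * r) / np.sum(np.abs(h))

print(abs(mrc - s), abs(egc - s))  # both estimates land near the true symbol
```

MRC is optimal for independent Gaussian noise per branch; EGC trades a small SNR loss for not needing branch amplitude estimates, which is why both are natural baselines against the multiuser MMSE detector.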
Validation of system codes RELAP5 and SPECTRA for natural convection boiling in narrow channels
Energy Technology Data Exchange (ETDEWEB)
Stempniewicz, M.M., E-mail: stempniewicz@nrg.eu; Slootman, M.L.F.; Wiersema, H.T.
2016-10-15
Highlights: • Computer codes RELAP5/Mod3.3 and SPECTRA 3.61 validated for boiling in narrow channels. • Validated codes can be used for LOCA analyses in research reactors. • Code validation based on natural convection boiling in narrow channels experiments. - Abstract: Safety analyses of LOCA scenarios in nuclear power plants are performed with so-called thermal-hydraulic system codes, such as RELAP5. Such codes are validated for the typical fuel geometries applied in nuclear power plants. The question considered by this article is whether the codes can be applied to LOCA analyses in research reactors, in particular for exceeding CHF in very narrow channels. In order to answer this question, validation calculations were performed with two thermal-hydraulic system codes: RELAP and SPECTRA. The validation was based on the natural convection boiling in narrow channels experiments performed by Prof. Monde et al. in the years 1990-2000. In total, 42 vertical tube and annulus experiments were simulated with both codes, and good agreement of the calculated values with the measured data was observed. The main conclusions are: • The computer codes RELAP5/Mod 3.3 (US NRC version) and SPECTRA 3.61 have been validated for natural convection boiling in narrow channels using the experiments of Monde. The dimensions applied in the experiments cover the range of values observed in typical research reactors; therefore it is concluded that both codes are validated and can be used for LOCA analyses in research reactors, including natural convection boiling. The applicability range of the present validation is: hydraulic diameters of 1.1 ⩽ D_hyd ⩽ 9.0 mm, heated lengths of 0.1 ⩽ L ⩽ 1.0 m, pressures of 0.10 ⩽ P ⩽ 0.99 MPa. • In most calculations the burnout was predicted to occur at lower power than that observed in the experiments; in several cases the burnout was observed at higher power. The overprediction was not larger than 16% in RELAP and 15% in SPECTRA.
BER EVALUATION OF LDPC CODES WITH GMSK IN NAKAGAMI FADING CHANNEL
Directory of Open Access Journals (Sweden)
Surbhi Sharma
2010-06-01
LDPC codes (Low Density Parity Check codes) have already proved their efficacy, showing performance near the Shannon limit. Channel coding schemes are spectrally inefficient, as using an unfiltered binary data stream to modulate an RF carrier produces an RF spectrum of considerable bandwidth. Techniques have been developed to improve this spectral efficiency and ease detection. GMSK, or Gaussian-filtered Minimum Shift Keying, uses a Gaussian filter of an appropriate bandwidth to make the system spectrally efficient. The Nakagami model accommodates both less and more severe conditions than the Rayleigh and Rician models, and provides a better fit to mobile communication channel data. In this paper we demonstrate the performance of Low Density Parity Check codes with GMSK modulation (BT product = 0.25) in a Nakagami fading channel. The results show that the average bit error rate decreases as the 'm' parameter increases (less fading).
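The closing claim, that BER falls as the Nakagami 'm' parameter grows, can be checked with a back-of-the-envelope simulation (our illustration for uncoded BPSK, not the paper's LDPC/GMSK setup): the Nakagami-m power gain is Gamma-distributed with shape m and unit mean, and m = 1 recovers Rayleigh fading.

```python
import numpy as np

def bpsk_ber_nakagami(m, snr_db=10.0, n=200_000, seed=0):
    """Monte Carlo BER of uncoded BPSK over flat Nakagami-m fading."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    # Nakagami-m power gain: Gamma(shape=m, scale=1/m), unit mean
    gain = rng.gamma(shape=m, scale=1.0 / m, size=n)
    bits = rng.integers(0, 2, size=n)
    symbols = 1 - 2 * bits                       # BPSK: 0 -> +1, 1 -> -1
    noise = rng.standard_normal(n) / np.sqrt(2 * snr)
    received = np.sqrt(gain) * symbols + noise
    return np.mean((received < 0) != (bits == 1))

for m in (1.0, 2.0, 4.0):
    print(f"m = {m}: BER ~ {bpsk_ber_nakagami(m):.4f}")
```

At 10 dB the m = 1 (Rayleigh) curve sits around a few percent while larger m drops the error rate by an order of magnitude or more, matching the qualitative trend reported in the paper.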
Position-based coding and convex splitting for private communication over quantum channels
Wilde, Mark M.
2017-10-01
The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender X, a legitimate quantum receiver B, and a quantum eavesdropper E. The goal of a private communication protocol that uses such a channel is for the sender X to transmit a message in such a way that the legitimate receiver B can decode it reliably, while the eavesdropper E learns essentially nothing about which message was transmitted. The ε-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ε ∈ (0,1). The present paper provides a lower bound on the ε-one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between X and B and the "alternate" smooth max-information between X and E. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.
Directory of Open Access Journals (Sweden)
Sonia Aïssa
2008-05-01
This paper investigates the effects of channel estimation error at the receiver on the achievable rate of distributed space-time block coded transmission. We consider that multiple transmitters cooperate to send the signal to the receiver and derive lower and upper bounds on the mutual information of distributed space-time block codes (D-STBCs) when the channel gains and channel estimation error variances pertaining to different transmitter-receiver links are unequal. Then, assessing the gap between these two bounds, we provide a limiting value that upper bounds the latter at any input transmit powers, and also show that the gap is minimum if the receiver can estimate the channels of different transmitters with the same accuracy. We further investigate positioning the receiving node such that the mutual information bounds of D-STBCs and their robustness to the variations of the subchannel gains are maximum, as long as the summation of these gains is constant. Furthermore, we derive the optimum power transmission strategy to achieve the outage capacity lower bound of D-STBCs under arbitrary numbers of transmit and receive antennas, and provide closed-form expressions for this capacity metric. Numerical simulations are conducted to corroborate our analysis and quantify the effects of imperfect channel estimation.
SYN3D: a single-channel, spatial flux synthesis code for diffusion theory calculations
Energy Technology Data Exchange (ETDEWEB)
Adams, C. H.
1976-07-01
This report is a user's manual for SYN3D, a computer code which uses single-channel, spatial flux synthesis to calculate approximate solutions to two- and three-dimensional, finite-difference, multigroup neutron diffusion theory equations. SYN3D is designed to run in conjunction with any one of several one- and two-dimensional, finite-difference codes (required to generate the synthesis expansion functions) currently being used in the fast reactor community. The report describes the theory and equations, the use of the code, and the implementation on the IBM 370/195 and CDC 7600 of the version of SYN3D available through the Argonne Code Center.
Characterization and Optimization of LDPC Codes for the 2-User Gaussian Multiple Access Channel
Directory of Open Access Journals (Sweden)
Declercq David
2007-01-01
We address the problem of designing good LDPC codes for the Gaussian multiple access channel (MAC). The framework we choose is to design multiuser LDPC codes with joint belief propagation decoding on the joint graph of the 2-user case. Our main result compared to existing work is to express analytically the EXIT functions of the multiuser decoder with two different approximations of the density evolution. This allows us to propose a very simple linear programming optimization for the complicated problem of LDPC code design with joint multiuser decoding. The stability condition for our case is derived and used in the optimization constraints. The codes that we obtain for the 2-user case are quite good for various rates, especially considering the very simple optimization procedure.
Large Eddy Simulation of turbulent flows in compound channels with a finite element code
International Nuclear Information System (INIS)
Xavier, C.M.; Petry, A.P.; Moeller, S.V.
2011-01-01
This paper presents a numerical investigation of the developing flow in a compound channel formed by a rectangular main channel and a gap in one of the sidewalls. A three-dimensional Large Eddy Simulation code with the classic Smagorinsky model is introduced, where the transient flow is modeled through the conservation equations of mass and momentum of a quasi-incompressible, isothermal continuous medium. The Finite Element Method, a Taylor-Galerkin scheme, and linear hexahedral elements are applied. Numerical results for the velocity profile show the development of a shear layer, in agreement with experimental results obtained with a Pitot tube and hot wires. (author)
Performance analysis for a chaos-based code-division multiple access system in wide-band channel
Directory of Open Access Journals (Sweden)
Ciprian Doru Giurcăneanu
2015-08-01
Code-division multiple access technology is widely used in telecommunications, and its performance has been extensively investigated in the past. Theoretical results for the case of a wide-band transmission channel were not available until recently. The novel formulae published in 2014 can have an important impact on the future of wireless multiuser communications, but limitations come from the Gaussian approximations used in their derivation. In this Letter, the authors obtain more accurate expressions for the bit error rate (BER) in the case when the wide-band channel model is two-ray, with Rayleigh fading. In the authors' approach, the spreading sequences are assumed to be generated by the logistic map, given by the Chebyshev polynomial function of order two. Their theoretical and experimental results show clearly that the previous results on BER, which rely on the crude Gaussian approximation, are over-pessimistic.
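The chaotic spreading-sequence generator the abstract refers to can be sketched as follows (the seed and sequence length are our choices): iterating the order-2 Chebyshev polynomial T_2(x) = 2x^2 - 1 on (-1, 1), which is conjugate to the logistic map under a change of variables, and taking signs yields a binary +/-1 chip sequence.

```python
import numpy as np

def chebyshev_sequence(x0, length):
    """Generate +/-1 chips by iterating T_2(x) = 2x^2 - 1 and taking signs."""
    x, seq = x0, []
    for _ in range(length):
        x = 2.0 * x * x - 1.0      # order-2 Chebyshev map, stays in [-1, 1]
        seq.append(1 if x >= 0 else -1)
    return np.array(seq)

# x0 should avoid the map's fixed points (e.g. x = 1); 0.3 is a safe choice
chips = chebyshev_sequence(x0=0.3, length=16)
print(chips)
```

Different initial conditions x0 give different, weakly correlated chip sequences, which is what makes chaotic maps attractive as a spreading-code family in this line of work.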
Multi codes and multi-scale analysis for void fraction prediction in hot channel for VVER-1000/V392
International Nuclear Information System (INIS)
Hoang Minh Giang; Hoang Tan Hung; Nguyen Huu Tiep
2015-01-01
Recently, an approach based on multiple codes and multi-scale analysis has been widely applied to study core thermal-hydraulic behavior such as void fraction prediction. Better results are achieved by using multiple or coupled codes, such as PARCS and RELAP5. The advantage of multi-scale analysis is zooming in on the part of the simulated domain of interest for detailed investigation. Therefore, in this study, the multiple codes MCNP5, RELAP5 and CTF, as well as a multi-scale analysis based on RELAP5 and CTF, are applied to investigate the void fraction in the hot channel of the VVER-1000/V392 reactor. Since the VVER-1000/V392 is a typical advanced reactor that can be considered the basis for the later VVER-1200, understanding core behavior in transient conditions is necessary in order to investigate VVER technology. It is shown that the near-wall boiling term Γ_w in RELAP5, based on the Lahey mechanistic method, may not predict void fraction as accurately as a smaller-scale code such as CTF. (author)
Joint beam design and user selection over non-binary coded MIMO interference channel
Li, Haitao; Yuan, Haiying
2013-03-01
In this paper, we discuss the problem of sum-rate improvement for coded MIMO interference systems, and propose joint beam design and user selection over the interference channel. First, we formulate a non-binary LDPC-coded MIMO interference network model. Then, a least-squares beam design for the MIMO interference system is derived, and a low-complexity user selection is presented. Simulation results confirm that the sum rate can be improved by joint user selection and beam design compared with a single interference-aligning beamformer.
Rice, R. F.; Hilbert, E. E. (Inventor)
1976-01-01
A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
Performance Analysis of Iterative Decoding Algorithms for PEG LDPC Codes in Nakagami Fading Channels
Directory of Open Access Journals (Sweden)
O. Al Rasheed
2013-11-01
In this paper we give a comparative analysis of decoding algorithms for Low Density Parity Check (LDPC) codes in a channel with a Nakagami distribution of the fading envelope. We consider the Progressive Edge-Growth (PEG) method and the Improved PEG method for parity check matrix construction, which can be used to avoid short girths, small trapping sets and a high error floor. A comparative analysis of several classes of LDPC codes under various propagation conditions and different decoding algorithms is also presented.
System Performance of Concatenated STBC and Block Turbo Codes in Dispersive Fading Channels
Directory of Open Access Journals (Sweden)
Kam Tai Chan
2005-05-01
A new scheme concatenating the block turbo code (BTC) with the space-time block code (STBC) for an OFDM system in dispersive fading channels is investigated in this paper. The good error-correcting capability of BTC and the large diversity gain of STBC can be achieved simultaneously. The resulting receiver outperforms the iterative convolutional turbo receiver with the maximum a posteriori probability expectation maximization (MAP-EM) algorithm. Because of its ability to perform the encoding and decoding processes in parallel, the proposed system is easy to implement in real time.
On the calculation of the minimax-converse of the channel coding problem
Elkayam, Nir; Feder, Meir
2015-01-01
A minimax converse has been suggested for the general channel coding problem by Polyanskiy et al. This converse comes in two flavors. The first flavor is generally used for the analysis of the coding problem with non-vanishing error probability and provides an upper bound on the rate given the error probability. The second flavor fixes the rate and provides a lower bound on the error probability. Both converses are given as a min-max optimization problem of an appropriate binary hypothesis testing...
Directory of Open Access Journals (Sweden)
Ser Javier Del
2005-01-01
We consider the case of two correlated sources whose correlation has memory, modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by one source over an additive white Gaussian noise (AWGN) channel when the output of the other source is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the transmitted source. The joint decoder uses an iterative scheme in which the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.
Development, verification and validation of the fuel channel behaviour computer code FACTAR
Energy Technology Data Exchange (ETDEWEB)
Westbye, C J; Brito, A C; MacKinnon, J C; Sills, H E; Langman, V J [Ontario Hydro, Toronto, ON (Canada)
1996-12-31
FACTAR (Fuel And Channel Temperature And Response) is a computer code developed to simulate the transient thermal and mechanical behaviour of 37-element or 28-element fuel bundles within a single CANDU fuel channel for moderate loss of coolant accident conditions, including transition and large break LOCAs (loss of coolant accidents) with emergency coolant injection assumed available. FACTAR's predictions of fuel temperature and sheath failure times are used in subsequent assessments of fission product releases and fuel string expansion. This paper discusses the origin and development history of FACTAR, presents the mathematical models and solution technique and the detailed quality assurance procedures that are followed during development, and outlines the future development of the code. (author). 27 refs., 3 figs.
On Multiple Users Scheduling Using Superposition Coding over Rayleigh Fading Channels
Zafar, Ammar
2013-02-20
In this letter, numerical results are provided to analyze the gains of multiple-user scheduling via superposition coding with successive interference cancellation, in comparison with conventional single-user scheduling, in Rayleigh block-fading broadcast channels. The information-theoretic optimal power, rate and decoding order allocation for the superposition coding scheme are considered, and the corresponding histogram of the optimal number of scheduled users is evaluated. Results show that at optimality there is a high probability that only two or three users are scheduled per channel transmission block. Numerical results for the gains of multiple-user scheduling in terms of long-term throughput under hard and proportional fairness, as well as for fixed merit weights for the users, are also provided. These results show that the performance gain of multiple-user scheduling over single-user scheduling increases with the total number of users in the network, and can exceed 10% for a high number of users.
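The rate expressions behind two-user superposition coding with successive interference cancellation (SIC) can be written out in a toy sketch (the gains and power split below are our illustrative values, not the letter's optimal allocation): the weak user decodes its layer treating the strong user's power as interference, while the strong user first cancels the weak user's layer.

```python
import numpy as np

def sc_rates(g_strong, g_weak, p_strong, p_weak, noise=1.0):
    """Achievable rates (bit/s/Hz) for two-user superposition coding with SIC."""
    # Weak user: decodes its own layer, sees the strong user's power as noise
    r_weak = np.log2(1 + g_weak * p_weak / (noise + g_weak * p_strong))
    # Strong user: decodes and cancels the weak layer first, then its own
    r_strong = np.log2(1 + g_strong * p_strong / noise)
    return r_strong, r_weak

rs, rw = sc_rates(g_strong=4.0, g_weak=0.5, p_strong=2.0, p_weak=8.0)
print(f"strong user: {rs:.2f} bit/s/Hz, weak user: {rw:.2f} bit/s/Hz")
```

Sweeping the power split between the two layers traces out the boundary of the degraded broadcast channel's rate region, which is the object the letter's optimal power, rate and decoding-order allocation operates on.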
Rice, R. F.
1974-01-01
End-to-end system considerations involving channel coding and data compression are reported which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft. In addition to presenting new and potentially significant system considerations, this report attempts to fill a need for a comprehensive tutorial which makes much of this very subject accessible to readers whose disciplines lie outside of communication theory.
An upper bound for codes for the noisy two-access binary adder channel
Tilborg, van H.C.A.
1986-01-01
Using earlier methods, a combinatorial upper bound is derived for $|C| \cdot |D|$, where $(C,D)$ is a $\delta$-decodable code pair for the noisy two-access binary adder channel. Asymptotically, this bound reduces to $R_1 = R_2 \leq \frac{3}{2} + e \log_2 e - \left(\frac{1}{2} + e\right) \log_2 (1 + 2e) = \frac{1}{2} - e + \dots$
Dynamical coupled channel approach to omega meson production
Energy Technology Data Exchange (ETDEWEB)
Mark Paris
2007-09-10
The dynamical coupled channel approach of Matsuyama, Sato, and Lee is used to study $\omega$-meson production induced by pions and photons scattering from the proton. The parameters of the model are fixed in a two-channel $(\omega N, \pi N)$ calculation for the non-resonant and resonant contributions to the $T$ matrix by fitting the available unpolarized differential cross section data. The polarized photon beam asymmetry is predicted and compared to existing data.
Capacity-Approaching Superposition Coding for Optical Fiber Links
DEFF Research Database (Denmark)
Estaran Tolosa, Jose Manuel; Zibar, Darko; Tafur Monroy, Idelfonso
2014-01-01
We report on the first experimental demonstration of superposition coded modulation (SCM) for polarization-multiplexed coherent-detection optical fiber links. The proposed coded modulation scheme is combined with phase-shifted bit-to-symbol mapping (PSM) in order to achieve geometric and passive shaping (…-SCM), and is employed in the framework of bit-interleaved coded modulation with iterative decoding (BICM-ID) for forward error correction. The fiber transmission system is characterized in terms of signal-to-noise ratio for the back-to-back case and correlated with simulated results for ideal transmission over an additive white Gaussian noise channel. Thereafter, successful demodulation and decoding after dispersion-unmanaged transmission over 240 km of standard single-mode fiber of dual-polarization 6-Gbaud 16-, 32- and 64-ary SCM-PSM is experimentally demonstrated.
Improving 3D-Turbo Code's BER Performance with a BICM System over Rayleigh Fading Channel
Directory of Open Access Journals (Sweden)
R. Yao
2016-12-01
Classical turbo codes suffer from a high error floor due to their small Minimum Hamming Distance (MHD). The newly-proposed 3D-Turbo code can effectively increase the MHD and achieve a lower error floor by adding a rate-1 post encoder: part of the parity bits from the classical turbo encoder are further encoded through the post encoder. In this paper, a novel Bit-Interleaved Coded Modulation (BICM) system is proposed by combining rotated-mapping Quadrature Amplitude Modulation (QAM) and a 3D-Turbo code to improve the Bit Error Rate (BER) performance of 3D-Turbo codes over a Rayleigh fading channel. A key-bit protection scheme and a Two-Dimension (2D) iterative soft demodulating-decoding algorithm are developed for the proposed BICM system. Simulation results show that the proposed system can obtain about a 0.8-1.0 dB gain at a BER of 10^{-6}, compared with the existing BICM system with Gray-mapping QAM.
An Efficient Code-Timing Estimator for DS-CDMA Systems over Resolvable Multipath Channels
Directory of Open Access Journals (Sweden)
Jian Li
2005-04-01
We consider the problem of training-based code-timing estimation for the asynchronous direct-sequence code-division multiple-access (DS-CDMA) system. We propose a modified large-sample maximum-likelihood (MLSML) estimator that can be used for code-timing estimation for DS-CDMA systems over resolvable multipath channels in closed form. Simulation results show that MLSML provides a high correct acquisition probability and high estimation accuracy. Simulation results also show that MLSML can have very good near-far resistance, due to employing a data model similar to that used for adaptive array processing, where strong interferences can be suppressed.
Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2011-01-01
In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each node of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e., the received quality of the video transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes require a lower source coding rate, so they can allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding means that such nodes can transmit at lower power, which both increases battery life and reduces interference to other nodes. Two optimization criteria are considered: one minimizes the average video distortion of the nodes, and the other minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem is a mixed-integer optimization task and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
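The mixed-integer structure described above (continuous powers, discrete rate indices) can be sketched with a bare-bones Particle Swarm Optimization in which the rate dimensions are rounded to the nearest index before evaluation. The distortion model, the rate table, and all parameters below are illustrative stand-ins, not the paper's actual models:

```python
import random

# Hypothetical setup: 3 nodes, continuous power in [0.1, 1.0] W,
# discrete (source, channel) rate pairs indexed 0..3. The distortion
# model is a toy stand-in, not the paper's rate-distortion model.
RATES = [(0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (0.9, 0.1)]

def distortion(power, rate_idx):
    # Toy model: more power and more channel coding -> less distortion.
    src, chan = RATES[rate_idx]
    return 1.0 / (power * (0.5 + chan)) + 0.2 / src

def avg_distortion(x):
    # x = [p1, r1, p2, r2, p3, r3]; rate dimensions rounded to integers.
    total = 0.0
    for i in range(0, len(x), 2):
        p = min(max(x[i], 0.1), 1.0)
        r = min(max(int(round(x[i + 1])), 0), len(RATES) - 1)
        total += distortion(p, r)
    return total / (len(x) // 2)

def pso(n_particles=30, iters=200):
    dim = 6
    lo = [0.1, 0] * 3
    hi = [1.0, 3] * 3
    swarm = [[random.uniform(lo[d], hi[d]) for d in range(dim)]
             for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [s[:] for s in swarm]
    gbest = min(pbest, key=avg_distortion)
    for _ in range(iters):
        for i, s in enumerate(swarm):
            for d in range(dim):
                # Standard inertia + cognitive + social velocity update.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - s[d])
                             + 1.5 * random.random() * (gbest[d] - s[d]))
                s[d] = min(max(s[d] + vel[i][d], lo[d]), hi[d])
            if avg_distortion(s) < avg_distortion(pbest[i]):
                pbest[i] = s[:]
        gbest = min(pbest + [gbest], key=avg_distortion)
    return gbest

best = pso()
print(avg_distortion(best))
```

Rounding inside the fitness function is one common way to let a continuous-space PSO handle discrete variables; the paper's actual handling of the mixed-integer constraint may differ.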
RBMK fuel channel blockage analysis by MCNP5, DRAGON and RELAP5-3D codes
International Nuclear Information System (INIS)
Parisi, C.; D'Auria, F.
2007-01-01
The aim of this work was to perform precise criticality analyses with the Monte Carlo code MCNP5 for a Fuel Channel (FC) flow blockage accident, considering as the calculation domain a single FC and a 3×3 lattice of RBMK cells. Boundary conditions for the MCNP5 input were derived from a previous transient calculation with the state-of-the-art codes HELIOS/RELAP5-3D. In a preliminary phase, suitable MCNP5 models of a single cell and of a small lattice of RBMK cells were set up; criticality analyses were performed at reference conditions for 2.0% and 2.4% enriched fuel. These analyses were compared with results obtained by the University of Pisa (UNIPI) using the deterministic transport code DRAGON and with results obtained by the NIKIET Institute using MCNP4C. Then, the changes of the main physical parameters (e.g., fuel and water/steam temperature, water density, graphite temperature) at different time intervals of the FC blockage transient were evaluated by a RELAP5-3D calculation. This information was used to set up further MCNP5 inputs. Criticality analyses were performed for the different systems (single channel and lattice) at those transient states, yielding global criticality versus transient time. Finally, the weight of each parameter's change (fuel overheating and channel voiding) on global criticality was assessed. The results showed that the reactivity of a blocked FC is always negative; nevertheless, when the effect of neighboring channels is considered, the global reactivity trend reverses, becoming slightly positive or not changing at all, in inverse relation to the fuel enrichment. (author)
Throughput and Delay Analysis of HARQ with Code Combining over Double Rayleigh Fading Channels
Chelli, Ali
2018-01-15
This paper proposes the use of hybrid automatic repeat request (HARQ) with code combining (HARQ-CC) to offer reliable communications over double Rayleigh channels. The double Rayleigh fading channel is of particular interest to vehicle-to-vehicle communication systems as well as amplify-and-forward relaying and keyhole channels. This work studies the performance of HARQ-CC over double Rayleigh channels from an information theoretic perspective. Analytical approximations are derived for the
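The double Rayleigh model above (the cascaded product of two independent Rayleigh gains, as arises in keyhole and amplify-and-forward settings) can be sketched with a minimal Monte-Carlo experiment; the SNR, rate, and sample counts below are arbitrary illustrative choices:

```python
import random, math

def rayleigh(sigma=1.0):
    # Magnitude of a complex Gaussian with i.i.d. N(0, sigma^2) parts.
    return math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))

def outage_prob(double=True, snr_db=10.0, rate=1.0, n=200_000):
    # Estimate P[log2(1 + SNR*|h|^2) < rate] for a Rayleigh or a
    # double (cascaded) Rayleigh channel gain h.
    snr = 10 ** (snr_db / 10)
    thresh = (2 ** rate - 1) / snr
    bad = 0
    for _ in range(n):
        h = rayleigh(0.5 ** 0.5)        # unit average power per hop
        if double:
            h *= rayleigh(0.5 ** 0.5)   # cascaded (keyhole) second hop
        if h * h < thresh:
            bad += 1
    return bad / n

random.seed(1)
p_single = outage_prob(double=False)
p_double = outage_prob(double=True)
print(p_single, p_double)  # double Rayleigh fades deeper -> higher outage
```

The deeper fades of the cascaded channel are exactly why retransmission schemes such as HARQ-CC are attractive in this setting.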
Abediseid, Walid
2012-01-01
complexity of sphere decoding for the quasi-static, lattice space-time (LAST) coded MIMO channel. Specifically, we derive an upper bound on the tail distribution of the decoder's computational complexity. We show that when the computational complexity exceeds
A thermal-hydraulic code for transient analysis in a channel with a rod bundle
International Nuclear Information System (INIS)
Khodjaev, I.D.
1995-01-01
The paper presents a model of transient vapor-liquid flow in a channel with a rod bundle in the core of a nuclear power plant. The computer code has been developed to predict dryout and post-dryout heat transfer in rod bundles of a nuclear reactor core under loss-of-coolant accidents. Economizer, bubble, dispersed-annular and dispersed regimes are taken into account. The computer code provides a three-field representation of two-phase flow in the dispersed-annular regime; the three fields are continuous vapor, continuous liquid film, and entrained liquid drops. For the description of the dispersed flow regime, a two-temperature, single-velocity model is used. Relative droplet motion is taken into account for droplet-to-vapor heat transfer. The conservation equations for each regime are solved using an effective numerical technique, which makes it possible to determine the distribution of flow parameters along the perimeter of the fuel elements. Comparison of the calculated results with experimental data shows that the computer code adequately describes the complex processes in a channel with a rod bundle during an accident.
Substrate channel in nitrogenase revealed by a molecular dynamics approach.
Smith, Dayle; Danyal, Karamatullah; Raugei, Simone; Seefeldt, Lance C
2014-04-15
Mo-dependent nitrogenase catalyzes the biological reduction of N2 to two NH3 molecules at FeMo-cofactor buried deep inside the MoFe protein. Access of substrates, such as N2, to the active site is likely restricted by the surrounding protein, requiring substrate channels that lead from the surface to the active site. Earlier studies on crystallographic structures of the MoFe protein have suggested three putative substrate channels. Here, we have utilized submicrosecond atomistic molecular dynamics simulations to allow the nitrogenase MoFe protein to explore its conformational space in an aqueous solution at physiological ionic strength, revealing a putative substrate channel. The viability of this observed channel was tested by examining the free energy of passage of N2 from the surface through the channel to FeMo-cofactor, resulting in the discovery of a very low energy barrier. These studies point to a viable substrate channel in nitrogenase that appears during thermal motions of the protein in an aqueous environment and that approaches a face of FeMo-cofactor earlier implicated in substrate binding.
Coded throughput performance simulations for the time-varying satellite channel. M.S. Thesis
Han, Li
1995-01-01
The design of a reliable satellite communication link involving data transfer from a small, low-orbit satellite to a ground station through a geostationary satellite was examined. In such a scenario, the received signal power to noise density ratio increases as the transmitting low-orbit satellite comes into view, and then decreases as it departs, resulting in a short-duration, time-varying communication link. The optimal values of the small satellite antenna beamwidth, signaling rate, modulation scheme and the theoretical link throughput (in bits per day) were determined. The goal of this thesis is to choose a practical coding scheme which maximizes the daily link throughput while satisfying a prescribed probability of error requirement. We examine the throughput of both fixed-rate and variable-rate concatenated forward error correction (FEC) coding schemes for the additive white Gaussian noise (AWGN) channel, and then examine the effect of radio frequency interference (RFI) on the best coding scheme among them. Interleaving is used to mitigate degradation due to RFI. It was found that the variable-rate concatenated coding scheme could achieve 74 percent of the theoretical throughput, equivalent to 1.11 Gbit/day based on the cutoff rate R_0. For comparison, 87 percent is achievable in the AWGN-only case.
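The quoted figures imply a cutoff-rate bound of roughly 1.5 Gbit/day; a simple back-of-envelope check:

```python
# Back-of-envelope check of the abstract's numbers: 74% of the
# cutoff-rate throughput bound equals 1.11 Gbit/day, so the bound itself
# is about 1.5 Gbit/day, and the AWGN-only scheme reaches 87% of it.
achieved_fraction = 0.74
achieved_bits_per_day = 1.11e9
theoretical = achieved_bits_per_day / achieved_fraction
print(theoretical / 1e9, "Gbit/day cutoff-rate bound")        # ~1.5
awgn_fraction = 0.87
print(awgn_fraction * theoretical / 1e9, "Gbit/day AWGN-only")  # ~1.3
```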
On the performance of diagonal lattice space-time codes for the quasi-static MIMO channel
Abediseid, Walid; Alouini, Mohamed-Slim
2013-01-01
There has been tremendous work done on designing space-time codes for the quasi-static multiple-input multiple-output (MIMO) channel. All coding designs to date focus on either high performance, high rates, or low-complexity encoding and decoding
Cooperative Orthogonal Space-Time-Frequency Block Codes over a MIMO-OFDM Frequency Selective Channel
Directory of Open Access Journals (Sweden)
M. Rezaei
2016-03-01
Full Text Available In this paper, a cooperative algorithm to improve orthogonal space-time-frequency block codes (OSTFBC) in frequency-selective channels for 2×1, 2×2, 4×1 and 4×2 MIMO-OFDM systems is presented. The algorithm is formed of three nodes (a source node, a relay node and a destination node) and is implemented in two stages. During the first stage, the destination and relay antennas receive the symbols sent by the source antennas; the destination node and the relay node obtain decision variables from the received signals through the space-time-frequency decoding process. During the second stage, the relay node transmits its decision variables to the destination node. Due to the increased diversity in the proposed algorithm, the decision variables at the destination node are augmented, improving system performance. The bit error rate of the proposed algorithm at high SNR is estimated considering BPSK modulation. The simulation results show that cooperative orthogonal space-time-frequency block coding improves system performance and reduces the BER in a frequency-selective channel.
Indian Academy of Sciences (India)
Shannon limit of the channel. Among the earliest discovered codes that approach the Shannon limit were the low-density parity-check (LDPC) codes. The term low density arises from the property of the parity-check matrix defining the code. We will now define this matrix and the role that it plays in decoding. 2. Linear Codes.
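As a toy illustration of a parity-check matrix and the syndromes it produces, here is a (7,4) Hamming code; its H matrix is not literally low-density at this size, but the syndrome mechanics that LDPC decoding exploits are the same:

```python
import itertools

# Parity-check matrix of the (7,4) Hamming code: column j is the
# binary representation of j+1, so a single-bit error's syndrome
# directly names the flipped position.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    # s = H @ word (mod 2); the all-zero syndrome means "valid codeword".
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

codewords = [w for w in itertools.product([0, 1], repeat=7)
             if syndrome(w) == [0, 0, 0]]
print(len(codewords))  # 16 = 2^4 codewords -> 4 information bits

# A single bit flip yields a nonzero syndrome, which a decoder uses
# to locate and correct the error.
w = list(codewords[5])
w[2] ^= 1
print(syndrome(w))  # [1, 1, 0] = column 2 of H
```

An LDPC code works the same way, but H is large and sparse, and decoding is done iteratively by message passing rather than by syndrome lookup.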
International Nuclear Information System (INIS)
Liu, X.J.; Yang, T.; Cheng, X.
2014-01-01
To analyze the local thermal-hydraulic parameters in the supercritical water reactor fuel qualification test (SCWR-FQT) fuel bundle with a flow blockage, a coupled sub-channel and system code system is developed in this paper. Both the sub-channel code and the system code are adapted to transient analysis of the SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the coupled code, the whole-system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal-hydraulic parameters are predicted by the sub-channel code COBRA-SC. Sensitivity analyses are carried out in both the ATHLET-SC and COBRA-SC codes to identify appropriate models for describing the flow blockage phenomenon in the test loop. Some measures to mitigate the accident consequences are also assessed to demonstrate their effectiveness. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test, and that the peak cladding temperature caused by blockage in the fuel assembly can be reduced effectively by the safety measures of the SCWR-FQT. (author)
Zhao, Yaqin; Zhong, Xin; Wu, Di; Zhang, Ye; Ren, Guanghui; Wu, Zhilu
2013-09-01
Optical code-division multiple access (OCDMA) systems usually allocate orthogonal or quasi-orthogonal codes to the active users. When transmitting through an atmospheric scattering channel, the coding pulses are broadened and the orthogonality of the codes is degraded. In the truly asynchronous case, namely when both the chips and the bits are asynchronous among the active users, the pulse broadening significantly affects system performance. In this paper, we evaluate the performance of a 2D asynchronous hard-limiting wireless OCDMA system over an atmospheric scattering channel. The probability density function of the multiple access interference in the truly asynchronous case is given. The bit error rate decreases as the ratio of the chip period to the root-mean-square delay spread increases, and the channel limits the achievable bit rate to different levels as the chip period varies.
Investigation of flow blockage in a fuel channel with the ASSERT subchannel code
International Nuclear Information System (INIS)
Harvel, G.D.; Dam, R.; Soulard, M.
1996-01-01
On behalf of New Brunswick Power, a study was undertaken to determine whether safe operation of a CANDU-6 reactor can be maintained at low reactor powers with debris present in the fuel channels. In particular, the concern was to address whether a small blockage due to the presence of debris would cause a significant reduction in dryout powers, and hence to determine the safe operating power level that maintains dryout margins. In this work the NUCIRC [1,2], ASSERT-IV [3], and ASSERT-PV [3] computer codes are used in conjunction with a pool boiling model to determine the safe operating power level that maintains dryout safety margins. NUCIRC is used to provide channel boundary conditions for the ASSERT codes and to select a representative channel for analysis. The pool boiling model is provided as a limiting lower-bound analysis. As expected, the ASSERT results predict higher CHF ratios than the pool boiling model. In general, the ASSERT results show that as the model approaches a complete blockage, its predictions reduce toward, but do not reach, those of the pool boiling model. (author)
New approach to derive linear power/burnup history input for CANDU fuel codes
International Nuclear Information System (INIS)
Lac Tang, T.; Richards, M.; Parent, G.
2003-01-01
The fuel element linear power/burnup history is a required input for the ELESTRES code in order to simulate CANDU fuel behavior during normal operating conditions, and also to provide input for the accident analysis codes ELOCA and SOURCE. The purpose of this paper is to present a new approach to derive 'true', or at least more realistic, linear power/burnup histories. Such an approach can be used to recreate any typical bundle power history if only a single pair of instantaneous values of bundle power and burnup, together with the position in the channel, is known. The histories obtained could be useful for performing more realistic simulations in safety analyses for cases where the reference (overpower) history is not appropriate. (author)
Joint nonbinary low-density parity-check codes and modulation diversity over fading channels
Shi, Zhiping; Li, Tiffany Jing; Zhang, Zhongpei
2010-09-01
A joint exploitation of coding and diversity techniques to achieve efficient, reliable wireless transmission is considered. The system comprises a powerful non-binary low-density parity-check (LDPC) code that is soft-decoded to supply strong error protection, a quadrature amplitude modulation (QAM) mapper that directly takes in the non-binary LDPC symbols, and a modulation diversity operator that provides power- and bandwidth-efficient diversity gain. By relaxing the rate of the modulation diversity rotation matrices to below 1, we show that a better rate allocation can be arranged between the LDPC code and the modulation diversity, which brings significant performance gains over previous systems. To facilitate the design and evaluation of the relaxed modulation diversity rotation matrices, three practical design methods based on a set of criteria are given, and their pairwise error rates are analyzed. Using EXIT charts, we investigate the convergence between the demodulator and the decoder, and a rate-matching method based on the EXIT analysis is presented. Through analysis and simulations, we show that our strategies are very effective in combating random fading and strong noise on fading channels.
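The modulation diversity idea, spreading each symbol over two fading realizations via a rotation, can be illustrated with a 2x2 rotation of 4-PAM symbol pairs. The rotation angle below is only an illustrative irrational-slope choice, not one of the paper's designed matrices:

```python
import math

def rotate(pair, theta):
    # 2x2 real rotation applied to a pair of constellation symbols;
    # after rotation, each coordinate carries information about both symbols.
    c, s = math.cos(theta), math.sin(theta)
    x, y = pair
    return (c * x - s * y, s * x + c * y)

# 4-PAM alphabet; the angle is a design parameter (this irrational-slope
# value is only illustrative; optimized angles come from product-distance
# criteria in the modulation diversity literature).
theta = 0.5 * math.atan(2.0)
syms = [-3, -1, 1, 3]

# Diversity check: any two distinct rotated pairs differ in BOTH
# coordinates, so a deep fade wiping out one coordinate alone cannot
# merge two constellation points.
points = [rotate((a, b), theta) for a in syms for b in syms]
ok = all(abs(p[0] - q[0]) > 1e-9 and abs(p[1] - q[1]) > 1e-9
         for i, p in enumerate(points) for q in points[i + 1:])
print(ok)  # True
```

In a real system the two coordinates are then interleaved onto independently fading channel uses, which is what converts this geometric property into diversity gain.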
Benchmark evaluation of the RELAP code to calculate boiling in narrow channels
International Nuclear Information System (INIS)
Kunze, J.F.; Loyalka, S.K.; McKibben, J.C.; Hultsch, R.; Oladiran, O.
1990-01-01
The RELAP code has been tested with benchmark experiments (such as the loss-of-fluid test experiments at the Idaho National Engineering Laboratory) at high pressures and temperatures characteristic of those encountered in loss-of-coolant accidents (LOCAs) in commercial light water power reactors. Application of RELAP to the LOCA analysis of a low-pressure (<7 atm), low-temperature (<100 °C), plate-type research reactor, such as the University of Missouri Research Reactor (MURR), the high-flux breeder reactor, the high-flux isotope reactor, and the Advanced Test Reactor, requires resolution of questions involving overextrapolation to very low pressures and temperatures, and calculation of the pulsed boiling/reflood conditions in the narrow rectangular cross-section channels (typically 2 mm thick) of the plate fuel elements. The practical concern is that plate fuel temperatures predicted by RELAP5 (MOD2, version 3) during the pulsed boiling period can reach temperatures high enough to cause plate (clad) weakening, though not melting. Since an experimental benchmark of RELAP under such LOCA conditions is not available, and since such conditions present substantial challenges to the code, it is important to verify the code predictions. The comparison of the pulsed boiling experiments with the RELAP calculations involves both visual observations of void fraction versus time and measurements of temperatures near the fuel plate surface.
Effective channel approach to nuclear scattering at high energies
International Nuclear Information System (INIS)
Rule, D.W.
1975-01-01
The description of high energy nuclear reactions is considered within the framework of the effective channel approach. A variational procedure is used to obtain an expression for the Green's function in the effective channel, which includes the average fluctuation potential, average energy, and an additional term arising from the non-commutability of the kinetic energy operator and the effective target wave function. The resulting expression for the effective channel, containing one variational parameter, is used to obtain the coupling potential. The resulting formulation is applied to the elastic scattering of 1 GeV protons by ⁴He nuclei. A simple Gaussian form is used for the spin-isospin averaged proton-nucleon interaction. The variational parameter in the effective channel wave function is fixed a posteriori via the total p-⁴He cross section. The effect of the coupling to the effective channel is demonstrated, as well as the effect of each term in the coupled equation for this channel. The calculated elastic cross sections were compared to both the recent data from Saclay and the earlier Brookhaven data for the 1-GeV p-⁴He elastic scattering cross section. Using proton-nucleus elastic scattering experiments to study the proton-nucleon elastic scattering amplitude is discussed. The main purpose of our study is to investigate the effects on the cross section of varying, within its estimated range of uncertainty, each parameter which enters into the coupled equations. The magnitude of these effects was found to be large enough to conclude that any effects due to dynamical correlations would be obscured by the uncertainties in the input parameters
Development and assessment of a sub-channel code applicable for trans-critical transient of SCWR
International Nuclear Information System (INIS)
Liu, X.J.; Yang, T.; Cheng, X.
2013-01-01
Highlights: • A new sub-channel code COBRA-SC for SCWR is developed. • A pseudo two-phase method is employed to enable trans-critical transient calculation. • Good suitability of COBRA-SC is demonstrated by preliminary assessment. • The calculation results of COBRA-SC agree well with the ATHLET code. -- Abstract: In the last few years, extensive R&D activities have been launched covering various aspects of the supercritical water-cooled reactor (SCWR), especially thermal-hydraulic analysis. A sub-channel code plays an indispensable role in predicting the detailed thermal-hydraulic behavior of the SCWR fuel assembly. This paper develops a new version of the sub-channel code COBRA-SC based on the previous COBRA-IV code. Supercritical water properties and heat transfer/pressure drop correlations for supercritical pressure are implemented in this code. Moreover, in order to simulate the trans-critical transient (in which the pressure decreases from supercritical to subcritical), a pseudo two-phase method is employed in the COBRA-SC code. This is accomplished by introducing a virtual two-phase region near the pseudo-critical line, so that a smooth transition of void fraction can be realized. In addition, several heat transfer correlations for conditions just below the critical point are introduced into the code to capture the heat transfer behavior during the trans-critical transient. Experimental data from simple geometries, e.g. a single tube and a small rod bundle, are used to validate and evaluate the newly developed COBRA-SC code. The predicted results show good agreement with the experimental data, demonstrating the feasibility of this code for SCWR conditions. A code-to-code comparison between COBRA-SC and ATHLET for a blowdown transient of a small fuel assembly is also presented and discussed in this paper
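As a hedged illustration of why a virtual two-phase band helps, the sketch below maps enthalpy to a pseudo void fraction that varies smoothly across the pseudo-critical line. The function, band width, and all numbers are invented for illustration and are not COBRA-SC's actual treatment:

```python
def smooth_void_fraction(h, h_pc, width):
    # Illustrative only: a pseudo "void fraction" rising smoothly from
    # 0 to 1 across a virtual two-phase band of the given width centred
    # on the pseudo-critical enthalpy h_pc. A smoothstep profile keeps
    # the transition C1-continuous, avoiding a step change in properties
    # at the pseudo-critical line during a trans-critical transient.
    x = (h - (h_pc - width / 2)) / width
    x = min(max(x, 0.0), 1.0)
    return x * x * (3 - 2 * x)   # smoothstep: zero slope at both ends

# Below the band centre the pseudo void fraction is below 0.5.
print(smooth_void_fraction(2000.0, 2100.0, 400.0))
```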
International Nuclear Information System (INIS)
Bujan, A.; Adamik, V.; Misak, J.
1986-01-01
A brief description is presented of the extension of the SICHTA-83 computer code, which analyzes the thermal history of the fuel channel during large LOCAs, to model the mechanical behaviour of fuel element cladding. The new version of the code treats heat transfer in the fuel-cladding gap in more detail, because it also accounts for the mechanical (plastic) deformation of the cladding and for fuel-cladding interaction (the magnitude of the contact pressure). The change in pressure of the gas filling of the fuel element is also taken into account, a mechanical criterion for cladding failure is considered, and the degree of blockage of the coolant flow cross-section in the fuel channel is evaluated. A model computation of a LOCA in a WWER-440 compares the new SICHTA-85/MOD 1 code with the results of the original SICHTA-83 version. (author)
Measuring propagation delay over a coded serial communication channel using FPGAs
International Nuclear Information System (INIS)
Jansweijer, P.P.M.; Peek, H.Z.
2011-01-01
Measurement and control applications are increasingly using distributed system technologies. In such applications, which may be spread over large distances, it is often necessary to synchronize system timing and know with great precision the time offsets between parts of the system. Measuring the propagation delay over a coded serial communication channel using serializer/deserializer (SerDes) functionality in FPGAs is described. The propagation delay between transmitter and receiver is measured with a resolution of a single unit interval (i.e. a serial link running at 3.125 Gbps provides a 320 ps resolution). The technique has been demonstrated to work over 100 km fibre to verify the feasibility for application in the future KM3NeT telescope.
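The quoted resolution follows directly from the unit interval (the duration of one serial bit). A quick check, with the 100 km fibre delay computed under an assumed group index of about 1.468 (an assumption, not a figure from the abstract):

```python
def unit_interval_ps(line_rate_bps):
    # One unit interval (UI) is the duration of a single serial bit;
    # it sets the resolution of a SerDes-based delay measurement.
    return 1e12 / line_rate_bps

print(unit_interval_ps(3.125e9))  # 320.0 ps, as quoted for 3.125 Gbps

# Illustrative: over 100 km of fibre (group index ~1.468, assumed),
# the one-way delay is ~0.49 ms, i.e. on the order of 1.5 million
# unit intervals at 3.125 Gbps.
delay_s = 100e3 * 1.468 / 299_792_458
print(delay_s / (unit_interval_ps(3.125e9) * 1e-12))
```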
L-type calcium channels refine the neural population code of sound level
Grimsley, Calum Alex; Green, David Brian
2016-01-01
The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536
On the equivalence of Ising models on ‘small-world’ networks and LDPC codes on channels with memory
International Nuclear Information System (INIS)
Neri, Izaak; Skantzos, Nikos S
2014-01-01
We demonstrate the equivalence between thermodynamic observables of Ising spin-glass models on small-world lattices and the decoding properties of error-correcting low-density parity-check codes on channels with memory. In particular, the self-consistent equations for the effective field distributions in the spin-glass model within the replica symmetric ansatz are equivalent to the density evolution equations for Gilbert–Elliott channels. This relationship allows us to present a belief-propagation decoding algorithm for finite-state Markov channels and to compute its performance at infinite block lengths from the density evolution equations. We show that loss of reliable communication corresponds to a first-order phase transition from a ferromagnetic phase to a paramagnetic phase in the spin-glass model. The critical noise levels derived for Gilbert–Elliott channels are in very good agreement with existing results in coding theory. Furthermore, we use our analysis to derive critical noise levels for channels with both memory and asymmetry in the noise. The resulting phase diagram shows that the combination of asymmetry and memory in the channel allows for high critical noise levels: in particular, we show that successful decoding is possible at any noise level of the bad channel when the good channel is good enough. Theoretical results at infinite block lengths using the density evolution equations are compared with average error probabilities calculated from a practical implementation of the corresponding decoding algorithms at finite block lengths. (paper)
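A Gilbert–Elliott channel, the two-state Markov channel analyzed above, can be simulated in a few lines; the transition and error probabilities below are arbitrary illustrative values:

```python
import random

def gilbert_elliott(n_bits, p_gb, p_bg, e_good, e_bad, seed=0):
    # Two-state Markov (Gilbert-Elliott) channel: a "good" state with a
    # low bit-flip probability e_good and a "bad" state with a high one
    # e_bad; p_gb and p_bg are the good->bad and bad->good transition
    # probabilities that give the channel its memory.
    rng = random.Random(seed)
    state_bad = False
    flips = []
    for _ in range(n_bits):
        state_bad = rng.random() < (1 - p_bg if state_bad else p_gb)
        err = e_bad if state_bad else e_good
        flips.append(rng.random() < err)
    return flips

flips = gilbert_elliott(100_000, p_gb=0.01, p_bg=0.1, e_good=0.001, e_bad=0.3)
# The average error rate lies between e_good and e_bad, weighted by the
# stationary probability p_gb / (p_gb + p_bg) of the bad state.
print(sum(flips) / len(flips))
```

Density evolution for such a channel tracks message distributions jointly with this hidden state, which is exactly where the spin-glass correspondence in the paper enters.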
Mimicking multichannel scattering with single-channel approaches
Grishkevich, Sergey; Schneider, Philipp-Immanuel; Vanne, Yulian V.; Saenz, Alejandro
2010-02-01
The collision of two atoms is an intrinsic multichannel (MC) problem, as becomes especially obvious in the presence of Feshbach resonances. Due to its complexity, however, single-channel (SC) approximations, which reproduce the long-range behavior of the open channel, are often applied in calculations. In this work the complete MC problem is solved numerically for the magnetic Feshbach resonances (MFRs) in collisions between generic ultracold ⁶Li and ⁸⁷Rb atoms in the ground state and in the presence of a static magnetic field B. The obtained MC solutions are used to test various existing as well as presently developed SC approaches. It was found that many aspects even at short internuclear distances are qualitatively well reflected. This can be used to investigate molecular processes in the presence of an external trap or in many-body systems that can be feasibly treated only within the framework of the SC approximation. The applicability of various SC approximations is tested for a transition to the absolute vibrational ground state around an MFR. The conformance of the SC approaches is explained by the two-channel approximation for the MFR.
An approach to implement virtual channels for flowing magnetic beads
International Nuclear Information System (INIS)
Tang, Shih-Hao; Chiang, Hung-Wei; Hsieh, Min-Chien; Chang, Yen-Di; Yeh, Po-Fan; Tsai, Jui-che; Shieh, Wung-Yang
2014-01-01
This work demonstrates the feasibility of a novel microfluidic system with virtual channels formed by 'walls' of magnetic fields, including collecting channels, transporting channels and function channels. The channels are defined by nickel patterns. Owing to its ferromagnetism, nickel can be magnetized using an external magnetic field; the nickel structures then generate magnetic fields that can either guide or trap magnetic beads. A glass substrate is sandwiched between the liquid containing the magnetic beads and the chip with the nickel structures, preventing the liquid from directly contacting the nickel. In this work, collecting channels, transporting channels and function channels are demonstrated sequentially. For the collecting channels, channels with different shapes are compared. Next, for the transporting channels, we demonstrate that I-, S- and Y-shaped channels can steer magnetic beads smoothly. Finally, for the function channels, a switchable trapping channel implemented with a bistable mechanism performs the passing and blocking of a magnetic bead. (paper)
DEFF Research Database (Denmark)
Cavalcante, Lucas Costa Pereira; Silveira, Luiz F. Q.; Rommel, Simon
2016-01-01
Millimeter wave communications based on photonic technologies have gained increased attention to provide optic fiber-like capacity in wireless environments. However, the new hybrid fiber-wireless channel represents new challenges in terms of signal transmission performance analysis. Traditionally......, such systems use diversity schemes in combination with digital signal processing (DSP) techniques to overcome effects such as fading and inter-symbol interference (ISI). Wavelet Channel Coding (WCC) has emerged as a technique to minimize the fading effects of wireless channels, which is a major challenge...... in systems operating in the millimeter wave regime. This work takes the WCC one step beyond by performance evaluation in terms of bit error probability, over time-varying, frequency-selective multipath Rayleigh fading channels. The adopted propagation model follows the COST207 norm, the main international...
Directory of Open Access Journals (Sweden)
Buzzi Stefano
2006-01-01
Full Text Available The problem of joint channel estimation, equalization, and multiuser detection for a multiantenna DS/CDMA system operating over a frequency-selective fading channel and adopting long aperiodic spreading codes is considered in this paper. First of all, we present several channel estimation and multiuser data detection schemes suited for multiantenna long-code DS/CDMA systems. Then, a multipass strategy, wherein the data detection and the channel estimation procedures exchange information in a recursive fashion, is introduced and analyzed for the proposed scenario. Remarkably, this strategy provides, at the price of some attendant computational complexity increase, excellent performance even when very short training sequences are transmitted, and thus couples together the conflicting advantages of both trained and blind systems, that is, good performance and no wasted bandwidth, respectively. Space-time coded systems are also considered, and it is shown that the multipass strategy provides excellent results for such systems also. Likewise, it is shown that excellent performance is also achieved when each user adopts the same spreading code for all of its transmit antennas. The validity of the proposed procedure is corroborated by both simulation results and analytical findings. In particular, it is shown that adopting the multipass strategy results in a remarkable reduction of the channel estimation mean-square error and of the optimal length of the training sequence.
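The multipass idea above can be illustrated with a deliberately simplified scalar sketch: a flat-fading coefficient is first estimated from a short pilot, data symbols are detected, and the decisions are then fed back as "virtual pilots" to refine the channel estimate. The paper treats the far richer multiantenna long-code DS/CDMA case; all names and parameters below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def multipass_receiver(y, pilots, n_pilot, n_pass=3):
    # pass 0: least-squares channel estimate from the pilots alone
    h_hat = np.vdot(pilots, y[:n_pilot]) / np.vdot(pilots, pilots)
    for _ in range(n_pass):
        # detect BPSK data with the current channel estimate ...
        s_hat = np.concatenate([pilots, np.sign((y[n_pilot:] / h_hat).real)])
        # ... then re-estimate the channel using the decisions as pilots
        h_hat = np.vdot(s_hat, y) / np.vdot(s_hat, s_hat)
    return h_hat, s_hat

n, n_pilot = 200, 4                     # very short training sequence
pilots = np.ones(n_pilot)
data = rng.choice([-1.0, 1.0], n - n_pilot)
s = np.concatenate([pilots, data])
h = 0.8 * np.exp(1j * 0.7)              # unknown flat-fading coefficient
y = h * s + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

h_hat, s_hat = multipass_receiver(y, pilots, n_pilot)
print(abs(h_hat - h))                   # refined estimate is close to h
print(np.mean(s_hat[n_pilot:] != data)) # low symbol error rate
```

Even with only 4 pilot symbols, the decision-directed passes pull the estimation error down toward what a fully trained receiver would achieve, which is the qualitative point of the multipass strategy.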
International Nuclear Information System (INIS)
Thuy, N. N. Q.
2006-01-01
Inappropriately designed inter-channel and inter-system digital communications could initiate common cause failure of multiple channels or multiple systems. Defensive measures were introduced in EPRI report TR-1002835 (Guideline for Performing Defense-in-Depth and Diversity Assessments for Digital Upgrades) to assess, on a deterministic basis, the susceptibility of digital systems architectures to common-cause failures. This paper suggests how this approach could be applied to assess inter-channel and inter-system digital communications from a safety standpoint. The first step of the approach is to systematically identify the so-called 'influence factors' that one end of the data communication path can have on the other. Potential factors to be considered would typically include data values, data volumes and data rates. The second step of the approach is to characterize the ways possible failures of a given end of the communication path could affect these influence factors (e.g., incorrect data values, excessive data rates, time-outs, incorrect data volumes). The third step is to analyze the designed-in measures taken to guarantee independence of the other end. In addition to classical error detection and correction codes, typical defensive measures are one-way data communication, fixed-rate data communication, fixed-volume data communication, and validation of data values. (authors)
Jack, J.; Word, D.; Daniel, W.; Pritchard, S.; Parola, A.; Vesely, B.
2005-05-01
Streams have been heavily impacted by historical and contemporary management practices. Restorations are seen as a way to enhance stream ecosystem integrity, but there are few restoration sites where pre- and post-restoration data are available to assess "success." In 2003, a channelized reach of Wilson Creek (Kentucky, USA) was relocated using a natural channel design approach. We compared the structural and functional responses of the stream pre- and post-restoration/relocation at sites within Wilson and two reference streams. Despite the construction disturbance, water chemistry parameters such as nitrate and turbidity were nearly identical at sampling stations above and below the relocation for 2003-2004. Macroinvertebrate colonization of the relocation sites was rapid, with communities dominated by Cheumatopsyche, Perlesta and Baetis. Assessments of CPOM transport indicated that the new stream channel is more retentive of leaf and woody debris material than the pre-restoration Wilson sites or unrestored reference stream sites. The restoration of suitable habitat and the presence of "source populations" for colonization may compensate for even large-scale (but short-term) construction disturbance. More research is needed to assess the balance between the disturbance impacts of restoration installation and the long-term benefits of stream ecological improvement.
A Proposed Chaotic-Switched Turbo Coding Design and Its Application for Half-Duplex Relay Channel
Directory of Open Access Journals (Sweden)
Tamer H. M. Soliman
2015-01-01
Full Text Available Both reliability and security are important subjects in modern digital communications, each with a variety of subdisciplines. In this paper we introduce a new secure turbo coding system which combines chaotic dynamics with the reliability of turbo coding. As we utilize chaotic maps as a tool for hiding and securing the coding design in the turbo coding system, the proposed system model can provide both data secrecy and data reliability in one process, to combat problems in an insecure and unreliable data channel link. To support our research, we provide different schemes to design a chaotic secure reliable turbo coding system, which we call chaotic-switched turbo coding schemes. In these schemes, the design of the turbo code is switched chaotically under the control of one or more chaotic maps. Extensions of these chaotic-switched turbo coding schemes to half-duplex relay systems are also described. Simulation results for these new secure turbo coding schemes are compared to classical turbo codes with the same coding parameters; the proposed system achieves secure operation with reasonable bit-error-rate performance when made to switch between different puncturing and design configuration parameters, especially at low switching rates.
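The switching mechanism described above can be sketched minimally: a chaotic map (a logistic map here, as a generic stand-in) drives the choice of puncturing pattern applied to the parity streams of a turbo code, so the effective code design changes chaotically over time. A receiver sharing the same map and seed (the secret) reproduces the schedule. The map, patterns, and switching period below are illustrative assumptions, not the paper's exact construction.

```python
# Logistic map in its chaotic regime; the seed acts as a shared secret key.
def logistic(x, r=3.99):
    return r * x * (1.0 - x)

# Two candidate puncturing patterns: which of the two parity bits survives.
PATTERNS = {0: (1, 0), 1: (0, 1)}

def puncture_schedule(seed, n, switch_every=4):
    """Chaotically select a puncturing pattern for each of n bit positions."""
    x, sched = seed, []
    for i in range(n):
        if i % switch_every == 0:       # advance the map at each switch epoch
            x = logistic(x)
        sched.append(PATTERNS[0] if x < 0.5 else PATTERNS[1])
    return sched

sched = puncture_schedule(seed=0.3141, n=8)
print(sched)
```

Because the logistic map is sensitive to its initial condition, an eavesdropper without the exact seed cannot reconstruct the puncturing schedule, while the legitimate receiver regenerates it deterministically.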
Benchmarking of computer codes and approaches for modeling exposure scenarios
International Nuclear Information System (INIS)
Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.
1994-08-01
The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
International Nuclear Information System (INIS)
Taleyarkhan, R.; Lahey, R.T. Jr.; McFarlane, A.F.; Podowski, M.Z.
1988-01-01
The NUFREQ-NP code was modified and set up at Westinghouse, USA, for mixed-fuel-type multi-channel core-wide stability analysis. The resulting code, NUFREQ-NPW, allows for variable axial power profiles between channel groups and can handle mixed fuel types. Various models incorporated into NUFREQ-NPW were systematically compared against the Westinghouse channel stability analysis code MAZDA-NF, for which the mathematical model was developed in an entirely different manner. Excellent agreement was obtained, which verified the thermal-hydraulic modeling and coding aspects. Detailed comparisons were also performed against nuclear-coupled reactor core stability data. All thirteen Peach Bottom-2 EOC-2/3 low-flow stability tests were simulated. A key aspect of code qualification involved the development of a physically based empirical algorithm to correct for the effect of core inlet flow development on subcooled boiling. Various other modeling assumptions were tested and sensitivity studies performed. Good agreement was obtained between NUFREQ-NPW predictions and data; moreover, predictions were generally on the conservative side. The results of detailed direct comparisons with experimental data using the NUFREQ-NPW code have demonstrated that BWR core stability margins are conservatively predicted, and all data trends are captured with good accuracy. The methodology is thus suitable for BWR design and licensing purposes. 11 refs., 12 figs., 2 tabs.
Efficient channel estimation in massive MIMO systems - a distributed approach
Al-Naffouri, Tareq Y.
2016-01-01
We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response for each channel observed
Directory of Open Access Journals (Sweden)
Markku Renfors
2007-12-01
Full Text Available The ever-increasing public interest in location and positioning services has originated a demand for higher-performance global navigation satellite systems (GNSSs). In order to achieve this incremental performance, the estimation of the line-of-sight (LOS) delay with high accuracy is a prerequisite for all GNSSs. The delay lock loops (DLLs) and their enhanced variants (i.e., feedback code tracking loops) are the structures of choice for commercial GNSS receivers, but their performance in severe multipath scenarios is still rather limited. In addition, the new satellite positioning system proposals specify the use of a new modulation, the binary offset carrier (BOC) modulation, which triggers a new challenge in the code tracking stage. Therefore, in order to meet this emerging challenge and to improve the accuracy of the delay estimation in severe multipath scenarios, this paper analyzes feedback as well as feedforward code tracking algorithms and proposes the peak tracking (PT) methods, which are combinations of both feedback and feedforward structures and utilize the inherent advantages of both. We propose and analyze two variants of the PT algorithm: PT with second-order differentiation (Diff2), and PT with the Teager-Kaiser (TK) operator, denoted herein as PT(Diff2) and PT(TK), respectively. In addition to the PT methods, the authors also propose an improved early-late-slope (IELS) multipath elimination technique which is shown to provide very good mean-time-to-lose-lock (MTLL) performance. An implementation of a noncoherent multipath estimating delay locked loop (MEDLL) structure is also presented. We also incorporate an extensive review of the existing feedback and feedforward delay estimation algorithms for direct sequence code division multiple access (DS-CDMA) signals in satellite fading channels, taking into account the impact of binary phase shift keying (BPSK) as well as the newly proposed BOC modulation
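The Teager-Kaiser operator mentioned above, psi[x](n) = x(n)^2 - x(n-1)*x(n+1), is a feedforward nonlinearity that sharpens the main peak of a code-correlation function, helping separate closely spaced multipath components. A toy illustration with an ideal triangular BPSK autocorrelation follows; the delay grid, multipath amplitude, and spacing are illustrative assumptions, not values from the paper.

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator, zero-padded at the edges."""
    y = np.zeros_like(x)
    y[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return y

delays = np.arange(-5.0, 5.5, 0.5)             # delay hypotheses, in chips
tri = lambda tau: np.maximum(1 - np.abs(tau), 0.0)  # ideal BPSK autocorrelation

# Correlation profile: line-of-sight peak at 0 plus one multipath ray at +1.5
corr = tri(delays) + 0.6 * tri(delays - 1.5)
tk = teager_kaiser(corr)
print(delays[np.argmax(tk)])                   # TK output peaks at the LOS delay
```

The squared term concentrates energy at the true correlation peaks while the cross term suppresses the smooth shoulders created by overlapping rays, which is why TK-based feedforward stages are attractive in dense multipath.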
Reliable quantum communication over a quantum relay channel
Energy Technology Data Exchange (ETDEWEB)
Gyongyosi, Laszlo, E-mail: gyongyosi@hit.bme.hu [Quantum Technologies Laboratory, Department of Telecommunications, Budapest University of Technology and Economics, 2 Magyar tudosok krt, Budapest, H-1117, Hungary and Information Systems Research Group, Mathematics and Natural Sciences, Hungarian Ac (Hungary); Imre, Sandor [Quantum Technologies Laboratory, Department of Telecommunications, Budapest University of Technology and Economics, 2 Magyar tudosok krt, Budapest, H-1117 (Hungary)
2014-12-04
We show that reliable quantum communication over an unreliable quantum relay channel is possible. The coding scheme combines results on the superadditivity of quantum channels with efficient quantum coding approaches.
On the performance of diagonal lattice space-time codes for the quasi-static MIMO channel
Abediseid, Walid
2013-06-01
There has been tremendous work on designing space-time codes for the quasi-static multiple-input multiple-output (MIMO) channel. Coding designs to date target high performance, high rates, low-complexity encoding and decoding, or a combination of these criteria. In this paper, we analyze in detail the performance of diagonal lattice space-time codes under lattice decoding. We present both upper and lower bounds on the average error probability. We derive a new closed-form expression for the lower bound using the so-called sphere-packing bound. This bound represents the ultimate performance limit a diagonal lattice space-time code can achieve at any signal-to-noise ratio (SNR). The upper bound is derived using the union bound and demonstrates how the average error probability can be minimized by maximizing the minimum product distance of the code. © 2013 IEEE.
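The design criterion named at the end, the minimum product distance, can be checked by brute force for a small constellation: d_p,min is the minimum over codeword pairs of the product of the nonzero coordinate-wise distances. The rotated 2D example below (rotation angle chosen arbitrarily for illustration) is an assumption of this sketch, not a code from the paper.

```python
from itertools import combinations
import numpy as np

def min_product_distance(codewords):
    """Minimum product distance over all pairs, product over nonzero coords."""
    best = float("inf")
    for x, y in combinations(codewords, 2):
        d = np.abs(np.asarray(x) - np.asarray(y))
        d = d[d > 1e-12]                 # product runs over nonzero coordinates
        best = min(best, float(np.prod(d)))
    return best

# Rotated-QPSK-style 2D example; the rotation spreads each information
# symbol over both coordinates so no pairwise difference has a zero entry.
theta = 0.5 * np.arctan(2.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
base = [np.array([a, b], dtype=float) for a in (-1, 1) for b in (-1, 1)]
codewords = [R @ v for v in base]
print(min_product_distance(codewords))
```

Maximizing this quantity over the rotation (or, in the paper's setting, over the diagonal lattice generator) directly tightens the union upper bound on the average error probability.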
Directory of Open Access Journals (Sweden)
Crespo PedroM
2011-01-01
Full Text Available This paper focuses on the data fusion scenario where nodes sense and transmit the data generated by a source to a common destination, which estimates the original information more accurately than in the case of a single sensor. This work joins the upsurge of research interest in this topic by addressing the setup where the sensed information is transmitted over a Gaussian Multiple-Access Channel (MAC). We use Low Density Generator Matrix (LDGM) codes in order to keep the correlation between the transmitted codewords, which leads to an improved received Signal-to-Noise Ratio (SNR) thanks to the constructive signal addition at the receiver front-end. At reception, we propose a joint decoder and estimator that exchanges soft information between the LDGM decoders and a data fusion stage. An error-correcting Bose, Ray-Chaudhuri, Hocquenghem (BCH) code is further applied to suppress the error floor derived from the ambiguity of the MAC channel when dealing with correlated sources. Simulation results are presented for several parameter values and diverse LDGM and BCH codes, based on which we conclude that the proposed scheme significantly outperforms (by up to 6.3 dB) the suboptimum limit assuming separation between Slepian-Wolf source coding and capacity-achieving channel coding.
Abediseid, Walid
2012-12-21
The exact average complexity analysis of the basic sphere decoder for general space-time codes applied to the multiple-input multiple-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, lattice space-time (LAST) coded MIMO channel. Specifically, we derive an upper bound on the tail distribution of the decoder's computational complexity. We show that when the computational complexity exceeds a certain limit, this upper bound becomes dominated by the outage probability achieved by LAST coding and sphere decoding schemes. We then calculate the minimum average computational complexity that is required by the decoder to achieve near-optimal performance in terms of the system parameters. Our results indicate that there exists a cut-off rate (multiplexing gain) for which the average complexity remains bounded. Copyright © 2012 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Taleyarkhan, R.; McFarlane, A.F.; Lahey, R.T. Jr.; Podowski, M.Z.
1988-01-01
The work described in this paper is focused on the development, verification and benchmarking of the NUFREQ-NPW code at Westinghouse, USA for best-estimate prediction of multi-channel core stability margins in US BWRs. Various models incorporated into NUFREQ-NPW are systematically compared against the Westinghouse channel stability analysis code MAZDA, for which the mathematical model was developed in an entirely different manner. The NUFREQ-NPW code is extensively benchmarked against experimental stability data with and without nuclear reactivity feedback. Detailed comparisons are next performed against nuclear-coupled core stability data. A physically based algorithm is developed to correct for the effect of flow development on subcooled boiling. Use of this algorithm (to be described in the full paper) captures the peak magnitude as well as the resonance frequency with good accuracy.
WSRC approach to validation of criticality safety computer codes
International Nuclear Information System (INIS)
Finch, D.R.; Mincey, J.F.
1991-01-01
Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (k_eff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be (1) repeatable, (2) demonstrated with defined confidence, and (3) valid over an identified range of neutronic conditions (area of applicability). The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope 236U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.
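The broad-validation workflow sketched above reduces, numerically, to computing the bias and scatter of calculated k_eff against benchmark experiments and folding both into a safety limit. The sketch below uses made-up benchmark numbers and a simple two-standard-deviation margin; it is not the WSRC statistical procedure, which involves tolerance-limit methods beyond this illustration.

```python
import statistics

# Hypothetical code results against critical (k_eff = 1) benchmark experiments
k_calc = [0.9981, 1.0032, 0.9978, 1.0015, 0.9969, 1.0008]
k_exp  = [1.0000] * 6
bias = [c - e for c, e in zip(k_calc, k_exp)]

mean_bias = statistics.mean(bias)   # systematic over/under-prediction
sd = statistics.stdev(bias)         # scatter of the correlation

# A simple (not statistically rigorous) upper safety limit: the subcritical
# margin must cover the mean bias plus ~2 standard deviations of its scatter.
usl = 1.0 + mean_bias - 2 * sd
print(round(mean_bias, 5), round(usl, 4))
```

The point of "broad" validation is that the experiment set behind `k_calc` spans the full range of reflection, concentration, and moderation conditions, so the resulting bias statistics remain defensible across all WSRC process conditions.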
DEFF Research Database (Denmark)
Barforooshan, Mohsen; Østergaard, Jan; Stavrou, Fotios
2017-01-01
This paper presents an upper bound on the minimum data rate required to achieve a prescribed closed-loop performance level in networked control systems (NCSs). The considered feedback loop includes a linear time-invariant (LTI) plant with single measurement output and single control input. Moreover......, in this NCS, a causal but otherwise unconstrained feedback system carries out zero-delay variable-rate coding, and control. Between the encoder and decoder, data is exchanged over a rate-limited noiseless digital channel with a known constant time delay. Here we propose a linear source-coding scheme...
Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.
2001-01-01
Quality-of-service (QoS) guarantees in real-time communication for multimedia applications are critically important. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. However, the existing frame-by-frame approach, which includes Moving Picture Experts Group (MPEG) coding, cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG coded stream into variable-rate packet sequences to be used for our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams which have their own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as a bound on the end-to-end jitter. We have shown that our JSCC scheme is better than two other popular techniques by simulation and real video experiments in a TCP/IP environment.
Simple approach to study biomolecule adsorption in polymeric microfluidic channels
International Nuclear Information System (INIS)
Gubala, Vladimir; Siegrist, Jonathan; Monaghan, Ruairi; O’Reilly, Brian; Gandhiraman, Ram Prasad; Daniels, Stephen; Williams, David E.; Ducrée, Jens
2013-01-01
Highlights: ► A simple tool to assess biomolecule adsorption onto the surfaces of microchannels. ► Development for dilution by surface-adsorption based depletion of protein samples. ► It can easily be done using a readily available apparatus like a spin-coater. ► The assessment tool is facile and quantitative. ► Straightforward comparison of different surface chemistries. - Abstract: Herein a simple analytical method is presented for the characterization of biomolecule adsorption on cyclo olefin polymer (COP, trade name: Zeonor®) substrates which are widely used in microfluidic lab-on-a-chip devices. These Zeonor® substrates do not possess native functional groups for specific reactions with biomolecules. Therefore, depending on the application, such substrates must be functionalized by surface chemistry methods to either enhance or suppress biomolecular adsorption. This work demonstrates a microfluidic method for evaluating the adsorption of antibodies and oligonucleotides onto surfaces. The method uses centrifugal microfluidic flow-through chips and can easily be implemented using common equipment such as a spin coater. The working principle is very simple. The user adds 40 µL of the solution containing the sample to the starting side of a microfluidic channel, where it is moved through by centrifugal force. Some molecules are adsorbed in the channel. The sample is then collected at the other end in a small reservoir and the biomolecule concentration is measured. As a pilot application, we characterized the adsorption of goat anti-human IgG and a 20-mer DNA on Zeonor®, and on three types of functionalized Zeonor: a 3-aminopropyltriethoxysilane (APTES) modified surface with mainly positive charge, a negatively charged surface with immobilized bovine serum albumin (BSA), and a neutral, hydrogel-like film with polyethylene glycol (PEG) characteristics. This simple analytical approach adds to the fundamental understanding of the interaction forces in real
Simple approach to study biomolecule adsorption in polymeric microfluidic channels
Energy Technology Data Exchange (ETDEWEB)
Gubala, Vladimir, E-mail: V.Gubala@kent.ac.uk [Biomedical Diagnostics Institute (BDI), National Centre for Sensor Research (NCSR), Dublin City University, Dublin 9 (Ireland); Medway School of Pharmacy, University of Kent, Central Avenue, Anson 120, Chatham Maritime, Kent ME4 4TB (United Kingdom); Siegrist, Jonathan; Monaghan, Ruairi; O' Reilly, Brian; Gandhiraman, Ram Prasad [Biomedical Diagnostics Institute (BDI), National Centre for Sensor Research (NCSR), Dublin City University, Dublin 9 (Ireland); Daniels, Stephen [Biomedical Diagnostics Institute (BDI), National Centre for Sensor Research (NCSR), Dublin City University, Dublin 9 (Ireland); National Centre for Plasma Science and Technology (NCPST), Dublin City University, Dublin 9 (Ireland); Williams, David E. [Biomedical Diagnostics Institute (BDI), National Centre for Sensor Research (NCSR), Dublin City University, Dublin 9 (Ireland); MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Chemical Sciences, University of Auckland, Auckland 1142 (New Zealand); Ducree, Jens [Biomedical Diagnostics Institute (BDI), National Centre for Sensor Research (NCSR), Dublin City University, Dublin 9 (Ireland)
2013-01-14
neutral, hydrogel-like film with polyethylene glycol (PEG) characteristics. This simple analytical approach adds to the fundamental understanding of the interaction forces in real, microfluidic systems. This method provides a straightforward and rapid way to screen surface compositions and chemistry, and relate these to their effects on the sensitivity and resistance to non-specific binding of bioassays using them. In an additional set of experiments, the surface area of the channels in this universal microfluidic chip was increased by precision milling of microscale trenches. This modified surface was then coated with APTES and tested for its potential to serve as a unique protein dilution feature.
On locality of Generalized Reed-Muller codes over the broadcast erasure channel
Alloum, Amira; Lin, Sian Jheng; Al-Naffouri, Tareq Y.
2016-01-01
, and more specifically at the application layer where Rateless, LDPC, Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application layer packet
An analytical model for perpetual network codes in packet erasure channels
DEFF Research Database (Denmark)
Pahlevani, Peyman; Crisostomo, Sergio; Roetter, Daniel Enrique Lucani
2016-01-01
is highly dependent on a parameter called the width (ω), which represents the number of consecutive non-zero coding coefficients present in each coded packet after a pivot element. We provide a mathematical analysis based on the width of the coding vector for the number of transmitted packets and validate...
Optimal coding-decoding for systems controlled via a communication channel
Yi-wei, Feng; Guo, Ge
2013-12-01
In this article, we study the problem of controlling plants over a signal-to-noise ratio (SNR) constrained communication channel. Differently from previous research, this article emphasises the importance of the actual channel model and coder/decoder in the study of network performance. Our major objectives include coder/decoder design for an additive white Gaussian noise (AWGN) channel with both the standard network configuration and the Youla-parameter network architecture. We find that the optimal coder and decoder can be realised for different network configurations. The results are useful in determining the minimum channel capacity needed in order to stabilise plants over communication channels. The coder/decoder obtained can be used to analyse the effect of uncertainty on the channel capacity. An illustrative example is provided to show the effectiveness of the results.
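The minimum-capacity question raised above has a classical data-rate-theorem flavor (background to this line of work, not a result derived in the paper itself): to stabilize a discrete-time LTI plant over a rate-limited channel, the rate must exceed the sum of log2|lambda| over the unstable plant eigenvalues. A small numeric check:

```python
import numpy as np

# Illustrative plant: one unstable mode (2.0) and one stable mode (0.5)
A = np.array([[2.0, 1.0],
              [0.0, 0.5]])
eigs = np.linalg.eigvals(A)

# Data-rate-theorem lower bound: sum of log2 |lambda| over unstable eigenvalues
min_rate = sum(np.log2(abs(l)) for l in eigs if abs(l) > 1)
print(min_rate)   # ≈ 1.0 bit per sample for this plant
```

Any coder/decoder pair, however clever, needs at least this many bits per sample on average; SNR-constrained designs like the one above translate the same obstruction into a minimum channel capacity.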
Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang
2017-09-06
The synchronization control problem is investigated for a class of discrete-time dynamical networks with packet dropouts via a coding-decoding-based approach. The data is transmitted through digital communication channels and only the sequence of finite coded signals is sent to the controller. A series of mutually independent Bernoulli distributed random variables is utilized to model the packet dropout phenomenon occurring in the transmissions of coded signals. The purpose of the addressed synchronization control problem is to design a suitable coding-decoding procedure for each node, based on which an efficient decoder-based control protocol is developed to guarantee that the closed-loop network achieves the desired synchronization performance. By applying a modified uniform quantization approach and the Kronecker product technique, criteria for ensuring the detectability of the dynamical network are established by means of the size of the coding alphabet, the coding period and the probability information of packet dropouts. Subsequently, by resorting to the input-to-state stability theory, the desired controller parameter is obtained in terms of the solutions to a certain set of inequality constraints which can be solved effectively via available software packages. Finally, two simulation examples are provided to demonstrate the effectiveness of the obtained results.
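The transmission model in the abstract above can be sketched minimally: a uniform quantizer maps a measurement to a finite coding alphabet, the coded symbol crosses the channel subject to i.i.d. Bernoulli packet dropouts, and the decoder holds its last value on a drop. The quantizer range, alphabet size, dropout probability, and signal below are illustrative assumptions, not the paper's parameters or its controller design.

```python
import numpy as np

def encode(x, span=4.0, levels=16):
    """Uniform quantizer over [-span, span] onto a finite coding alphabet."""
    step = 2 * span / levels
    return int(np.clip(np.floor((x + span) / step), 0, levels - 1))

def decode(idx, span=4.0, levels=16):
    """Reconstruct at the midpoint of the quantization cell."""
    step = 2 * span / levels
    return -span + (idx + 0.5) * step

rng = np.random.default_rng(1)
p_drop = 0.2                        # Bernoulli packet-dropout probability
signal = np.sin(np.linspace(0, 3, 20))

x_hat, received = 0.0, []
for x in signal:
    if rng.random() > p_drop:       # packet got through the digital channel
        x_hat = decode(encode(x))
    received.append(x_hat)          # on a drop, the decoder holds its value

err = max(abs(r - x) for r, x in zip(received, signal))
print(err)
```

In the paper, the alphabet size, coding period, and dropout probability jointly determine whether the network state stays detectable; this toy loop only shows how quantization error and hold-on-drop error combine at the decoder.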
International Nuclear Information System (INIS)
Gomes, Renato G.; Rebello, Wilson F.; Vellozo, Sergio O.; Moreira Junior, Luis; Vital, Helio C.; Rusin, Tiago; Silva, Ademir X.
2013-01-01
In order to evaluate new lines of research in the area of irradiation of materials external to the research irradiator of the Army Technology Center (CTEx), it is necessary to study safety parameters and the magnitude of the dose rates from its leakage channels. The objective was to calculate, with the MCNPX code, dose rates (Gy/min) in the interior and exterior of the four leakage channels of the gamma irradiator. The channels were designed to leak radiation onto materials properly disposed in the area outside the irradiator, for volumes larger than that of the irradiation chambers (50 liters). This study assesses the magnitude of the dose rates within the channels and calculates the opening angle of the beam outside each channel, in order to analyze its spread and to evaluate safe conditions for the operators (radiological protection). The computer simulation was performed by distributing virtual ferrous sulfate (Fricke) dosimeters along the longitudinal axes of the vertical (anterior and posterior) and horizontal (top and bottom) leakage channels. The results showed a collimation of the beams irradiated through each of the channels to the outside, with values of the order of tenths of Gy/min, compared to the maximum value in the irradiator chamber (33 Gy/min). The external beam from the two vertical channels showed a truncated-pyramid-shaped, non-collimated (scattered) distribution, with an opening angle of 83° in the longitudinal direction and 88° in the transverse direction. Thus, the study allowed the evaluation of materials for irradiation outside the irradiator in terms of the magnitude of the dose rates and the positioning of materials, and showed the care needed in mounting shielding for radiation protection of the operators, avoiding exposure to ionizing radiation. (author)
On locality of Generalized Reed-Muller codes over the broadcast erasure channel
Alloum, Amira
2016-07-28
One-to-many communications are expected to be among the killer applications for the currently discussed 5G standard. The usage of coding mechanisms impacts broadcasting standard quality, as coding is involved at several levels of the stack, and more specifically at the application layer, where Rateless, LDPC, Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application-layer packet coding mechanisms based on previous schemes and designed for the foregoing LTE or other broadcasting standards, our purpose is to investigate the use of Generalized Reed-Muller codes and the value of their locality property in their progressive decoding for broadcast/multicast communication schemes with real-time video delivery. Our results are meant to bring insight into the use of locally decodable codes in broadcasting. © 2016 IEEE.
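The locality property invoked above can be shown in its simplest binary form: a codeword of the first-order Reed-Muller code RM(1, m) lists the values of an affine function f(x) = a·x + b over F_2^m, and an erased position f(x) can be recovered from a few other positions via the identity f(x) = f(x ⊕ r) ⊕ f(r) ⊕ f(0) for any r. This sketch illustrates only that binary special case, not the generalized (q-ary) codes or the broadcast protocol of the paper.

```python
import itertools, random

m = 4
a = [1, 0, 1, 1]        # coefficients of an affine Boolean function
b = 1
def f(x):               # f(x) = a.x + b over F_2
    return (sum(ai & xi for ai, xi in zip(a, x)) + b) % 2

points = list(itertools.product([0, 1], repeat=m))
codeword = {x: f(x) for x in points}   # RM(1, m) codeword, indexed by x

erased = (1, 0, 1, 0)                  # pretend this coordinate was erased
r = random.choice([p for p in points if any(p)])   # any nonzero offset works
x_xor_r = tuple(xi ^ ri for xi, ri in zip(erased, r))
recovered = codeword[x_xor_r] ^ codeword[r] ^ codeword[(0,) * m]
print(recovered == f(erased))          # True for every choice of r
```

Because each erased symbol is repairable from a constant number of queries, a receiver can progressively patch losses on a broadcast erasure channel without waiting for a full block decode, which is the appeal of locality for real-time video.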
Approaching the MIMO Capacity with a Low-Rate Feedback Channel in V-BLAST
Directory of Open Access Journals (Sweden)
Lozano Angel
2004-01-01
Full Text Available This paper presents an extension of the vertical Bell Laboratories Layered Space-Time (V-BLAST) architecture in which the closed-loop multiple-input multiple-output (MIMO) capacity can be approached with conventional scalar coding, optimum successive decoding (OSD), and independent rate assignments for each transmit antenna. This theoretical framework is used as a basis for the proposed algorithms whereby rate and power information for each transmit antenna is acquired via a low-rate feedback channel. We propose the successive quantization with power control (SQPC) and successive rate and power quantization (SRPQ) algorithms. In SQPC, rate quantization is performed with continuous power control. This performs better than simply quantizing the rates without power control. A more practical implementation of SQPC is SRPQ, in which both rate and power levels are quantized. The performance loss due to power quantization is insignificant when 4-5 bits are used per antenna. Both SQPC and SRPQ show an average total rate close to the closed-loop MIMO capacity if a capacity-approaching scalar code is used per antenna.
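The per-antenna rate assignment under optimum successive decoding can be sketched with a scalar-channel toy model: the antenna decoded first sees all not-yet-decoded antennas as noise, and the per-antenna rates telescope to the full sum capacity. The gains, powers, and 0.5-bit rate granularity below are illustrative assumptions, not values from the paper.

```python
import numpy as np

g = np.array([1.8, 1.1, 0.6])          # per-antenna channel gains |h_k|^2
p = np.array([1.0, 1.0, 1.0])          # per-antenna powers; noise power = 1

rates = []
for k in range(len(g)):
    interference = np.sum(g[k + 1:] * p[k + 1:])   # undecoded antennas
    sinr = g[k] * p[k] / (1.0 + interference)
    rates.append(np.log2(1.0 + sinr))              # rate fed back to antenna k

# Quantized rate feedback, e.g. 0.5-bit granularity over the feedback channel
quantized = [np.floor(r * 2) / 2 for r in rates]
print([round(r, 3) for r in rates], quantized)
```

A useful sanity check built into this model: by the chain rule of mutual information, the unquantized per-antenna rates sum exactly to log2(1 + sum of g_k p_k), the capacity of the composite channel, which is why scalar codes plus OSD can approach the closed-loop MIMO capacity.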
Efficient channel estimation in massive MIMO systems - a distributed approach
Al-Naffouri, Tareq Y.
2016-01-21
We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response of each channel observed by the antennas at the receiver (base station) in a coordinated manner, sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods compared to other methods.
Throughput and Delay Analysis of HARQ with Code Combining over Double Rayleigh Fading Channels
Chelli, Ali; Zedini, Emna; Alouini, Mohamed-Slim; Patzold, Matthias Uwe; Balasingham, Ilangko
2018-01-01
…vehicle-to-vehicle communication systems as well as amplify-and-forward relaying and keyhole channels. This work studies the performance of HARQ-CC over double Rayleigh channels from an information-theoretic perspective. Analytical approximations are derived for the…
International Nuclear Information System (INIS)
Chen, K.F.; Olson, C.A.
1983-01-01
One reliable method that can be used to verify the solution scheme of a computer code is to compare the code prediction to a simplified problem for which an analytic solution can be derived. An analytic solution for the axial pressure drop as a function of the flow was obtained for the simplified problem of homogeneous equilibrium two-phase flow in a vertical, heated channel with a cosine axial heat flux shape. This analytic solution was then used to verify the predictions of the CONDOR computer code, which is used to evaluate the thermal-hydraulic performance of boiling water reactors. The results show excellent agreement between the analytic solution and the CONDOR prediction.
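For intuition, a homogeneous-equilibrium pressure-drop integration of this kind can be sketched in a few lines. All property values, the constant friction factor and the chopped-sine ("cosine") flux shape below are illustrative placeholders, not CONDOR's models or the paper's actual data:

```python
import numpy as np

# Placeholder water-like properties and geometry (illustrative assumptions only)
L, D = 3.0, 0.012                 # heated length [m], hydraulic diameter [m]
A = np.pi * D**2 / 4              # flow area [m^2]
rho_f, rho_g = 740.0, 36.0        # saturated liquid / vapour density [kg/m^3]
h_f, h_fg = 1.26e6, 1.51e6        # sat. liquid enthalpy, latent heat [J/kg]
h_in = 1.20e6                     # inlet enthalpy [J/kg]
f, g = 0.02, 9.81                 # constant friction factor, gravity [m/s^2]
q_peak = 8.0e5                    # peak wall heat flux [W/m^2]

def pressure_drop(G, n=400):
    """Homogeneous-equilibrium dp [Pa] over a vertical heated channel,
    G = mass flux [kg/m^2/s], cosine (chopped-sine) axial heat flux shape."""
    z = np.linspace(0.0, L, n)
    dz = L / (n - 1)
    q = q_peak * np.sin(np.pi * z / L)          # axial flux shape
    Ph = np.pi * D                              # heated perimeter
    h = h_in + np.cumsum(q * Ph * dz) / (G * A)  # energy balance
    x = np.clip((h - h_f) / h_fg, 0.0, 1.0)      # equilibrium quality
    v = (1 - x) / rho_f + x / rho_g              # homogeneous specific volume
    # friction + gravity + acceleration terms of the momentum equation
    dpdz = f * G**2 * v / (2 * D) + g / v + G**2 * np.gradient(v, z)
    return float(np.sum(0.5 * (dpdz[1:] + dpdz[:-1])) * dz)
```

Evaluating `pressure_drop` over a range of `G` traces the pressure-drop-versus-flow curve that an analytic solution of the same simplified problem can be checked against.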
National Research Council Canada - National Science Library
Ong, Choon
1998-01-01
The performance analysis of a differential phase shift keyed (DPSK) communications system, operating in a Rayleigh fading environment, employing convolutional coding and diversity processing is presented...
Energy Technology Data Exchange (ETDEWEB)
Kim, Hyoung Tae; Park, Joo Hwan; Rhee, Bo Wook
2006-07-15
To justify the use of a commercial Computational Fluid Dynamics (CFD) code for CANDU fuel channel analysis, especially under radiation-heat-transfer-dominant conditions, the CFX-10 code is tested against three benchmark problems which were used for the validation of radiation heat transfer in the CANDU analysis code CATHENA. These three benchmark problems are representative of CANDU fuel channel configurations, from a simple geometry to the whole fuel channel geometry. Under the assumptions of a non-participating medium completely enclosed by diffuse, gray and opaque surfaces, the solutions of the benchmark problems are obtained using the concept of surface resistance to radiation, accounting for the view factors and the emissivities. The view factors are calculated by the program MATRIX version 1.0, avoiding the difficulty of hand calculation for the complex geometries. For the solutions of the benchmark problems, temperature or net-radiation-heat-flux boundary conditions are prescribed for each radiating surface to determine the radiation heat transfer rate or the surface temperature, respectively, using the network method. The Discrete Transfer Model (DTM) is used as the CFX-10 radiation model, and its calculation results are compared with the solutions of the benchmark problems. The CFX-10 results for the three benchmark problems are in close agreement with these solutions, so it is concluded that CFX-10 with a DTM radiation model can be applied to CANDU fuel channel analysis where surface radiation heat transfer is a dominant mode of the heat transfer.
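The surface-resistance network described here reduces, for a two-surface enclosure, to three resistances in series: two surface resistances and one space resistance. A minimal sketch of that generic textbook relation (not the CATHENA or CFX-10 implementation):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def radiation_exchange(T1, T2, A1, A2, F12, eps1, eps2):
    """Net radiative heat transfer Q12 [W] between two diffuse, gray, opaque
    surfaces forming an enclosure with a non-participating medium.

    Network method: surface resistance (1 - eps)/(eps * A) at each surface,
    in series with the space resistance 1/(A1 * F12) set by the view factor.
    """
    R = (1 - eps1) / (eps1 * A1) + 1 / (A1 * F12) + (1 - eps2) / (eps2 * A2)
    return SIGMA * (T1**4 - T2**4) / R
```

For black surfaces (`eps1 = eps2 = 1`) with `F12 = 1` the surface resistances vanish and the expression collapses to `SIGMA * A1 * (T1**4 - T2**4)`, a quick sanity check on any network-method solver.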
TRANTHAC-1: transient thermal-hydraulic analysis code for HTGR core of multi-channel model
International Nuclear Information System (INIS)
Sato, Sadao; Miyamoto, Yoshiaki
1980-08-01
The computer program TRANTHAC-1 predicts transient thermal-hydraulic behavior in the core of HTGRs with pin-in-block type fuel elements, taking the core flow distribution into consideration. The program treats a multi-channel model, each single channel representing a column composed of fuel elements. The fuel columns are grouped into flow control regions; each region is provided with an orifice assembly. Within a region, all channels are of the same shape except one channel. Core heat is removed by downward flow of the coolant through the channels. For any transient, given the time-dependent power, total core flow, inlet coolant temperature and coolant pressure, the thermal response of the core can be determined. In each channel, heat conduction in the radial and axial directions is represented, and the temperature distribution in each channel and its components is calculated. The model and usage of the program are described. The program is written in FORTRAN-IV for the FACOM 230-75 computer and comprises about 4,000 cards. The required core memory is about 75 kilowords. (author)
Channel estimation for space-time trellis coded-OFDM systems based on nonoverlapping pilot structure
CSIR Research Space (South Africa)
Sokoya, O
2008-09-01
Full Text Available … Through the analysis, two extreme conditions that produce the largest minimum determinant for a STTC-OFDM over multiple-tap channels were pointed out. The analysis shows that the performance of the STTC-OFDM under various channel conditions depends on: 1) the minimum tap delay of the channel and 2) the memory order of the STTC. New STTC-OFDM schemes were later designed in [2], taking into account some of the design criteria shown in [1]. The STTC-OFDM schemes are capable…
Hazards of Colour Coding in Visual Approach Slope Indicators,
1981-12-01
the glideslope. The central spot (the 'meatball') is displaced above or below the datum lights when the pilot views from above or below the…undershoot is increasing or decreasing, the step changes in intensity may also be evident as a form of flash coding. Colour coding of the 'meatball' in
International Nuclear Information System (INIS)
Delagrange, H.
1977-01-01
This report is the user manual of the GR0GI-F code, a modified version of the GR0GI-2 code. It calculates the cross sections for heavy-ion-induced fission. Fission probabilities are calculated via the Bohr-Wheeler formalism.
Blind cooperative diversity using distributed space-time coding in block fading channels
Tourki, Kamel; Alouini, Mohamed-Slim; Deneire, Luc
2010-01-01
Mobile users with single antennas can still take advantage of spatial diversity through cooperative space-time encoded transmission. In this paper, we consider a scheme in which a relay chooses to cooperate only if its source-relay channel
Low-Complexity Iterative Receiver for Space-Time Coded Signals over Frequency Selective Channels
Directory of Open Access Journals (Sweden)
Mohamed Siala
2002-05-01
Full Text Available We propose a low-complexity turbo-detector scheme for frequency-selective multiple-input multiple-output channels. The detection part of the receiver is based on a List-type MAP equalizer, a state-reduction version of the MAP algorithm using the per-survivor technique. This alternative achieves a good tradeoff between performance and complexity, provided a small part of the channel response is neglected. To preserve the good performance of this equalizer, we propose to use a whitened matched filter (WMF), which leads to a white-noise "minimum phase" channel model. Simulation results show that the use of the WMF yields significant improvement, particularly over severe channels. Thanks to the iterative turbo processing (detection and decoding are iterated several times), the performance loss due to the use of the suboptimum List-type equalizer is recovered.
Directory of Open Access Journals (Sweden)
Omar M. Zakaria
2016-01-01
Full Text Available Multiradio wireless mesh networks are a promising architecture that improves network capacity by exploiting multiple radio channels concurrently. Channel assignment and routing are underlying challenges in multiradio architectures, since both determine the traffic distribution over links and channels. The interdependency between channel assignment and routing motivates joint solutions for efficient configurations. This paper presents an in-depth review of joint approaches to channel assignment and routing in multiradio wireless mesh networks. First, the key design issues, modeling, and approaches are identified and discussed. Second, existing algorithms for joint channel assignment and routing are presented and classified based on the channel assignment types. Furthermore, the set of reconfiguration algorithms that adapt to network traffic dynamics is also discussed. Finally, the paper presents some practical multiradio implementations and test-beds and points out future research directions.
International Nuclear Information System (INIS)
Bouzid, M.; Benkherouf, H.; Benzadi, K.
2011-01-01
In this paper, we propose a stochastic joint source-channel scheme developed for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is an LSF encoding scheme based on a reduced-complexity stochastic split vector quantizer optimized for noisy channels. For transmission over a noisy channel, we first show that our LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder designed with the split vector quantizer. We then applied the LSF-SSCOVQ-RC encoder (with weighted distance) to the robust encoding of the LSF parameters of the 2.4 kbit/s MELP speech coder operating over a noisy/noiseless channel. The simulation results show that the proposed LSF encoder, incorporated in the MELP, ensures better performance than the original MELP MSVQ of 25 bits/frame, especially when the transmission channel is highly disturbed. Indeed, we show that the LSF-SSCOVQ-RC yields significant improvement in LSF encoding performance by ensuring reliable transmission over noisy channels.
Hamdi, Mazda; Kenari, Masoumeh Nasiri
2013-06-01
We consider a time-hopping based multiple access scheme introduced in [1] for communication over dispersive infrared links, and evaluate its performance for correlator and matched filter receivers. In the investigated time-hopping code division multiple access (TH-CDMA) method, the transmitter benefits from a low-rate convolutional encoder. In this method, the bit interval is divided into Nc chips, and the output of the encoder, along with a PN sequence assigned to the user, determines the position of the chip in which the optical pulse is transmitted. We evaluate the multiple access performance of the system for the correlation receiver, considering background noise, which is modeled as white Gaussian noise due to its large intensity. For the correlation receiver, the results show that for a fixed processing gain, at high transmit power, where multiple access interference has the dominant effect, the performance improves with the coding gain. But at low transmit power, where increasing the coding gain shortens the chip time and, consequently, causes more corruption due to channel dispersion, there exists an optimum value for the coding gain. For the matched filter, however, the performance always improves with the coding gain. The results show that the matched filter receiver outperforms the correlation receiver in the considered cases. Our results also show that, for the same bandwidth and bit rate, the proposed system excels other multiple access techniques, like conventional CDMA and the time-hopping scheme.
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-01-01
Space-time coding (STC) is an important milestone in modern wireless communications. In this technique, multiple copies of the same signal are transmitted through different antennas (space) and different symbol periods (time) to improve the robustness of a wireless system by increasing its diversity gain. STCs are channel coding algorithms that can be readily implemented on a field programmable gate array (FPGA) device. This work provides figures for the amount of required FPGA hardware resources, the speed at which the algorithms can operate, and the power consumption requirements of a space-time block code (STBC) encoder. Seven encoder very high-speed integrated circuit hardware description language (VHDL) designs have been coded, synthesised and tested. Each design realises a complex orthogonal space-time block code with a different transmission matrix. All VHDL designs are parameterisable in terms of sample precision; precisions ranging from 4 bits to 32 bits have been synthesised. Alamouti's STBC encoder design [Alamouti, S.M. (1998), 'A Simple Transmit Diversity Technique for Wireless Communications', IEEE Journal on Selected Areas in Communications, 16(8):1451-1458.] proved to be the best trade-off, since it is on average 3.2 times smaller, 1.5 times faster and requires slightly less power than the next best trade-off in the comparison, a 3/4-rate full-diversity 3Tx-antenna STBC.
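Alamouti's G2 code referenced above maps each symbol pair onto two antennas over two symbol periods; the orthogonality of every 2x2 block is what makes simple symbol-by-symbol ML decoding possible. A brief Python sketch of the encoding rule (the paper's VHDL designs implement this in fixed-point hardware):

```python
import numpy as np

def alamouti_encode(symbols):
    """Map a symbol stream onto 2 transmit antennas with Alamouti's G2 STBC.

    Returns an array of shape (2 * npairs, 2): rows are symbol periods,
    columns are antennas.  Rate 1, full (2x) transmit diversity.
    """
    s = np.asarray(symbols, dtype=complex)
    out = []
    for s1, s2 in zip(s[0::2], s[1::2]):
        out.append([s1, s2])                      # period t:   antenna 1, antenna 2
        out.append([-np.conj(s2), np.conj(s1)])   # period t+1: conjugate pair
    return np.array(out)

# QPSK-style example: the Gram matrix of each 2x2 block is a scaled identity,
# i.e. the two antenna streams are orthogonal within every block.
block = alamouti_encode([1 + 1j, 1 - 1j])
gram = block.conj().T @ block
```

Because `gram` equals `(|s1|^2 + |s2|^2) * I`, the receiver can separate the two symbols with simple linear combining, which is the property the hardware comparison in the paper exploits.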
Directory of Open Access Journals (Sweden)
Craig A. Doupnik
2015-01-01
Full Text Available Ion channels represent a large family of membrane proteins, many of which are well-established targets in pharmacotherapy. The ‘druggability’ of heteromeric channels comprised of different subunits remains obscure, due largely to a lack of channel-specific probes necessary to delineate their therapeutic potential in vivo. Our initial studies reported here investigated the family of inwardly rectifying potassium (Kir) channels, given the availability of high-resolution crystal structures for the eukaryotic constitutively active Kir2.2 channel. We describe a ‘limited’ homology modeling approach that can yield chimeric Kir channels having an outer vestibule structure representing nearly any known vertebrate or invertebrate channel. These computationally derived channel structures were tested in silico for ‘docking’ to NMR structures of tertiapin (TPN), a 21-amino-acid peptide found in bee venom. TPN is a highly selective and potent blocker of the epithelial rat Kir1.1 channel, but does not block human or zebrafish Kir1.1 channel isoforms. Our Kir1.1 channel-TPN docking experiments recapitulated published in vitro findings for TPN-sensitive and TPN-insensitive channels. Additionally, in silico site-directed mutagenesis identified ‘hot spots’ within the channel outer vestibule that mediate energetically favorable docking scores and correlate with sites previously identified by in vitro thermodynamic mutant-cycle analysis. These ‘proof-of-principle’ results establish a framework for virtual screening of re-engineered peptide toxins for interactions with computationally derived Kir channels that currently lack channel-specific blockers. When coupled with electrophysiological validation, this virtual screening approach may accelerate the drug discovery process and can be readily applied to other ion channel families where high-resolution structures are available.
Channel Capacity Calculation at Large SNR and Small Dispersion within Path-Integral Approach
Reznichenko, A. V.; Terekhov, I. S.
2018-04-01
We consider the optical fiber channel modelled by the nonlinear Schrödinger equation with additive white Gaussian noise. Using the Feynman path-integral approach for the model with small dispersion, we find the first nonzero corrections to the conditional probability density function and the channel capacity estimates at large signal-to-noise ratio. We demonstrate that the correction to the channel capacity in the small dimensionless dispersion parameter is quadratic and positive, thereby increasing the previously calculated capacity of a nondispersive nonlinear optical fiber channel in the intermediate power region. For the small dispersion case we also find analytical expressions for simple correlators of the output signals of our noisy channel.
Directory of Open Access Journals (Sweden)
Stevan M. Berber
2014-06-01
Code Division Multiple Access (CDMA) is a technique which allows communication by multiple users in the same communication system. This is achieved by assigning each user a unique code sequence, which is used at the receiver side to recover the information dedicated to that user. These systems belong to the group of direct-sequence spread-spectrum communication systems. Traditionally, CDMA systems use binary orthogonal spreading codes. In this paper, a mathematical model and simulation of a CDMA system based on non-binary, specifically chaotic, spreading sequences are presented. By their nature, these sequences belong to random sequences with infinite periodicity, and they are therefore appropriate for applications in systems that require enhanced security against interception and secrecy in signal transmission. Numerous papers are dedicated to the development of CDMA systems in flat-fading channels. This paper presents the analysis of these systems for the case when frequency-selective fading is present in the channel. In addition, the paper investigates the possibility of using interleaving techniques to mitigate fading in a wideband channel with frequency-selective fading. Basic structure of a CDMA communication system and its operation: a CDMA system block schematic is presented and the function of all blocks is explained, and the notation used in the paper is introduced. Chaotic sequences are defined and explained in accordance with the method of their generation. A wideband channel with frequency-selective fading is defined by its impulse response function. Theoretical analysis of a CDMA system with flat fading in a narrowband channel: a narrowband channel and flat fading are defined, and a mathematical analysis of the system is conducted by presenting the signal expressions at vital points in the transmitter and receiver. The expression for the signal at the output of the sequence correlator is…
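The chaotic-spreading idea can be sketched compactly: generate each user's code from the logistic map, spread, sum, and despread by correlation. This is a synchronous, noise-free, flat-channel toy (the seeds, sequence length and logistic parameter are our illustrative choices, not the paper's):

```python
import numpy as np

def chaotic_code(x0, n, r=4.0):
    """Spreading sequence from the logistic map x <- r*x*(1-x),
    centred to zero mean and normalized to unit energy."""
    x, seq = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    seq -= seq.mean()
    return seq / np.sqrt(np.sum(seq**2))

def cdma_transmit(bits_per_user, codes):
    """Sum of the users' spread signals (synchronous, noise-free)."""
    return sum(np.outer(b, c) for b, c in zip(bits_per_user, codes)).ravel()

def cdma_receive(signal, code, nbits):
    """Correlate each chip block with the user's code and take the sign."""
    blocks = signal.reshape(nbits, -1)
    return np.sign(blocks @ code).astype(int)

n_chips = 127
codes = [chaotic_code(0.31, n_chips), chaotic_code(0.57, n_chips)]
bits = [np.array([1, -1, -1, 1, 1, -1, 1, 1]),
        np.array([-1, -1, 1, 1, -1, 1, 1, -1])]
rx = cdma_transmit(bits, codes)
decoded = [cdma_receive(rx, c, 8) for c in codes]
```

Because different chaotic orbits are nearly uncorrelated while each code correlates strongly with itself, every user's correlator recovers its own bits despite the superposition; an interleaver, as investigated in the paper, would additionally decorrelate fading across chips in a frequency-selective channel.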
Chelli, Ali
2013-08-01
In this paper, we consider a relay network consisting of a source, a relay, and a destination. The source transmits a message to the destination using hybrid automatic repeat request (HARQ). The relay overhears the transmitted messages over the different HARQ rounds and tries to decode the data packet. In case of successful decoding at the relay, both the relay and the source cooperate to transmit the message to the destination. The channel realizations are independent for different HARQ rounds. We assume that the transmitter has no channel state information (CSI). Under such conditions, power and rate adaptation are not possible. To overcome this problem, HARQ allows the implicit adaptation of the transmission rate to the channel conditions by the use of feedback. There are two major HARQ techniques, namely HARQ with incremental redundancy (IR) and HARQ with code combining (CC). We investigate the performance of HARQ-IR and HARQ-CC over a relay channel from an information theoretic perspective. Analytical expressions are derived for the information outage probability, the average number of transmissions, and the average transmission rate. We illustrate through our investigation the benefit of relaying. We also compare the performance of HARQ-IR and HARQ-CC and show that HARQ-IR outperforms HARQ-CC. © 2013 IEEE.
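The HARQ-IR versus HARQ-CC comparison hinges on how information accumulates across rounds: IR adds mutual information, while CC only adds SNR. A Monte Carlo sketch over a plain Rayleigh point-to-point link (the paper's relaying setup is omitted for brevity; parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
avg_snr, rate_bits, max_rounds, trials = 2.0, 2.0, 4, 50_000

# i.i.d. Rayleigh fading per HARQ round -> exponentially distributed SNR
g = rng.exponential(avg_snr, size=(trials, max_rounds))

# Accumulated mutual information after each round:
I_ir = np.cumsum(np.log2(1.0 + g), axis=1)    # IR: mutual information adds up
I_cc = np.log2(1.0 + np.cumsum(g, axis=1))    # CC: only the SNRs add up

# Outage after the final round: accumulated information below the code rate
outage_ir = (I_ir[:, -1] < rate_bits).mean()
outage_cc = (I_cc[:, -1] < rate_bits).mean()
```

Since `log2(1 + sum g) <= sum log2(1 + g)` for nonnegative gains, IR dominates CC trial by trial, so its outage probability can never be worse, matching the paper's conclusion that HARQ-IR outperforms HARQ-CC.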
An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes
Energy Technology Data Exchange (ETDEWEB)
Lewis, Allison, E-mail: lewis.allison10@gmail.com [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Smith, Ralph [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Williams, Brian [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Figueroa, Victor [Sandia National Laboratories, Albuquerque, NM 87185 (United States)
2016-11-01
For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
3D measurement using combined Gray code and dual-frequency phase-shifting approach
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin
2018-04-01
The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
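The Gray code underlying such fringe patterns guarantees that neighbouring codewords differ in a single bit, so a one-pixel error at a stripe boundary costs at most one code level. The standard conversions, for reference:

```python
def bin_to_gray(n: int) -> int:
    """Reflected binary Gray code of n."""
    return n ^ (n >> 1)

def gray_to_bin(g: int) -> int:
    """Inverse mapping: fold the running XOR back down through the bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent codewords differ in exactly one bit, which is what keeps the
# decoded stripe index robust at projected-pattern boundaries.
codes = [bin_to_gray(i) for i in range(16)]
```

In a structured-light system each projected pattern contributes one bit plane of `bin_to_gray(stripe_index)`; decoding applies `gray_to_bin` per pixel before the phase-shifted fringes refine the result, which is where the period jump errors addressed by the paper can arise.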
Polithematic Children’s Channel of television: an approach to a definition
Directory of Open Access Journals (Sweden)
Irene MELGAREJO MORENO
2014-10-01
Full Text Available Digitalization has created a new way to understand television channels. Several authors, such as Cebrián (2004), Alcolea (2003) and Bustamante (1999), have studied this theme; however, the approaches that have been made to thematic children's channels are quite superficial. This article carries out a revision of the existing theories about television, childhood and thematic channels, provides a new terminology, and formulates an approach to the definition of the Polithematic Children's Channel, which reflects the television reality of the twenty-first century.
The DVB Channel Coding Application Using the DSP Development Board MDS TM-13 IREF
Directory of Open Access Journals (Sweden)
M. Slanina
2004-12-01
Full Text Available The paper deals with the implementation of channel coding according to the DVB standard on the DSP development board MDS TM-13 IREF and a PC. The board is based on the Philips Nexperia media processor and integrates hardware video ADC and DAC. The program library features used for MPEG-based video compression are outlined, and then the algorithms of channel decoding (FEC protection against errors) are presented, including flowchart diagrams. The paper presents a partial hardware implementation of a simulation system that covers selected phenomena of DVB baseband processing and is used for real-time interactive demonstration of the influence of error protection on transmitted digital video in laboratory and education.
WYSIWIB: A Declarative Approach to Finding API Protocols and Bugs in Linux Code
DEFF Research Database (Denmark)
Lawall, Julia; Brunel, Julien Pierre Manuel; Palix, Nicolas Jean-Michel
2009-01-01
the tools on specific kinds of bugs and to relate the results to patterns in the source code. We propose a declarative approach to bug finding in Linux OS code using a control-flow based program search engine. Our approach is WYSIWIB (What You See Is Where It Bugs), since the programmer expresses...
Azadegan, B.
2013-03-01
The presented Mathematica code is an efficient tool for simulation of planar channeling radiation spectra of relativistic electrons channeled along major crystallographic planes of a diamond-structure single crystal. The program is based on the quantum theory of channeling radiation, which has been successfully applied to study planar channeling at electron energies between 10 and 100 MeV. Continuum potentials for different planes of diamond, silicon and germanium single crystals are calculated using the Doyle-Turner approximation to the atomic scattering factor and taking thermal vibrations of the crystal atoms into account. Numerical methods are applied to solve the one-dimensional Schrödinger equation. The code is designed to calculate the electron wave functions, transverse electron states in the planar continuum potential, transition energies, line widths of channeling radiation and depth dependencies of the population of quantum states. Finally the spectral distribution of spontaneously emitted channeling radiation is obtained. The simulation of radiation spectra considerably facilitates the interpretation of experimental data. Catalog identifier: AEOH_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 446 No. of bytes in distributed program, including test data, etc.: 209805 Distribution format: tar.gz Programming language: Mathematica. Computer: Platforms on which Mathematica is available. Operating system: Operating systems on which Mathematica is available. RAM: 1 MB Classification: 7.10. Nature of problem: Planar channeling radiation is emitted by relativistic charged particles while traversing a single crystal in a direction parallel to a crystallographic plane. Channeling is modeled as the motion…
Wu, Menglong; Han, Dahai; Zhang, Xiang; Zhang, Feng; Zhang, Min; Yue, Guangxin
2014-03-10
We have implemented a modified Low-Density Parity-Check (LDPC) codec algorithm in an ultraviolet (UV) communication system. Simulations are conducted with measured parameters to evaluate the LDPC-based UV system performance. Moreover, LDPC (960, 480) and RS (18, 10) codes are implemented and tested via a non-line-of-sight (NLOS) UV test bed. The experimental results are in agreement with the simulation and suggest that, for the given power and a 10^-3 bit error rate (BER), in comparison with an uncoded system, the average communication distance increases by 32% with the RS code and by 78% with the LDPC code.
Imagery and Verbal Coding Approaches in Chinese Vocabulary Instruction
Shen, Helen H.
2010-01-01
This study consists of two instructional experiments. Within the framework of dual coding theory, the study compares the learning effects of two instructional encoding methods used in Chinese vocabulary instruction among students learning beginning Chinese as a foreign language. One method uses verbal encoding only, and the other method uses…
Iterative channel decoding of FEC-based multiple-description codes.
Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B
2012-03-01
Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
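The erasure-recovery mechanism the iterative decoder exploits can be illustrated with the simplest MDS code: a single XOR parity across descriptions. RS erasure codes generalize this to many parity symbols; the toy below is ours, not the paper's unequal-error-protection scheme:

```python
def encode_descriptions(descriptions):
    """Append one XOR-parity description to a list of equal-length byte strings
    (the simplest MDS erasure code; RS codes generalize to multiple parities)."""
    parity = bytes(len(descriptions[0]))
    for d in descriptions:
        parity = bytes(a ^ b for a, b in zip(parity, d))
    return descriptions + [parity]

def recover(received):
    """Rebuild the single erased description (marked None) by XOR-ing the rest,
    since every byte position XORs to zero across all descriptions."""
    missing = received.index(None)
    out = bytes(len(next(d for d in received if d is not None)))
    for d in received:
        if d is not None:
            out = bytes(a ^ b for a, b in zip(out, d))
    return received[:missing] + [out] + received[missing + 1:]
```

With `k` data descriptions and `n - k` RS parities, any `k` received descriptions suffice; the iterative scheme in the paper additionally feeds the positions of correctly decoded codewords back to the Viterbi detector.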
Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping
Á. Piedrafita (Álvaro); J.M. Renes (Joseph)
2017-01-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve…
GSM Channel Equalization Algorithm - Modern DSP Coprocessor Approach
Directory of Open Access Journals (Sweden)
M. Drutarovsky
1999-12-01
Full Text Available The paper presents the basic equations of an efficient GSM Viterbi equalizer algorithm based on approximation of GMSK modulation by a linear superposition of amplitude-modulated pulses. This approximation allows the use of the Ungerboeck form of the channel equalizer with significantly reduced arithmetic complexity. The proposed algorithm can be effectively implemented on the Viterbi and Filter coprocessors of the new Motorola DSP56305 digital signal processor. A short overview of coprocessor features related to the proposed algorithm is included.
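A toy version of the Viterbi equalizer idea, for BPSK over a known two-tap ISI channel (the trellis state is simply the previous symbol; the GSM/GMSK specifics and the Ungerboeck metric of the paper are omitted):

```python
import numpy as np

def viterbi_equalize(r, h, first_prev=1):
    """ML sequence detection of BPSK symbols over a known 2-tap ISI channel.

    State = previous symbol; branch metric = (r_k - (h0*s_k + h1*s_prev))^2.
    """
    symbols = (-1, 1)
    cost = {s: (0.0 if s == first_prev else np.inf) for s in symbols}
    back = []
    for k in range(len(r)):
        new_cost, choice = {}, {}
        for s in symbols:                     # hypothesized current symbol
            best_prev, best = None, np.inf
            for p in symbols:                 # hypothesized previous symbol
                m = cost[p] + (r[k] - (h[0] * s + h[1] * p)) ** 2
                if m < best:
                    best_prev, best = p, m
            new_cost[s], choice[s] = best, best_prev
        cost = new_cost
        back.append(choice)
    # Trace back from the cheapest final state
    s = min(cost, key=cost.get)
    out = [s]
    for choice in reversed(back[1:]):
        s = choice[s]
        out.append(s)
    return out[::-1]

h = [1.0, 0.5]                      # known channel taps
a = [1, -1, -1, 1, 1, -1, 1]        # transmitted BPSK symbols
prev = 1                            # known symbol preceding the block (preamble)
r = [h[0] * a[k] + h[1] * (a[k - 1] if k else prev) for k in range(len(a))]
detected = viterbi_equalize(r, h, first_prev=prev)
```

In the noise-free case the transmitted path has zero metric and is recovered exactly; a real GSM equalizer runs the same add-compare-select recursion over a longer channel memory, which is what the DSP's Viterbi coprocessor accelerates.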
Evaluation of coded aperture radiation detectors using a Bayesian approach
Energy Technology Data Exchange (ETDEWEB)
Miller, Kyle, E-mail: mille856@andrew.cmu.edu [Auton Lab, The Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States); Huggins, Peter [Auton Lab, The Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States); Labov, Simon; Nelson, Karl [Lawrence Livermore National Laboratory, Livermore, CA (United States); Dubrawski, Artur [Auton Lab, The Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States)
2016-12-11
We investigate tradeoffs arising from the use of coded aperture gamma-ray spectrometry to detect and localize sources of harmful radiation in the presence of noisy background. Using an example application scenario of area monitoring and search, we empirically evaluate weakly supervised spectral, spatial, and hybrid spatio-spectral algorithms for scoring individual observations, and two alternative methods of fusing evidence obtained from multiple observations. Results of our experiments confirm the intuition that directional information provided by spectrometers masked with coded aperture enables gains in source localization accuracy, but at the expense of reduced probability of detection. Losses in detection performance can however be to a substantial extent reclaimed by using our new spatial and spatio-spectral scoring methods which rely on realistic assumptions regarding masking and its impact on measured photon distributions.
A Biologically Based Approach to the Mutation of Code
1999-09-01
instructions called a genome. This genome contains the master blueprint for all cellular structures and functions within the organism for the duration of...structures known as chromosomes, which are found in the nucleus of all non-somatic cells. Many prokaryotic organisms have single-stranded DNA. An...coding sequence of a gene, or by an aberrant cellular recombination process. One way to reduce the chances of a harmful mutation occurring is to
Młynarski, Wiktor
2015-01-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
The coupling algorithm between fuel pin and coolant channel in the European Accident Code EAC-2
International Nuclear Information System (INIS)
Goethem, G. van; Lassmann, K.
1989-01-01
In the field of fast breeder reactors the Commission of the European Communities (CEC) is conducting coordination and harmonisation activities as well as its own research at the CEC's Joint Research Centre (JRC). The development of the modular European Accident Code (EAC) is a typical example of concerted action between EC Member States performed under the leadership of the JRC. This computer code analyzes the initiation phase of low-probability whole-core accidents in LMFBRs with the aim of predicting the rapidity of sodium voiding, the mode of pin failure, the subsequent fuel redistribution and the associated energy release. This paper gives a short overview on the development of the EAC-2 code with emphasis on the coupling mechanism between the fuel behaviour module TRANSURANUS and the thermohydraulics modules which can be either CFEM or BLOW3A. These modules are also briefly described. In conclusion some numerical results of EAC-2 are given: they are recalculations of an unprotected LOF accident for the fictitious EUROPE fast breeder reactor which was earlier analysed in the frame of a comparative exercise performed in the early 80s and organised by the CEC. (orig.)
TRAN.1 - a code for transient analysis of temperature distribution in a nuclear fuel channel
International Nuclear Information System (INIS)
Bukhari, K.M.
1990-09-01
A computer program has been written in FORTRAN that solves the time-dependent energy conservation equations in a nuclear fuel channel. As output, the program provides the temperature distribution in the fuel, cladding and coolant as a function of space and time. The stability criteria have also been developed. A set of finite difference equations for the steady-state temperature distribution has also been incorporated in this program. A number of simplifications have been made in this version of the program. Thus, at present, TRAN.1 uses constant thermodynamic properties and a constant heat transfer coefficient at the fuel-cladding gap, neglects phase change and pressure loss in the coolant, and does not account for changes in properties with burnup. These effects are now in the process of being included in the program. The current version of the program should therefore be taken as a simplified model of a fuel channel, and this report should be considered a status report on this program. (orig./A.B.)
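The explicit finite-difference marching such a transient code performs, together with its stability criterion r = αΔt/Δx² ≤ 1/2, can be sketched on a 1-D slab with fixed end temperatures (material values, geometry and boundary temperatures below are assumed, not taken from the report).

```python
# Explicit finite-difference march of the 1-D heat equation with the
# r = alpha*dt/dx^2 <= 1/2 stability criterion enforced up front.

alpha = 1.0e-5               # thermal diffusivity [m^2/s] (assumed constant)
dx = 1.0e-3                  # node spacing [m]
dt = 0.2 * dx**2 / alpha     # time step chosen to respect r <= 1/2

r = alpha * dt / dx**2       # stability number of the explicit scheme
assert r <= 0.5, "explicit scheme unstable: refine dt"

T = [500.0] * 11             # initial temperature [K]
T[0], T[-1] = 400.0, 400.0   # boundary (coolant-side) temperatures

for _ in range(200):         # march in time
    Tn = T[:]
    for i in range(1, len(T) - 1):
        Tn[i] = T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
    T = Tn

# the interior relaxes toward the boundary value, never overshooting it
print(round(min(T), 1), round(max(T), 1))
```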
Dirac potentials in a coupled channel approach to inelastic scattering
International Nuclear Information System (INIS)
Mishra, V.K.; Clark, B.C.; Cooper, E.D.; Mercer, R.L.
1990-01-01
It has been shown that there exist transformations that can be used to change the Lorentz transformation character of potentials, which appear in the Dirac equation for elastic scattering. We consider the situation for inelastic scattering described by coupled channel Dirac equations. We examine a two-level problem where both the ground and excited states are assumed to have zero spin. Even in this simple case we have not found an appropriate transformation. However, if the excited state has zero excitation energy it is possible to find a transformation
A Noisy-Channel Approach to Question Answering
2003-01-01
question “When did Elvis Presley die?” To do this, we build a noisy channel model that makes explicit how answer sentence parse trees are mapped into...in Figure 1, the algorithm above generates the following training example: Q: When did Elvis Presley die ? SA: Presley died PP PP in A_DATE, and...engine as a potential candidate for finding the answer to the question “When did Elvis Presley die?” In this case, we don’t know what the answer is
A generic coding approach for the examination of meal patterns.
Woolhead, Clara; Gibney, Michael J; Walsh, Marianne C; Brennan, Lorraine; Gibney, Eileen R
2015-08-01
Meal pattern analysis can be complex because of the large variability in meal consumption. The use of aggregated, generic meal data may address some of these issues. The objective was to develop a meal coding system and use it to explore meal patterns. Dietary data were used from the National Adult Nutrition Survey (2008-2010), which collected 4-d food diary information from 1500 healthy adults. Self-recorded meal types were listed for each food item. Common food group combinations were identified to generate a number of generic meals for each meal type: breakfast, light meals, main meals, snacks, and beverages. Mean nutritional compositions of the generic meals were determined and substituted into the data set to produce a generic meal data set. Statistical comparisons were performed against the original National Adult Nutrition Survey data. Principal component analysis was carried out by using these generic meals to identify meal patterns. A total of 21,948 individual meals were reduced to 63 generic meals. Good agreement was seen for nutritional comparisons (original compared with generic data sets, mean ± SD), such as fat (75.7 ± 29.4 and 71.7 ± 12.9 g, respectively, P = 0.243) and protein (83.3 ± 26.9 and 80.1 ± 13.4 g, respectively, P = 0.525). Similarly, Bland-Altman plots demonstrated good agreement (<5% outside limits of agreement) for many nutrients, including protein, saturated fat, and polyunsaturated fat. Twelve meal types were identified from the principal component analysis, ranging in meal-type inclusion/exclusion, varying in energy-dense meals, and differing in the constituents of the meals. A novel meal coding system was developed; dietary intake data were recoded by using generic meal consumption data. Analysis revealed that the generic meal coding system may be appropriate when examining nutrient intakes in the population. Furthermore, such a coding system was shown to be suitable for use in determining meal-based dietary patterns.
A Pragmatic Approach to the Application of the Code of Ethics in Nursing Education.
Tinnon, Elizabeth; Masters, Kathleen; Butts, Janie
The code of ethics for nurses was written for nurses in all settings. However, the language focuses primarily on the nurse in context of the patient relationship, which may make it difficult for nurse educators to internalize the code to inform practice. The purpose of this article is to explore the code of ethics, establish that it can be used to guide nurse educators' practice, and provide a pragmatic approach to application of the provisions.
International Nuclear Information System (INIS)
Xia, Yan; Song, He-Shan
2007-01-01
We present a controlled quantum secure direct communication protocol that uses a 2-dimensional Greenberger-Horne-Zeilinger (GHZ) entangled state and a 3-dimensional Bell-basis state and employs the high-dimensional quantum superdense coding, local collective unitary operations and entanglement swapping. The proposed protocol is secure and of high source capacity. It can effectively protect the communication against a destroying-travel-qubit-type attack. With this protocol, the information transmission is greatly increased. This protocol can also be modified, so that it can be used in a multi-party control system
Directory of Open Access Journals (Sweden)
F. Genc
2014-09-01
Full Text Available The purpose of this paper is to compare the turbo-coded Orthogonal Frequency Division Multiplexing (OFDM) and turbo-coded Single Carrier Frequency Domain Equalization (SC-FDE) systems under the effects of Carrier Frequency Offset (CFO), Symbol Timing Offset (STO) and phase noise in the wide-band Vogler-Hoffmeyer HF channel model. In mobile communication systems multi-path propagation occurs; therefore channel estimation and equalization are additionally necessary. Furthermore, a non-ideal local oscillator is generally misaligned with the operating frequency at the receiver. This causes carrier frequency offset. Hence, in coded SC-FDE and coded OFDM systems, a very efficient, low-complexity frequency domain channel estimation and equalization is implemented in this paper. Also, Cyclic Prefix (CP)-based synchronization synchronizes the clock and corrects the carrier frequency offset. The simulations show that non-ideal turbo-coded OFDM has better performance with greater diversity than the non-ideal turbo-coded SC-FDE system in the HF channel.
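CP-based synchronization exploits the fact that the cyclic prefix repeats the tail of the OFDM symbol, so correlating the two copies one symbol apart reveals the carrier frequency offset. A minimal noiseless sketch (FFT size, CP length and the offset value are assumed for illustration):

```python
# Cyclic-prefix correlation CFO estimate: the phase of the correlation
# between CP samples and their copies N samples later equals 2*pi*eps.

import cmath, math, random

N, L = 64, 16                    # FFT size and CP length (assumed)
eps = 0.12                       # true CFO in subcarrier spacings

random.seed(1)
body = [cmath.exp(2j * math.pi * random.random()) for _ in range(N)]
sym = body[-L:] + body           # prepend cyclic prefix
rx = [s * cmath.exp(2j * math.pi * eps * n / N) for n, s in enumerate(sym)]

# correlate CP samples with their copies one symbol later
corr = sum(rx[n].conjugate() * rx[n + N] for n in range(L))
eps_hat = cmath.phase(corr) / (2 * math.pi)
print(round(eps_hat, 4))  # matches eps in the noiseless case
```

The estimate is unambiguous only for |eps| < 0.5, a standard limitation of this correlator.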
Arbitrariness is not enough: towards a functional approach to the genetic code.
Lacková, Ľudmila; Matlach, Vladimír; Faltýnek, Dan
2017-12-01
Arbitrariness in the genetic code is one of the main reasons for a linguistic approach to molecular biology: the genetic code is usually understood as an arbitrary relation between amino acids and nucleobases. However, from a semiotic point of view, arbitrariness should not be the only condition for the definition of a code; consequently, it is not completely correct to talk about a "code" in this case. Yet we suppose that there exists a code in the process of protein synthesis, but on a higher level than the nucleic base chains. Semiotically, a code should always be associated with a function, and we propose to define the genetic code not only relationally (on the basis of the relation between nucleobases and amino acids) but also in terms of function (the function of a protein as the meaning of the code). Even if the functional definition of meaning in the genetic code has been discussed in the field of biosemiotics, its further implications have not been considered. In fact, if the function of a protein represents the meaning of the genetic code (the sign's object), then it is crucial to reconsider the notion of its expression (the sign) as well. In our contribution, we will show that the actual model of the genetic code is not the only one possible and we will propose a more appropriate model from a semiotic point of view.
Are industry codes and standards a valid cost containment approach
International Nuclear Information System (INIS)
Rowley, C.W.; Simpson, G.T.; Young, R.K.
1990-01-01
The nuclear industry has historically concentrated on safety design features for many years, but recently has been shifting to the reliability of the operating systems and components. The Navy has already gone through this transition and has found that Reliability Centered Maintenance (RCM) is an invaluable tool to improve the reliability of components, systems, ships, and classes of ships. There is a close correlation between Navy ships and equipment and commercial nuclear power plants and equipment. The Navy has a central engineering and configuration management organization (Naval Sea Systems Command) for over 500 ships, whereas the over 100 commercial nuclear power plants and 52 nuclear utilities represent a fragmented owner/management structure. This paper suggests that the results of the application of RCM in the Navy can be duplicated to a large degree in the commercial nuclear power industry by the development and utilization of nuclear codes and standards.
Energy Technology Data Exchange (ETDEWEB)
Jaeger, Wadim; Manes, Jorge Perez; Imke, Uwe; Escalante, Javier Jimenez; Espinoza, Victor Sanchez, E-mail: victor.sanchez@kit.edu
2013-10-15
Highlights: • Simulation of BFBT turbine and pump transients at multiple scales. • CFD, sub-channel and system codes are used for the comparative study. • Heat transfer models are compared to identify differences between the code predictions. • All three scales predict results in good agreement with experiment. • Subcooled boiling models are identified as a field for future research. -- Abstract: The Institute for Neutron Physics and Reactor Technology (INR) at the Karlsruhe Institute of Technology (KIT) is involved in the validation and qualification of modern thermal-hydraulic simulation tools at various scales. In the present paper, the prediction capabilities of four codes from three different scales – NEPTUNE_CFD as a fine-mesh computational fluid dynamics code, SUBCHANFLOW and COBRA-TF as sub-channel codes, and TRACE as a system code – are assessed with respect to their two-phase flow modeling capabilities. The subject of the investigations is the well-known and widely used database provided within the NUPEC BFBT benchmark related to BWRs. Void fraction measurements simulating a turbine trip and a re-circulation pump trip are provided at several axial levels of the bundle. The prediction capabilities of the codes for transient conditions with various combinations of boundary conditions are validated by comparing the code predictions with the experimental data. In addition, the physical models of the different codes are described and compared to each other in order to explain the different results and to identify areas for further improvement.
CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical Systems
2018-04-19
AFRL-AFOSR-JP-TR-2018-0035. CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical Systems. Sandeep Shukla, Indian Institute of... Grant number: FA2386-16-1-4099. Abstract: Industrial Control Systems (ICS) used in manufacturing, power generators and other critical infrastructure monitoring and
Blind cooperative diversity using distributed space-time coding in block fading channels
Tourki, Kamel
2010-08-01
Mobile users with single antennas can still take advantage of spatial diversity through cooperative space-time encoded transmission. In this paper, we consider a scheme in which a relay chooses to cooperate only if its source-relay channel is of an acceptable quality, and we evaluate the usefulness of relaying when the source acts blindly and ignores the decisions of the relays whether they may cooperate or not. In our study, we consider regenerative relays in which the decisions to cooperate are based on a signal-to-noise ratio (SNR) threshold, and consider the impact of possibly erroneously detected and transmitted data at the relays. We derive the end-to-end bit-error rate (BER) expression and its approximation for binary phase-shift keying modulation and look at two power allocation strategies between the source and the relays in order to minimize the end-to-end BER at the destination for high SNR. Selected performance results show that computer simulation-based results coincide well with our analytical results. © 2010 IEEE.
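The relay's SNR-threshold cooperation rule, and the BPSK bit-error formula the end-to-end analysis builds on, can be sketched as follows (the threshold and SNR values are illustrative; the paper's end-to-end BER expression is not reproduced here):

```python
# SNR-threshold cooperation decision plus the BPSK error probability
# Q(sqrt(2*SNR)) = 0.5*erfc(sqrt(SNR)) used in such BER analyses.

import math

def bpsk_ber(snr_linear):
    """BPSK bit-error probability over an AWGN link at the given SNR."""
    return 0.5 * math.erfc(math.sqrt(snr_linear))

def relay_cooperates(snr_sr_db, threshold_db=5.0):
    """The relay forwards only if its source-relay SNR clears the
    threshold; the source stays blind to this decision."""
    return snr_sr_db >= threshold_db

snr_db = 8.0
gamma = 10 ** (snr_db / 10)          # dB -> linear
print(relay_cooperates(snr_db), bpsk_ber(gamma))
```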
An approach for coupled-code multiphysics core simulations from a common input
International Nuclear Information System (INIS)
Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; Pawlowski, Roger; Clarno, Kevin; Simunovic, Srdjan; Slattery, Stuart; Turner, John; Palmtag, Scott
2015-01-01
Highlights: • We describe an approach for coupled-code multiphysics reactor core simulations. • The approach can enable tight coupling of distinct physics codes with a common input. • Multi-code multiphysics coupling and parallel data transfer issues are explained. • The common input approach and how the information is processed are described. • Capabilities are demonstrated on an eigenvalue and power distribution calculation. - Abstract: This paper describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and set up the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak
Spectral Subtraction Approach for Interference Reduction of MIMO Channel Wireless Systems
Directory of Open Access Journals (Sweden)
Tomohiro Ono
2005-08-01
Full Text Available In this paper, a generalized spectral subtraction approach for reducing additive impulsive noise, narrowband signals, white Gaussian noise and DS-CDMA interferences in MIMO channel DS-CDMA wireless communication systems is investigated. Interference noise reduction or suppression is an essential problem in wireless mobile communication systems for improving the quality of communication. The spectral subtraction scheme is applied to the interference noise reduction problem for noisy MIMO channel systems. The interferences in space- and time-domain signals can effectively be suppressed by selecting threshold values, and the computational load with the FFT is not large. Further, the fading effects of the channel are compensated by spectral modification within the spectral subtraction process. In the simulations, the effectiveness of the proposed methods for the MIMO channel DS-CDMA is shown in comparison with the conventional MIMO channel DS-CDMA.
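The core spectral subtraction step can be sketched as follows: transform to the frequency domain, subtract an estimated noise magnitude from each bin (flooring at zero), and invert with the original phases. The test signal, noise level and oracle noise estimate below are invented for illustration; real systems estimate the noise spectrum from signal-free intervals.

```python
# Magnitude spectral subtraction on a synthetic noisy sinusoid.

import cmath, math, random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def spectral_subtract(x, noise_mag):
    X = dft(x)
    Y = []
    for k, Xk in enumerate(X):
        mag = max(abs(Xk) - noise_mag[k], 0.0)   # floor negative magnitudes
        Y.append(cmath.rect(mag, cmath.phase(Xk)))
    return idft(Y)

n = 32
clean = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
random.seed(0)
noise = [0.3 * (random.random() - 0.5) for _ in range(n)]
noisy = [c + w for c, w in zip(clean, noise)]
noise_mag = [abs(Nk) for Nk in dft(noise)]       # oracle noise spectrum

denoised = [y.real for y in spectral_subtract(noisy, noise_mag)]
err_before = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_after = sum((a - b) ** 2 for a, b in zip(denoised, clean))
print(err_after < err_before)
```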
Bioethical and deontological approaches of the new occupational therapy code of ethics in Brazil
Directory of Open Access Journals (Sweden)
Leandro Correa Figueiredo
2017-03-01
Full Text Available Introduction: Currently, conflicts found in the health field bring new discussions on ethical and bioethical issues also in the Occupational Therapy domain. As noted in previous studies, the codes of professional ethics are not sufficient to face the current challenges daily experienced during professional practice. Objective: The present study aimed to find documental evidence of deontological and bioethical approaches in the new Brazilian code for Occupational Therapists through content analysis, compared with the same analysis conducted in the preceding code. Method: Content analysis methods were applied to written documents to reveal deontological and bioethical approaches among textual fragments obtained from the new code of ethics. Results: The bioethical approaches found in the totality of the new code were increased in content and number (53.6%) proportionally compared with those found in the former code. It seems that this increase was a result of the number of fragments classified in the justice-related category (22.6%), one of the most evident differences observed. Considering the ratio between the total number of fragments classified as professional autonomy and client autonomy in the new code - although the number of professional-related fragments has remained higher in comparison with client-related fragments - a significant decrease in the percentages of this ratio was detected. Conclusion: In conclusion, comparison between the codes revealed a bioethical embedding accompanied by a more client-centered practice, which reflects the way professionals have always conducted Occupational Therapy practice.
Sentence comprehension in aphasia: A noisy channel approach
Directory of Open Access Journals (Sweden)
Michael Walsh Dickey
2014-04-01
Full Text Available Probabilistic accounts of language understanding assume that comprehension involves determining the probability of an intended message m given an input utterance u, P(m|u) (e.g., Gibson et al., 2013a; Levy et al., 2009). One challenge is that communication occurs within a noisy channel; i.e., the comprehender’s representation of u may have been distorted, e.g., by a typo or by impairment associated with aphasia. Bayes’ rule provides a model of how comprehenders can combine the prior probability of m, P(m), with the probability that m would have been distorted to u, P(u|m), to calculate the probability of m given u: P(m|u) ∝ P(m)P(u|m). This formalism can capture the observation that people with aphasia (PWA) rely more on semantics than syntax during comprehension (e.g., Caramazza & Zurif, 1976): given the high probability that their representation of the input is unreliable, they weigh message likelihood more heavily. Gibson et al. (2013a) showed that unimpaired adults are sensitive to P(m) and P(u|m): they more often chose interpretations that increased message plausibility or involved distortions requiring fewer changes, and/or deletions instead of insertions (see Figure 1a for examples). Gibson et al. (2013b) found PWA were also sensitive to both P(m) and P(u|m) in an act-out task, but relied more heavily than unimpaired controls on P(m). This shows group-level optimization towards the less noisy (semantic) channel in PWA. The current experiment (8 PWA; 7 age-matched controls) investigated noisy channel optimization at the level of individual PWA. It also included active/passive items with a weaker plausibility manipulation to test whether P(m) is higher for implausible than impossible strings. The task was forced-choice sentence-picture matching (Figure 1b). Experimental sentences crossed active versus passive (A-P) structures with plausibility (Set 1) or impossibility (Set 2), and prepositional-object versus double-object structures (PO-DO; Set 3) with
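The noisy-channel computation P(m|u) ∝ P(m)P(u|m) can be sketched with a toy model in which the distortion likelihood decays with word-level edit distance. The messages, priors and noise level below are invented for illustration, not the experimental items; a high noise level makes the plausible message win even for an implausible utterance, mirroring the semantic reliance described above.

```python
# Toy Bayesian noisy-channel comprehension: prior plausibility times an
# edit-distance distortion likelihood.

def edit_distance(a, b):
    # standard Levenshtein distance over word sequences
    dp = [[i + j if 0 in (i, j) else 0 for j in range(len(b) + 1)]
          for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return dp[len(a)][len(b)]

def posterior(utterance, candidates, noise=0.3):
    # P(m|u) proportional to P(m) * noise^d(u, m): one distortion per edit
    scores = {m: p * noise ** edit_distance(utterance.split(), m.split())
              for m, p in candidates.items()}
    z = sum(scores.values())
    return {m: s / z for m, s in scores.items()}

cands = {"the dog bit the man": 0.95,    # plausible message, high prior
         "the man bit the dog": 0.05}    # implausible message, low prior
post = posterior("the man bit the dog", cands)
best = max(post, key=post.get)
print(best, round(post[best], 3))
```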
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
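The significance question the report raises — whether a handful of observed decoding errors is consistent with a hypothesized error probability — can be sketched with an exact binomial tail test (the trial count, error count and p0 below are assumed, not the report's data):

```python
# Exact binomial tail: probability of seeing at most k errors in n trials
# if the true error probability were p. A very small tail probability
# argues that the true error probability is below the hypothesized p0.

from math import comb

def binom_tail_le(n, k, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

n, k = 100_000, 2          # simulated trials and observed decoding errors
p0 = 1e-4                  # hypothesized error probability

pval = binom_tail_le(n, k, p0)
print(round(pval, 4))      # small: the data argue the true rate is below p0
```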
FPGA-Based Channel Coding Architectures for 5G Wireless Using High-Level Synthesis
Directory of Open Access Journals (Sweden)
Swapnil Mhaske
2017-01-01
Full Text Available We propose strategies to achieve a high-throughput FPGA architecture for quasi-cyclic low-density parity-check codes based on circulant-1 identity matrix construction. By splitting the node processing operation in the min-sum approximation algorithm, we achieve pipelining in the layered decoding schedule without utilizing additional hardware resources. High-level synthesis compilation is used to design and develop the architecture on the FPGA hardware platform. To validate this architecture, an IEEE 802.11n compliant 608 Mb/s decoder is implemented on the Xilinx Kintex-7 FPGA using the LabVIEW FPGA Compiler in the LabVIEW Communication System Design Suite. Architecture scalability was leveraged to accomplish a 2.48 Gb/s decoder on a single Xilinx Kintex-7 FPGA. Further, we present rapidly prototyped experimentation of an IEEE 802.16 compliant hybrid automatic repeat request system based on the efficient decoder architecture developed. In spite of the mixed nature of data processing—digital signal processing and finite-state machines—LabVIEW FPGA Compiler significantly reduced time to explore the system parameter space and to optimize in terms of error performance and resource utilization. A 4x improvement in the system throughput, relative to a CPU-based implementation, was achieved to measure the error-rate performance of the system over large, realistic data sets using accelerated, in-hardware simulation.
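The min-sum approximation at the heart of the decoder replaces the check node's hyperbolic-tangent update with a sign product and a minimum magnitude, which is what makes the pipelined node processing cheap. A toy single-check sketch (the LLR values are invented; the 802.11n circulant structure and layered schedule are not modeled):

```python
# Min-sum check-node update: each outgoing message takes the sign product
# and minimum magnitude of the OTHER incoming variable-to-check LLRs.

def check_node_update(llrs):
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        out.append(sign * min(abs(v) for v in others))
    return out

msgs = [2.5, -1.0, 4.0, -0.5]      # incoming variable-to-check LLRs
print(check_node_update(msgs))
```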
Doppler-shift estimation of flat underwater channel using data-aided least-square approach
Directory of Open Access Journals (Sweden)
Weiqiang Pan
2015-03-01
Full Text Available In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, the theoretical received sequence is composed based on the training symbols. Next, the least square principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least square problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.
Doppler-shift estimation of flat underwater channel using data-aided least-square approach
Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing
2015-06-01
In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, the theoretical received sequence is composed based on the training symbols. Next, the least square principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least square problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.
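The outer/inner-loop least-squares idea in both records above can be sketched on a simplified rotation-only Doppler model, with a closed-form gain for each candidate Doppler coefficient and a grid search standing in for the iterative refinement (the model r_n = a·s_n·e^{j2πδn} and all signal values are assumptions for illustration):

```python
# Alternating least squares for complex gain a and Doppler coefficient
# delta on the simplified model r_n = a * s_n * exp(j*2*pi*delta*n).

import cmath, math

def ls_gain(s, r, delta):
    """Closed-form LS gain for a fixed Doppler coefficient delta."""
    ref = [sk * cmath.exp(2j * math.pi * delta * n) for n, sk in enumerate(s)]
    num = sum(rk * xk.conjugate() for rk, xk in zip(r, ref))
    den = sum(abs(xk) ** 2 for xk in ref)
    return num / den

def estimate(s, r, grid):
    """Outer loop over candidate deltas; pick the LS-error minimizer."""
    best = None
    for delta in grid:
        a = ls_gain(s, r, delta)
        err = sum(abs(rk - a * sk * cmath.exp(2j * math.pi * delta * n)) ** 2
                  for n, (rk, sk) in enumerate(zip(r, s)))
        if best is None or err < best[2]:
            best = (delta, a, err)
    return best[:2]

s = [1, -1, 1, 1, -1, 1, -1, -1]          # known training symbols
a_true, d_true = 0.8 + 0.3j, 0.01
r = [a_true * sk * cmath.exp(2j * math.pi * d_true * n)
     for n, sk in enumerate(s)]

grid = [k * 0.001 for k in range(-20, 21)]
d_hat, a_hat = estimate(s, r, grid)
print(round(d_hat, 3), round(abs(a_hat - a_true), 6))
```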
A novel approach to correct the coded aperture misalignment for fast neutron imaging
Energy Technology Data Exchange (ETDEWEB)
Zhang, F. N.; Hu, H. S., E-mail: huasi-hu@mail.xjtu.edu.cn; Wang, D. M.; Jia, J. [School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Zhang, T. K. [Laser Fusion Research Center, CAEP, Mianyang, 621900 Sichuan (China); Jia, Q. G. [Institute of Applied Physics and Computational Mathematics, Beijing 100094 (China)
2015-12-15
Aperture alignment is crucial for the diagnosis of neutron imaging because it has a significant impact on the coded imaging and the understanding of the neutron source. In our previous studies on the neutron imaging system with a coded aperture for a large field of view, a “residual watermark,” certain extra information that overlies the reconstructed image and has nothing to do with the source, is discovered if peak normalization is employed in genetic algorithms (GA) to reconstruct the source image. Some studies on the basic properties of the residual watermark indicate that it can characterize the coded aperture and can thus be used to determine the location of the coded aperture relative to the system axis. In this paper, we have further analyzed the essential conditions for the existence of the residual watermark and the requirements of the reconstruction algorithm for its emergence. A gamma coded imaging experiment has been performed to verify the existence of the residual watermark. Based on the residual watermark, a correction method for the aperture misalignment has been studied. A multiple linear regression model of the position of the coded aperture axis, the position of the residual watermark center, and the gray barycenter of the neutron source, with twenty training samples, has been set up. Using the regression model and verification samples, we have found the position of the coded aperture axis relative to the system axis with an accuracy of approximately 20 μm. In conclusion, a novel approach has been established to correct the coded aperture misalignment for fast neutron coded imaging.
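The regression-based correction can be illustrated with ordinary least squares on synthetic watermark-center/aperture-axis pairs (the paper fits a multiple linear regression with twenty training samples; a single predictor and made-up numbers suffice for a sketch):

```python
# Ordinary least squares fit y = b0 + b1*x predicting the aperture-axis
# offset from the residual-watermark center offset; data are synthetic.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    return my - b1 * mx, b1

# synthetic training pairs: watermark-center offset -> aperture-axis offset
xs = [0.0, 0.1, 0.2, 0.3, 0.4]           # watermark center [mm]
ys = [0.02, 0.12, 0.22, 0.32, 0.42]      # aperture axis [mm]

b0, b1 = fit_line(xs, ys)
predicted = b0 + b1 * 0.25                # correct a new measurement
print(round(b0, 3), round(b1, 3), round(predicted, 3))
```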
Coded communications with nonideal interleaving
Laufer, Shaul
1991-02-01
Burst-error channels, a type of block-interference channel, feature increasing capacity but decreasing cutoff rate as the memory length increases. Despite the large capacity, the performance of practical coding schemes degrades when the memory length is excessive. A short-coding error parameter (SCEP) was introduced, which expresses a bound on the average decoding-error probability for codes shorter than the block-interference length. The performance of a coded slow frequency-hopping communication channel is analyzed for worst-case partial-band jamming and nonideal interleaving, by deriving expressions for the capacity and cutoff rate. The capacity and cutoff rate, respectively, are shown to approach and depart from those of a memoryless channel corresponding to the transmission of a single code letter per hop. For multiaccess communications over a slot-synchronized collision channel without feedback, the channel was considered as a block-interference channel with memory length equal to the number of letters transmitted in each slot. The effects of asymmetrical background noise and a reduced collision error rate were studied, as aspects of real communications. The performance of specific convolutional and Reed-Solomon codes was examined for slow frequency-hopping systems with nonideal interleaving. An upper bound is presented for the performance of a Viterbi decoder for a convolutional code with nonideal interleaving, and a soft-decision diversity-combining technique is introduced.
Best estimate LB LOCA approach based on advanced thermal-hydraulic codes
International Nuclear Information System (INIS)
Sauvage, J.Y.; Gandrille, J.L.; Gaurrand, M.; Rochwerger, D.; Thibaudeau, J.; Viloteau, E.
2004-01-01
Improvements achieved in thermal-hydraulics with the development of Best Estimate computer codes have led a number of Safety Authorities to recommend realistic analyses instead of conservative calculations. The potential of a Best Estimate approach for the analysis of LOCAs prompted FRAMATOME to enter early into the development, with CEA and EDF, of the second-generation code CATHARE, and then of a LBLOCA BE methodology with BWNT following the Code Scaling, Applicability and Uncertainty (CSAU) procedure. CATHARE and TRAC are the basic tools for the LOCA studies which will be performed by FRAMATOME according to either a deterministic better estimate (dbe) methodology or a Statistical Best Estimate (SBE) methodology. (author)
A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding
Directory of Open Access Journals (Sweden)
Shailesh Kamble
2016-12-01
Full Text Available Fractal compression is a lossy compression technique in the field of gray/color image and video compression. It gives a high compression ratio and better image quality with fast decoding time, but improvement in encoding time remains a challenge. This review presents an analysis of the most significant existing approaches in the field of fractal-based gray/color image and video compression: different block-matching motion estimation approaches for finding the motion vectors in a frame based on inter-frame and intra-frame coding (i.e., individual frame coding), and automata-theory-based coding approaches to represent an image or a sequence of images. Though other review papers on fractal coding exist, this paper differs in many respects. One can develop new shape patterns for motion estimation and combine existing block-matching motion estimation with automata coding to explore the fractal compression technique, with specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.
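For readers new to motion estimation, a minimal full-search block-matching sketch may help; the frame data, block size, and search radius below are illustrative and not taken from any reviewed method:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def full_search(ref, cur, top, left, bsize=8, radius=4):
    """Exhaustive block matching: find the motion vector (dy, dx) that
    minimises SAD between a block of `cur` and candidate blocks in `ref`."""
    block = cur[top:top + bsize, left:left + bsize]
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block would fall outside the frame
            cost = sad(ref[y:y + bsize, x:x + bsize], block)
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv

# Toy frames: the current frame is the reference shifted by (2, 1).
ref = np.arange(64 * 64).reshape(64, 64) % 251
cur = np.roll(ref, shift=(-2, -1), axis=(0, 1))
mv = full_search(ref, cur, top=16, left=16)
```

Faster patterns (three-step, diamond search) reduce the candidate set; this exhaustive version is the baseline they approximate.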
High-performance computational fluid dynamics: a custom-code approach
International Nuclear Information System (INIS)
Fannon, James; Náraigh, Lennon Ó; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain
2016-01-01
We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier–Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFDs) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFDs, while also providing insight for those interested in more general aspects of high-performance computing. (paper)
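The physics the simplified code solves can be illustrated in a few lines. The sketch below is not TPLS code (which is Fortran 90 with MPI); it is a serial Python toy for the same steady pressure-driven channel flow, checked against the exact Poiseuille profile:

```python
import numpy as np

# Steady laminar flow between parallel plates driven by a constant
# pressure gradient: mu * u''(y) = dp/dx, with u(0) = u(H) = 0.
# All parameter values are illustrative assumptions.
H, n = 1.0, 101          # channel height, grid points
mu, dpdx = 1.0, -1.0     # viscosity, pressure gradient
y = np.linspace(0.0, H, n)
h = y[1] - y[0]

# Second-order finite-difference Laplacian with Dirichlet walls.
A = np.zeros((n - 2, n - 2))
np.fill_diagonal(A, -2.0)
np.fill_diagonal(A[1:], 1.0)     # sub-diagonal
np.fill_diagonal(A[:, 1:], 1.0)  # super-diagonal
u_inner = np.linalg.solve(A / h**2, np.full(n - 2, dpdx / mu))
u = np.concatenate([[0.0], u_inner, [0.0]])

# Exact Poiseuille profile for comparison.
u_exact = (-dpdx / (2 * mu)) * y * (H - y)
err = np.max(np.abs(u - u_exact))
```

Because the exact solution is quadratic, the second-order scheme reproduces it to round-off, a standard validation step before moving to the turbulent cases the paper describes.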
Directory of Open Access Journals (Sweden)
Wei-Ning Wu
2017-01-01
Full Text Available Many municipal governments adopted 311 decades ago and have advocated equal access in citizens' use of 311. However, the role of citizens in the development and usage of 311 remains limited. Channel choices have been discussed for various types of governmental information and communication technologies (ICTs), especially when an innovative technology has just been adopted. Much evidence supports the idea that 311 is a method of digital civic engagement that many municipal governments adopt to maintain citizen relationship management and the capacity for government service delivery. However, it is still unclear how citizens use it. This study applies the theory of channel expansion to examine how San Francisco residents use the 311 system, and how citizens' technology experiences affect their choice of the 311 digital contact channels rather than the 311 hotline. In addition, we discuss major issues in citizens' 311 contact choices, so that municipal governments may draw lessons from the San Francisco experience.
Energy Technology Data Exchange (ETDEWEB)
Ud-Din Khan, Salah [Chinese Academy of Sciences, Hefei (China). Inst. of Plasma Physics; King Saud Univ., Riyadh (Saudi Arabia). Sustainable Energy Technologies Center; Peng, Minjun [Harbin Engineering Univ. (China). College of Nuclear Science and Technology; Yuntao, Song; Ud-Din Khan, Shahab [Chinese Academy of Sciences, Hefei (China). Inst. of Plasma Physics; Haider, Sajjad [King Saud Univ., Riyadh (Saudi Arabia). Sustainable Energy Technologies Center
2017-02-15
The objective is to analyze the safety of small modular nuclear reactors of 220 MWe power. Reactivity-initiated accidents (RIA) were investigated by a neutron-kinetic/thermal-hydraulic (NK/TH) coupling approach and by the thermal-hydraulic code RELAP5. The results obtained by these approaches were compared for validation and simulation accuracy. In the NK/TH coupling technique, three codes (HELIOS, REMARK, THEATRe) were used. These codes calculate different parameters of the reactor core (fission power, reactivity, fuel temperature, and inlet/outlet temperatures). The data exchanges between the codes were assessed by running the codes simultaneously. The results obtained from the NK/TH coupling and the RELAP5 analyses complement each other, confirming the accuracy of the simulation.
International Nuclear Information System (INIS)
Azadegan, B.; Wagner, W.
2015-01-01
We present a Mathematica package for the simulation of spectral-angular distributions and energy spectra of planar channeling radiation of relativistic electrons and positrons channeled along major crystallographic planes of a diamond-structure or tungsten single crystal. The program is based on the classical theory of channeling radiation, which has been successfully applied to study planar channeling of light charged particles at energies above 100 MeV. Continuous potentials for different planes of diamond, Si, Ge and W single crystals are calculated using the Doyle-Turner approximation to the atomic scattering factor, taking thermal vibrations of the crystal atoms into account. Numerical methods are applied to solve the classical one-dimensional equation of motion. The code is designed to calculate the trajectories, velocities and accelerations of electrons (positrons) channeled by the planar continuous potential. In the framework of classical electrodynamics, these data allow realistic simulations of spectral-angular distributions and energy spectra of planar channeling radiation. Since the generated output is quantitative, the results of the calculations may be useful, e.g., for setup configuration and crystal alignment in channeling experiments, for studying the dependence of channeling radiation on the input parameters of particle beams with respect to the crystal orientation, and also for the simulation of positron production by means of pair creation, which is mandatory for the design of the efficient positron sources needed in high-energy and collider physics. Although the classical theory of channeling has been well established for a long time, no commonly available library program for the simulation of channeling radiation has existed up to now that is sufficiently simple and effective to employ, and therefore of benefit both for special investigations and for a quick overview of the basic features of this type of radiation.
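The package itself is written in Mathematica; as an illustration of its numerical core, the sketch below integrates the classical one-dimensional transverse equation of motion with a fourth-order Runge-Kutta step under a harmonic approximation to the planar potential (an assumption made here for brevity; the actual code uses Doyle-Turner potentials):

```python
import numpy as np

# Transverse motion of a channeled particle in a harmonic approximation
# to the continuous planar potential: gamma*m*x'' = -k*x. The well and
# all numbers here are illustrative, not Doyle-Turner values.
gamma_m = 1.0        # relativistic mass gamma*m (arbitrary units)
k = 4.0 * np.pi**2   # "spring constant" of the planar well

def rhs(state):
    x, v = state
    return np.array([v, -k * x / gamma_m])

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Integrate one oscillation period T = 2*pi*sqrt(gamma_m/k) = 1.0 here.
state = np.array([0.1, 0.0])  # initial transverse offset, zero angle
dt, steps = 1e-3, 1000
for _ in range(steps):
    state = rk4_step(state, dt)
```

The trajectory and its acceleration history are exactly the quantities from which the spectral-angular distribution is then computed in classical electrodynamics.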
Chelli, Ali; Alouini, Mohamed-Slim
2013-01-01
assume that the transmitter has no channel state information (CSI). Under such conditions, power and rate adaptation are not possible. To overcome this problem, HARQ allows the implicit adaptation of the transmission rate to the channel conditions
Energy Technology Data Exchange (ETDEWEB)
Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)
1966-09-01
This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flowrate in parallel channels, coupled or not by conduction across the plates, is computed for imposed conditions of pressure drop or flowrate, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety-rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors)
Geometrical Approach to the Grid System in the KOPEC Pilot Code
International Nuclear Information System (INIS)
Lee, E. J.; Park, C. E.; Lee, S. Y.
2008-01-01
KOPEC has been developing a pilot code to analyze two-phase flow. The earlier version of the pilot code adopts a geometry with a one-dimensional structured mesh system. As the pilot code is required to handle more complex geometries, a systematic geometrical approach to the grid system has been introduced. Grid systems can be classified into two types: structured and unstructured. The structured grid system is simple to apply but less flexible than the other. The unstructured grid system is more complicated than the structured grid system, but it is more flexible for modeling the geometry. Therefore, both types of grid systems are utilized to offer code users simplicity as well as flexibility.
International Nuclear Information System (INIS)
Bilanovic, Z.; McCracken, D.R.
1994-12-01
In order to assess irradiation-induced corrosion effects, coolant radiolysis and the degradation of the physical properties of reactor materials and components, it is necessary to determine the neutron, photon, and electron energy deposition profiles in the fuel channels of the reactor core. At present, several different computer codes must be used to do this. The most recent, advanced and versatile of these is the latest version of MCNP, which may be capable of replacing all the others. Different codes have different assumptions and different restrictions on the way they can model the core physics and geometry. This report presents the results of ANISN and MCNP models of neutron and photon energy deposition. The results validate the use of MCNP for simplified geometrical modelling of energy deposition by neutrons and photons in the complex geometry of the CANDU reactor fuel channel. Discrete ordinates codes such as ANISN were the benchmark codes used in previous work. The results of calculations using various models are presented, and they show very good agreement for fast-neutron energy deposition. In the case of photon energy deposition, however, some modifications to the modelling procedures had to be incorporated. Problems with the use of reflective boundaries were solved by either including the eight surrounding fuel channels in the model, or using a boundary source at the bounding surface of the problem. Once these modifications were incorporated, consistent results between the computer codes were achieved. Historically, simple annular representations of the core were used, because of the difficulty of doing detailed modelling with older codes. It is demonstrated that modelling by MCNP, using more accurate and more detailed geometry, gives significantly different and improved results. (author). 9 refs., 12 tabs., 20 figs
Photo-Ionization of Noble Gases: A Demonstration of Hybrid Coupled Channels Approach
Directory of Open Access Journals (Sweden)
Vinay Pramod Majety
2015-01-01
Full Text Available We present here an application of the recently developed hybrid coupled channels approach to study the photo-ionization of the noble gas atoms neon and argon. We first compute multi-photon ionization rates and cross-sections for these inert gas atoms with our approach and compare them with reliable data available from R-matrix Floquet theory. The good agreement between the coupled channels approach and R-matrix Floquet theory shows that our method treats multi-electron systems on par with the well-established R-matrix theory. We then apply the time-dependent surface flux (tSURFF) method with our approach to compute total and angle-resolved photo-electron spectra from argon with linearly and circularly polarized 12 nm wavelength laser fields, a typical wavelength available from Free Electron Lasers (FELs).
Energy Technology Data Exchange (ETDEWEB)
Bartzis, J G; Megaritou, A; Belessiotis, V
1987-09-01
THEAP-I is a computer code developed at NRCPS "DEMOCRITUS" with the aim of contributing to the safety analysis of open pool research reactors. THEAP-I is designed for three-dimensional, transient thermal/hydraulic analysis of a thermally interacting channel bundle totally immersed in water or air, such as the reactor core. In the present report the mathematical and physical models and the methods of solution are given, as well as the code description and the input data. A sample problem is also included, referring to the Greek Research Reactor analysis under a hypothetical severe loss-of-coolant accident.
Yang, Qi; Al Amin, Abdullah; Chen, Xi; Ma, Yiran; Chen, Simin; Shieh, William
2010-08-02
High-order modulation formats and advanced error correcting codes (ECC) are two promising techniques for improving the performance of ultrahigh-speed optical transport networks. In this paper, we present record receiver sensitivity for 107 Gb/s CO-OFDM transmission via constellation expansion to 16-QAM and rate-1/2 LDPC coding. We also show single-channel transmission of a 428-Gb/s CO-OFDM signal over 960 km of standard single-mode fiber (SSMF) without Raman amplification.
Zhao, Hongbo; Chen, Yuying; Feng, Wenquan; Zhuang, Chen
2018-05-25
Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference and the short time slot of each satellite, which brings difficulties to the acquisition stage. The inter-satellite link in both Global Positioning System (GPS) and BeiDou Navigation Satellite System (BDS) adopt the long code spread spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as extended replica folding acquisition search technique (XFAST) and direct average are largely restricted because of code Doppler and additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and dual-channel method have been proposed to achieve long code acquisition in low SNR and high dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased through finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than folding methods XFAST and DF-XFAST. Moreover, with the advantage of higher detection
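The folding-plus-circular-FFT idea underlying XFAST-type acquisition can be sketched compactly. The toy below implements single-channel replica folding only (no dual-channel verification, no code Doppler; the code length is chosen as a multiple of the block length so zero-padding is not needed), and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy long PN code and an incoming block starting at an unknown phase.
L, N = 4096, 512                     # long-code length, block length
code = rng.choice([-1.0, 1.0], size=L)
true_phase = 1234
incoming = np.roll(code, -true_phase)[:N]

# XFAST-style folding: stack the local code into L/N segments and sum,
# then correlate the folded replica with the block via circular FFT.
folded = code.reshape(L // N, N).sum(axis=0)
corr = np.fft.ifft(np.fft.fft(folded) * np.conj(np.fft.fft(incoming))).real

# The correlation peak reveals the code phase modulo the fold length N.
est = int(np.argmax(corr))
```

Folding trades correlation gain for search time: the matched segment contributes a peak of height N while the other segments add only noise-like cross terms, which is why additional SNR loss accompanies the speed-up.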
Rached, Nadhir B.
2016-01-06
The outage capacity (OC) is among the most important performance metrics of communication systems over fading channels. The evaluation of the OC, when equal gain combining (EGC) or maximum ratio combining (MRC) diversity techniques are employed, boils down to computing the cumulative distribution function (CDF) of the sum of channel envelopes (equivalently, amplitudes) for EGC, or of channel gains (equivalently, squared amplitudes) for MRC. Closed-form expressions of the CDF of the sum of many generalized fading variates are generally unknown and constitute open problems. We develop a unified hazard rate twisting Importance Sampling (IS) based approach to efficiently estimate the CDF of the sum of independent arbitrary variates. The proposed IS estimator is shown to achieve an asymptotic optimality criterion, which clearly guarantees its efficiency. Some selected simulation results are also shown to illustrate the substantial computational gain achieved by the proposed IS scheme over crude Monte Carlo simulations.
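For contrast with the proposed IS scheme, the crude Monte Carlo baseline for the target CDF, here the sum of Rayleigh envelopes relevant to EGC, fits in a few lines (the parameters are illustrative assumptions, and the hazard rate twisting itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

# Crude Monte Carlo estimate of P(sum of L Rayleigh envelopes <= x),
# the CDF that governs EGC outage. Parameters chosen for illustration.
L, sigma, x = 4, 1.0, 2.0
n = 200_000
envelopes = rng.rayleigh(scale=sigma, size=(n, L))
cdf_hat = float(np.mean(envelopes.sum(axis=1) <= x))

# Relative error of the estimator; it blows up as the event gets rarer,
# which is exactly what importance sampling is designed to fix.
rel_err = float(np.sqrt(cdf_hat * (1 - cdf_hat) / n) / max(cdf_hat, 1e-12))
```

Pushing x into the deep-outage tail makes `cdf_hat` collapse to zero hits for any affordable n, motivating the variance-reduction approach the paper develops.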
Rached, Nadhir B.
2015-06-14
The outage capacity (OC) is among the most important performance metrics of communication systems over fading channels. The evaluation of the OC, when Equal Gain Combining (EGC) or Maximum Ratio Combining (MRC) diversity techniques are employed, boils down to computing the Cumulative Distribution Function (CDF) of the sum of channel envelopes (equivalently, amplitudes) for EGC, or of channel gains (equivalently, squared amplitudes) for MRC. Closed-form expressions of the CDF of the sum of many generalized fading variates are generally unknown and constitute open problems. In this paper, we develop a unified hazard rate twisting Importance Sampling (IS) based approach to efficiently estimate the CDF of the sum of independent arbitrary variates. The proposed IS estimator is shown to achieve an asymptotic optimality criterion, which clearly guarantees its efficiency. Some selected simulation results are also shown to illustrate the substantial computational gain achieved by the proposed IS scheme over crude Monte Carlo simulations.
Drag reduction in a turbulent channel flow using a passivity-based approach
Heins, Peter; Jones, Bryn; Sharma, Atul
2013-11-01
A new active feedback control strategy for attenuating perturbation energy in a turbulent channel flow is presented. Using a passivity-based approach, a controller synthesis procedure has been devised which is capable of making the linear dynamics of a channel flow as close to passive as possible given the limitations on sensing and actuation. A controller that is capable of making the linearized flow passive is guaranteed to globally stabilize the true flow. The resulting controller greatly restricts the amount of turbulent energy that the nonlinearity can feed back into the flow. DNS testing of a controller using wall sensing of streamwise and spanwise shear stress and actuation via wall transpiration, acting upon channel flows with Reτ = 100 to 250, showed significant reductions in skin-friction drag.
Residential building codes, affordability, and health protection: a risk-tradeoff approach.
Hammitt, J K; Belsky, E S; Levy, J I; Graham, J D
1999-12-01
Residential building codes intended to promote health and safety may produce unintended countervailing risks by adding to the cost of construction. Higher construction costs increase the price of new homes and may increase health and safety risks through "income" and "stock" effects. The income effect arises because households that purchase a new home have less income remaining for spending on other goods that contribute to health and safety. The stock effect arises because suppression of new-home construction leads to slower replacement of less safe housing units. These countervailing risks are not presently considered in code debates. We demonstrate the feasibility of estimating the approximate magnitude of countervailing risks by combining the income effect with three relatively well understood and significant home-health risks. We estimate that a code change that increases the nationwide cost of constructing and maintaining homes by $150 (0.1% of the average cost to build a single-family home) would induce offsetting risks yielding between 2 and 60 premature fatalities or, including morbidity effects, between 20 and 800 lost quality-adjusted life years (both discounted at 3%) each year the code provision remains in effect. To provide a net health benefit, the code change would need to reduce risk by at least this amount. Future research should refine these estimates, incorporate quantitative uncertainty analysis, and apply a full risk-tradeoff approach to real-world case studies of proposed code changes.
Construction of Capacity Achieving Lattice Gaussian Codes
Alghamdi, Wael
2016-04-01
We propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3].
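A toy illustration of Construction A may help: a linear code C over Z_p lifts to the lattice Λ = {x ∈ Z^n : x mod p ∈ C}. The code, prime, and dimension below are assumed purely for illustration:

```python
import numpy as np

# Construction A: given a linear code C over Z_p of length n, the
# lattice is Lambda = {x in Z^n : x mod p in C}. Toy parameters.
p, n = 5, 2
gen = np.array([[1, 2]])  # generator of an assumed (2, 1) code over Z_5

# Enumerate the codewords of C (one generator row, so p codewords).
codewords = {tuple((k * gen[0]) % p) for k in range(p)}

def in_lattice(x):
    """Membership test: x is in Lambda iff x mod p is a codeword."""
    return tuple(np.mod(x, p)) in codewords

# Every vector in p*Z^n lies in Lambda (zero-codeword coset), and
# Lambda is closed under addition because C is linear.
a, b = np.array([1, 2]), np.array([2, 4])
assert in_lattice(np.array([5, 10]))
assert in_lattice(a) and in_lattice(b) and in_lattice(a + b)
```

The paper's averaging argument ranges over ensembles of such lattices, with the prime p and dimension n scaled with the block-length; this snippet only fixes one small instance to make the definition concrete.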
A Novel Secure Transmission Scheme in MIMO Two-Way Relay Channels with Physical Layer Approach
Directory of Open Access Journals (Sweden)
Qiao Liu
2017-01-01
Full Text Available Security has been considered one of the most pivotal aspects of the fifth-generation mobile network (5G) due to the increasing demand for security services as well as the growing occurrence of security threats. In this paper, instead of focusing on the security architecture in the upper layers, we investigate secure transmission for a basic channel model in a heterogeneous network, that is, two-way relay channels. By exploiting the properties of the transmission medium in the physical layer, we propose a novel secure scheme for the aforementioned channel model. With precoding design, the proposed scheme is able to achieve high transmission efficiency as well as security. Two different approaches are introduced: an information-theoretical approach and a physical layer encryption approach. We show that our scheme is secure under three different adversarial models: (1) an untrusted relay attack model, (2) a trusted relay with eavesdropper attack model, and (3) an untrusted relay with eavesdroppers attack model. We also derive the secrecy capacity of the two approaches under the three attacks. Finally, we conduct three simulations of our proposed scheme. The simulation results agree with the theoretical analysis, illustrating that our proposed scheme achieves better performance than existing schemes.
Islam, Muhammad F.; Islam, Mohammed N.
2012-04-01
The objective of this paper is to develop a novel approach for encryption and compression of biometric information utilizing orthogonal coding and steganography techniques. Multiple biometric signatures are encrypted individually using orthogonal codes and then multiplexed together to form a single image, which is then embedded in a cover image using the proposed steganography technique. The proposed technique employs three least significant bits for this purpose and a secret key is developed to choose one from among these bits to be replaced by the corresponding bit of the biometric image. The proposed technique offers secure transmission of multiple biometric signatures in an identification document which will be protected from unauthorized steganalysis attempt.
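The key-selected LSB embedding described above can be sketched as follows; the cover image, secret bit-plane, and key below are synthetic stand-ins, and the upstream orthogonal-coding multiplexing stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

# Embed one bit per pixel (e.g., from a multiplexed biometric image)
# into a cover image: a per-pixel secret key selects which of the
# three least significant bits is replaced. Simplified sketch only.
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
secret_bits = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)
key = rng.integers(0, 3, size=(8, 8), dtype=np.uint8)  # bit index 0..2

def embed(cover, bits, key):
    mask = np.uint8(1) << key                  # selected-bit mask
    return (cover & ~mask) | (bits.astype(np.uint8) << key)

def extract(stego, key):
    return (stego >> key) & np.uint8(1)        # read the keyed bit back

stego = embed(cover, secret_bits, key)
recovered = extract(stego, key)
```

Because only one of the three low bits changes per pixel, the per-pixel distortion is at most 4 gray levels, while an attacker without the key cannot tell which bit plane carries the payload.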
A Practical Approach to Improve Optical Channel Utilization Period for Hybrid FSO/RF Systems
Directory of Open Access Journals (Sweden)
Ahmet Akbulut
2014-01-01
Full Text Available In hybrid FSO/RF systems, a hard switching mechanism is mostly preferred when the FSO signal level falls below a predefined threshold. In this work, a computationally simple approach is proposed to increase the utilization of the FSO channel's bandwidth advantage. For the channel, clear-air conditions with atmospheric turbulence have been assumed. In this approach, the FSO bit rate is adaptively changed to achieve the desired BER performance. An IM/DD modulation, OOK in NRZ format, has been used to show the benefit of the proposed method. Furthermore, to be more realistic with respect to atmospheric turbulence variations within a day, some experimental observations have been followed up.
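The rate-adaptation idea can be sketched with a simple link model. The sketch below assumes BER = Q(sqrt(Eb/N0)) for the OOK link, a simplification chosen only for illustration, and bisects for the highest bit rate that still meets a BER target at the current received power:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def max_bit_rate(p_rx, n0, ber_target):
    """Highest rate Rb with Q(sqrt(p_rx / (Rb * n0))) <= ber_target,
    found by geometric bisection. Model and bounds are assumptions."""
    lo, hi = 1.0, 1e12     # lo always feasible, hi always infeasible
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if q_func(math.sqrt(p_rx / (mid * n0))) <= ber_target:
            lo = mid       # BER met: try a faster rate
        else:
            hi = mid
    return lo

# Illustrative numbers: 1 uW received power, 1e-17 W/Hz noise density.
rb = max_bit_rate(p_rx=1e-6, n0=1e-17, ber_target=1e-9)
```

As turbulence-induced fading reduces `p_rx`, the feasible rate drops smoothly instead of forcing an immediate hard switch to the RF link, which is the utilization gain the paper targets.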
Conceptual Approach to Forming the Basic Code of Neo-Industrial Development of a Region
Directory of Open Access Journals (Sweden)
Elena Leonidovna Andreeva
2017-09-01
Full Text Available In this article, the authors propose the conceptual fundamentals of a "code approach" to regional neo-industrial development. The purpose of the research is to reveal the essence of the transition to a new type of industrial and economic relations through the prism of the "genetic codes" of the region. We consider these codes as a system of the "racial memory" of a territory, which determines the specificity and features of the realization of neo-industrialization. We substantiate the hypothesis that the "genetic codes" of the region influence the effectiveness of neo-industrialization. We define the participants, that is, the carriers of the codes, in the transformation of regional inheritance for stimulating the neo-industrial development of the region's economy. The subject matter of the research is the distinctive features of the functioning of the determinative codes of the region. Their content determines the socio-economic specificity of the region and the features of the innovative, informational, value-based and competence-based development of the territory. The determinative codes generate the dynamic codes of the region, understood as their derivatives. These have a high probability of occurrence, a higher speed of development and distribution, and internal forces that make possible the self-development of the region. The scientific contribution is the substantiation of the basic code of regional neo-industrial development. It represents the evolutionary accumulation of rapid changes in the region's innovative, informational, value-based and competence-based codes, stimulating the generation and implementation of new ideas by economic entities adapted to historical and cultural conditions. The article presents the code model of neo-industrial development of the region, described by formulas. We applied system analysis methods, historical and civilization approaches, evolutionary and
German nuclear codes revised: comparison with approaches used in other countries
International Nuclear Information System (INIS)
Raetzke, C.; Micklinghoff, M.
2005-01-01
The article deals with the plan of the German Federal Ministry for the Environment (BMU) to revise the German set of nuclear codes, and draws a comparison with approaches pursued in other countries in formulating and implementing new requirements imposed upon existing plants. A striking feature of the BMU project is the intention to have the codes reflect the state of the art in an entirely abstract way irrespective of existing plants. This implies new requirements imposed on plant design, among other things. However, the state authorities, which establish the licensing conditions for individual plants in concrete terms, will not be able to apply these new codes for legal reasons (protection of vested rights) to the extent in which they incorporate changes in safety philosophy. Also the procedure adopted has raised considerable concern. The processing time of two years is inordinately short, and participation of the public and of industry does not go beyond the strictly formal framework of general public participation. In the light of this absence of quality assurance, it would be surprising if this new set of codes did not suffer from considerable deficits in its contents. Other countries show that the BMU is embarking on an isolated approach in every respect. Elsewhere, backfitting requirements are developed carefully and over long periods of time; they are discussed in detail with the operators; costs and benefits are weighted, and the consequences are evaluated. These elements are in common to procedures in all countries, irrespective of very different steps in detail. (orig.)
Simplifying the parallelization of scientific codes by a function-centric approach in Python
International Nuclear Information System (INIS)
Nilsen, Jon K; Cai Xing; Langtangen, Hans Petter; Hoeyland, Bjoern
2010-01-01
The purpose of this paper is to show how existing scientific software can be parallelized using a separate thin layer of Python code where all parallelization-specific tasks are implemented. We provide specific examples of such a Python code layer, which can act as templates for parallelizing a wide set of serial scientific codes. The use of Python for parallelization is motivated by the fact that the language is well suited for reusing existing serial codes programmed in other languages. The extreme flexibility of Python with regard to handling functions makes it very easy to wrap up decomposed computational tasks of a serial scientific application as Python functions. Many parallelization-specific components can be implemented as generic Python functions, which may take as input those wrapped functions that perform concrete computational tasks. The overall programming effort needed by this parallelization approach is limited, and the resulting parallel Python scripts have a compact and clean structure. The usefulness of the parallelization approach is exemplified by three different classes of application in natural and social sciences.
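The function-centric layering described above can be sketched in a few lines. This is an illustrative reconstruction, not code from the paper: the serial kernel, the task decomposition, and the worker count are all invented for the example.

```python
# A minimal sketch of the "function-centric" idea: the serial work is wrapped
# as a plain Python function, and a thin generic layer handles all
# parallelization-specific tasks without knowing anything about the physics.
from multiprocessing import Pool

def serial_kernel(subdomain):
    # stand-in for a wrapped serial computation (e.g. a C or Fortran routine);
    # here it just sums squares over its assigned index range
    lo, hi = subdomain
    return sum(i * i for i in range(lo, hi))

def parallel_map(func, tasks, nprocs=4):
    # the generic layer: it only handles functions and task lists
    with Pool(nprocs) as pool:
        return pool.map(func, tasks)

if __name__ == "__main__":
    tasks = [(0, 250), (250, 500), (500, 750), (750, 1000)]
    partial = parallel_map(serial_kernel, tasks)
    print(sum(partial))  # equals the serial result serial_kernel((0, 1000))
```

The same `parallel_map` template could wrap any decomposed serial task, which is the reuse argument the paper makes for Python.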
Game Theoretical Approaches for Transport-Aware Channel Selection in Cognitive Radio Networks
Directory of Open Access Journals (Sweden)
Chen Shih-Ho
2010-01-01
Full Text Available Effectively sharing channels among secondary users (SUs) is one of the greatest challenges in cognitive radio networks (CRNs). In the past, many studies have proposed channel selection schemes at the physical or the MAC layer that allow SUs to respond swiftly to spectrum states. However, these schemes may not enhance performance, owing to the slow response of the transport-layer flow control mechanism. This paper presents a cross-layer design framework called Transport-Aware Channel Selection (TACS) to optimize transport throughput based on the states, such as RTT and congestion window size, of the TCP flow control mechanism. We formulate the TACS problem as two different game-theoretic approaches, the Selfish Spectrum Sharing Game (SSSG) and the Cooperative Spectrum Sharing Game (CSSG), and present novel distributed heuristic algorithms to optimize TCP throughput. Computer simulations show that SSSG and CSSG can double the SU throughput of the current MAC-based scheme when primary users (PUs) use their channel infrequently, and increase throughput by 12% to 100% when PUs are more active. The simulation results also show that CSSG performs up to 20% better than SSSG in terms of throughput.
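The selfish game variant (SSSG) can be illustrated with a toy best-response dynamic: each SU repeatedly moves to the channel giving it the best share of capacity. The capacities, the equal-sharing model, and the fixed round limit are assumptions made for this sketch, not details from the paper.

```python
# Toy best-response dynamics for selfish channel selection: each secondary
# user (SU) picks the channel maximizing its share of that channel's capacity.
def best_response(capacities, choices, user):
    # count how many OTHER users currently occupy each channel
    counts = {ch: sum(1 for u, c in enumerate(choices) if c == ch and u != user)
              for ch in range(len(capacities))}
    # share of capacity this user would get if it joined each channel
    return max(range(len(capacities)),
               key=lambda ch: capacities[ch] / (counts[ch] + 1))

def run_to_equilibrium(capacities, choices, rounds=20):
    for _ in range(rounds):
        stable = True
        for u in range(len(choices)):
            br = best_response(capacities, choices, u)
            if br != choices[u]:
                choices[u] = br
                stable = False
        if stable:  # no user wants to move: a Nash equilibrium
            break
    return choices

# three SUs, two channels with capacities 6 and 3: one SU moves to the
# smaller channel so that shares equalize at 3 each
print(run_to_equilibrium([6.0, 3.0], [0, 0, 0]))  # [1, 0, 0]
```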
Filter multiplexing by use of spatial Code Division Multiple Access approach.
Solomon, Jonathan; Zalevsky, Zeev; Mendlovic, David; Monreal, Javier Garcia
2003-02-10
The increasing popularity of optical communication has also brought a demand for a broader bandwidth. The trend, naturally, was to implement methods from traditional electronic communication. One of the most effective traditional methods is Code Division Multiple Access. In this research, we suggest the use of this approach for spatial coding applied to images. The approach is to multiplex several filters into one plane while keeping their mutual orthogonality. It is shown that if the filters are limited by their bandwidth, the output of all the filters can be sampled in the original image resolution and fully recovered through an all-optical setup. The theoretical analysis of such a setup is verified in an experimental demonstration.
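The CDMA principle the paper transfers from communications to spatial filtering can be shown with a tiny numeric sketch: several values share one "plane" after being spread by mutually orthogonal codes, and correlation recovers each one. The length-4 Walsh-Hadamard codes and the two filter values are illustrative choices, not the paper's optical setup.

```python
# Two mutually orthogonal Walsh-Hadamard codes of length 4
codes = [
    [1, 1, 1, 1],
    [1, -1, 1, -1],
]

def multiplex(values, codes):
    # superpose each value, spread by its own code, into one shared plane
    n = len(codes[0])
    return [sum(v * c[i] for v, c in zip(values, codes)) for i in range(n)]

def demultiplex(plane, code):
    # correlate with one code; orthogonality cancels the other channels
    return sum(p * c for p, c in zip(plane, code)) / len(code)

plane = multiplex([3.0, 5.0], codes)   # -> [8.0, -2.0, 8.0, -2.0]
print(demultiplex(plane, codes[0]))    # recovers 3.0
print(demultiplex(plane, codes[1]))    # recovers 5.0
```

In the paper's setting the "values" are band-limited filters and the multiplexed plane is sampled optically, but the orthogonality argument is the same.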
Analogies between colored Lévy noise and random channel approach to disordered kinetics
Vlad, Marcel O.; Velarde, Manuel G.; Ross, John
2004-02-01
We point out some interesting analogies between colored Lévy noise and the random channel approach to disordered kinetics. These analogies are due to the fact that the probability density of the Lévy noise source plays a similar role as the probability density of rate coefficients in disordered kinetics. Although the equations for the two approaches are not identical, the analogies can be used for deriving new, useful results for both problems. The random channel approach makes it possible to generalize the fractional Uhlenbeck-Ornstein processes (FUO) for space- and time-dependent colored noise. We describe the properties of colored noise in terms of characteristic functionals, which are evaluated by using a generalization of Huber's approach to complex relaxation [Phys. Rev. B 31, 6070 (1985)]. We start out by investigating the properties of symmetrical white noise and then define the Lévy colored noise in terms of a Langevin equation with a Lévy white noise source. We derive exact analytical expressions for the various characteristic functionals, which characterize the noise, and a functional fractional Fokker-Planck equation for the probability density functional of the noise at a given moment in time. Second, by making an analogy between the theory of colored noise and the random channel approach to disordered kinetics, we derive fractional equations for the evolution of the probability densities of the random rate coefficients in disordered kinetics. These equations serve as a basis for developing methods for the evaluation of the statistical properties of the random rate coefficients from experimental data. Special attention is paid to the analysis of systems for which the observed kinetic curves can be described by linear or nonlinear stretched exponential kinetics.
Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding
Directory of Open Access Journals (Sweden)
Xin Li
2014-06-01
Full Text Available Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose to explore a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches to tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamic updating of the template/dictionary and the combination of multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach.
International Nuclear Information System (INIS)
Heames, T.J.; Khatib-Rahbar, M.; Kelly, J.E.
1995-01-01
The hierarchy-by-interval (HBI) methodology was developed to determine an appropriate phenomena identification and ranking table for an independent peer review of severe-accident computer codes. The methodology is described, and the results of a specific code review are presented. Use of this systematic and structured approach ensures that important code models that need improvement are identified and prioritized, which allows code sponsors to direct limited resources more effectively in future code development. In addition, critical phenomenological areas that need more fundamental work, such as experimentation, are identified.
A comparison of approaches for finding minimum identifying codes on graphs
Horan, Victoria; Adachi, Steve; Bak, Stanley
2016-05-01
In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard and the computational complexity makes this research approach difficult using a standard brute force approach on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored and consist of a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and lastly using satisfiability modulo theory (SMT) and corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
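For small graphs, the brute-force baseline the paper works around can be written directly: a set C of vertices is an identifying code if every vertex's closed neighborhood intersects C in a nonempty, distinct set. The 4-cycle example below is an illustrative test case, not one of the paper's instances.

```python
from itertools import combinations

def closed_neighborhood(adj, v):
    return frozenset(adj[v]) | {v}

def is_identifying(adj, code):
    # every vertex must get a nonempty, unique identifying set N[v] & C
    seen = {}
    for v in adj:
        ident = closed_neighborhood(adj, v) & code
        if not ident or ident in seen.values():
            return False
        seen[v] = ident
    return True

def min_identifying_code(adj):
    # exhaustive search by increasing size: exponential, hence the paper's
    # interest in parallel, quantum-annealing, and SMT alternatives
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for cand in combinations(verts, k):
            if is_identifying(adj, frozenset(cand)):
                return set(cand)
    return None  # graphs with "twin" vertices admit no identifying code

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # the 4-cycle
print(sorted(min_identifying_code(adj)))  # [0, 1, 2]: minimum size is 3
```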
FENICIA: a generic plasma simulation code using a flux-independent field-aligned coordinate approach
International Nuclear Information System (INIS)
Hariri, Farah
2013-01-01
The primary thrust of this work is the development and implementation of a new approach to the problem of field-aligned coordinates in magnetized plasma turbulence simulations called the FCI approach (Flux-Coordinate Independent). The method exploits the elongated nature of micro-instability driven turbulence which typically has perpendicular scales on the order of a few ion gyro-radii, and parallel scales on the order of the machine size. Mathematically speaking, it relies on local transformations that align a suitable coordinate to the magnetic field to allow efficient computation of the parallel derivative. However, it does not rely on flux coordinates, which permits discretizing any given field on a regular grid in the natural coordinates such as (x, y, z) in the cylindrical limit. The new method has a number of advantages over methods constructed starting from flux coordinates, allowing for more flexible coding in a variety of situations including X-point configurations. In light of these findings, a plasma simulation code FENICIA has been developed based on the FCI approach with the ability to tackle a wide class of physical models. The code has been verified on several 3D test models. The accuracy of the approach is tested in particular with respect to the question of spurious radial transport. Tests on 3D models of the drift wave propagation and of the Ion Temperature Gradient (ITG) instability in cylindrical geometry in the linear regime demonstrate again the high quality of the numerical method. Finally, the FCI approach is shown to be able to deal with an X-point configuration such as one with a magnetic island with good convergence and conservation properties. (author) [fr
Hickman, Ellie
2015-01-01
The manner in which customers shop is evolving and there has been an increase in customers shopping online and in physical shops using a multi-channel approach (Hsiao, Yen & Li, 2012). Customers now shop using mobile phones, tablets and have access to shopping sources 24 hours a day. Multi-channel shopping is where customers use multiple channels such as online, in-store, catalogues or mobile devices to purchase products or services (Zhang et al., 2010). Research has shown that multi-channel ...
Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach
Danyali, Habibiollah; Mertins, Alfred
2011-01-01
In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS), and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653
Directory of Open Access Journals (Sweden)
Itamar Iliuk
2016-01-01
Full Text Available Thermal-hydraulic analysis of plate-type fuel is of great importance to the establishment of safety criteria and to the licensing of the future nuclear reactor intended to propel the Brazilian nuclear submarine. In this work, an analysis of a single plate-type fuel element surrounded by two water channels was performed using the RELAP5 thermal-hydraulic code. For the simulations, a plate-type fuel with a meat of uranium dioxide sandwiched between two Zircaloy-4 plates was proposed. A partial loss-of-flow accident was simulated to show the behavior of the model under this type of accident. The results show that the critical heat flux was detected in the central region along the axial direction of the plate when the right water channel was blocked.
Double-Layer Low-Density Parity-Check Codes over Multiple-Input Multiple-Output Channels
Directory of Open Access Journals (Sweden)
Yun Mao
2012-01-01
Full Text Available We introduce a double-layer code based on the combination of a low-density parity-check (LDPC) code with the multiple-input multiple-output (MIMO) system, where the decoding can be done in both inner-iteration and outer-iteration manners. The present code, called the low-density MIMO code (LDMC), has a double-layer structure: one layer defines subcodes that are embedded in each transmission vector, and another glues these subcodes together. It simultaneously supports inner iterations inside the LDPC decoder and outer iterations between detectors and decoders. It can also achieve the desired design rates due to the full rank of the deployed parity-check matrix. Simulations show that the LDMC performs favorably over MIMO systems.
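The parity-check decoding at the heart of any LDPC scheme can be shown on a toy scale. The (7,4) Hamming code below is a stand-in chosen because its syndrome directly locates a single error; a real LDPC inner iteration uses much larger sparse matrices and message passing, so this is only the basic building block, not the LDMC itself.

```python
# Parity-check matrix of the (7,4) Hamming code: each row is one check
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    # evaluate every parity check modulo 2
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def bit_flip_decode(word):
    # for this code, the syndrome read as a binary number gives the
    # 1-based position of a single flipped bit
    s = syndrome(word)
    pos = s[0] + 2 * s[1] + 4 * s[2]
    if pos:
        word = list(word)
        word[pos - 1] ^= 1
    return word

codeword = [1, 1, 0, 0, 1, 1, 0]   # satisfies all three checks
received = [1, 1, 0, 1, 1, 1, 0]   # bit 4 flipped by the channel
print(bit_flip_decode(received) == codeword)  # True
```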
One way quantum repeaters with quantum Reed-Solomon codes
Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang
2018-01-01
We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity of the quantum erasure channel of $d$-level systems for large dimension $d$. We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quan...
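The classical ingredient behind this construction is that a Reed-Solomon codeword, being the evaluation of a degree-(k-1) polynomial, survives any n-k erasures: any k remaining symbols determine the polynomial by Lagrange interpolation. A small sketch over the illustrative prime field GF(7):

```python
P = 7  # a small prime field, chosen only for illustration

def rs_encode(msg, n):
    # evaluate the message polynomial (coefficients in msg) at x = 0..n-1
    return [sum(m * pow(x, i, P) for i, m in enumerate(msg)) % P
            for x in range(n)]

def lagrange_eval(points, x):
    # evaluate the unique polynomial through the surviving (x_i, y_i) pairs;
    # pow(den, P-2, P) is the modular inverse by Fermat's little theorem
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

msg = [2, 3, 1]                 # k = 3 coefficients
code = rs_encode(msg, 6)        # n = 6 symbols: tolerates 3 erasures
survivors = [(0, code[0]), (2, code[2]), (5, code[5])]  # 3 symbols erased
restored = [lagrange_eval(survivors, x) for x in range(6)]
print(restored == code)  # True: the erased symbols are recovered
```

The quantum construction in the paper lifts this erasure-filling property to $d$-level quantum systems; the code above shows only the classical mechanism.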
NIMROD: A Customer Focused, Team Driven Approach for Fusion Code Development
Karandikar, H. M.; Schnack, D. D.
1996-11-01
NIMROD is a new code that will be used for the analysis of existing fusion experiments, prediction of operational limits, and design of future devices. An approach called Integrated Product Development (IPD) is being used for the development of NIMROD. It is a dramatic departure from existing practice in the fusion program. Code development is being done by a self-directed, multi-disciplinary, multi-institutional team that consists of experts in plasma theory, experiment, computational physics, and computer science. Customer representatives (ITER, US experiments) are an integral part of the team. The team is using techniques such as Quality Function Deployment (QFD), Pugh Concept Selection, Rapid Prototyping, and Risk Management, during the design phase of NIMROD. Extensive use is made of communication and internet technology to support collaborative work. Our experience with using these team techniques for such a complex software development project will be reported.
Itamar Iliuk; José Manoel Balthazar; Ângelo Marcelo Tusset; José Roberto Castilho Piqueira
2016-01-01
Thermal-hydraulic analysis of plate-type fuel is of great importance to the establishment of safety criteria and to the licensing of the future nuclear reactor intended to propel the Brazilian nuclear submarine. In this work, an analysis of a single plate-type fuel element surrounded by two water channels was performed using the RELAP5 thermal-hydraulic code. For the simulations, a plate-type fuel with a meat of uranium dioxide sandwiched between two Zircaloy-4 plates was proposed...
Zhou, Xiaolin; Zheng, Xiaowei; Zhang, Rong; Hanzo, Lajos
2013-07-01
In this paper, we design a novel Poisson photon-counting based iterative successive interference cancellation (SIC) scheme for transmission over free-space optical (FSO) channels in the presence of both multiple access interference (MAI) as well as Gamma-Gamma atmospheric turbulence fading, shot-noise and background light. Our simulation results demonstrate that the proposed scheme exhibits a strong MAI suppression capability. Importantly, an order of magnitude of BER improvements may be achieved compared to the conventional chip-level optical code-division multiple-access (OCDMA) photon-counting detector.
Hermite-Pade approximation approach to hydromagnetic flows in convergent-divergent channels
International Nuclear Information System (INIS)
Makinde, O.D.
2005-10-01
The problem of two-dimensional, steady, nonlinear flow of an incompressible conducting viscous fluid in convergent-divergent channels under the influence of an externally applied homogeneous magnetic field is studied using a special type of Hermite-Pade approximation approach. This semi-numerical scheme offers some advantages over solutions obtained by using traditional methods such as finite differences, spectral method, shooting method, etc. It reveals the analytical structure of the solution function and the important properties of overall flow structure including velocity field, flow reversal control and bifurcations are discussed. (author)
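The basic idea that Hermite-Padé schemes generalize is the ordinary Padé approximant: a rational function whose Taylor expansion matches a given series. A minimal sketch of the [1/1] case, using exact rational arithmetic; the closed-form solution below is standard for [1/1] and is not the paper's higher-order scheme.

```python
from fractions import Fraction as F

def pade_1_1(c0, c1, c2):
    # [1/1] Pade approximant (a0 + a1*x) / (1 + b1*x) of the series
    # c0 + c1*x + c2*x^2 + ..., matching it through order 2
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + c0 * b1
    return a0, a1, b1

def pade_eval(coeffs, x):
    a0, a1, b1 = coeffs
    return (a0 + a1 * x) / (1 + b1 * x)

# exp(x) = 1 + x + x^2/2 + ...  ->  (1 + x/2) / (1 - x/2)
coeffs = pade_1_1(F(1), F(1), F(1, 2))
print(pade_eval(coeffs, F(1)))  # 3, already close to e ~ 2.718 at x = 1
```

The appeal for flow problems, as the abstract notes, is that such approximants expose the analytic structure (poles, bifurcations) of the solution, which plain series truncations cannot.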
A hazard-independent approach for the standardised multi-channel dissemination of warning messages
Esbri Palomares, M. A.; Hammitzsch, M.; Lendholt, M.
2012-04-01
The tsunami disaster affecting the Indian Ocean region at Christmas 2004 demonstrated very clearly the shortcomings in tsunami detection, public warning processes, and intergovernmental warning message exchange in the Indian Ocean region. Early warning systems therefore require that the dissemination of early warning messages be executed in a way that ensures that delivery is timely and that the message content is understandable, usable, and accurate. To that end, diverse and multiple dissemination channels must be used to increase the chance of the messages reaching all affected persons in a hazard scenario. In addition, the use of internationally accepted standards for warning dissemination, such as the Common Alerting Protocol (CAP) and the Emergency Data Exchange Language (EDXL) Distribution Element specified by the Organization for the Advancement of Structured Information Standards (OASIS), increases the interoperability among different warning systems, thus enabling the concept of a system-of-systems as proposed by GEOSS. The project Distant Early Warning System (DEWS), co-funded by the European Commission under the 6th Framework Programme, aims at strengthening early warning capacities by building an innovative generation of interoperable tsunami early warning systems based on the above-mentioned concepts, following a Service-Oriented Architecture (SOA) approach. The project focuses on the downstream part of hazard information processing, where customized, user-tailored warning messages and alerts flow from the warning centre to the responsible authorities and/or the public with their different needs and responsibilities. The information logistics services within DEWS generate tailored EDXL-DE/CAP warning messages for each user that must receive the message, according to their preferences, e.g., settings for language, areas of interest, dissemination channels, etc. However, the significant difference in the implementation and
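A minimal CAP 1.2 alert of the kind such a system would tailor per recipient can be sketched with the standard library. The identifier, sender address, timestamp, and area below are invented placeholder values, and a production message would carry many more elements than this skeleton.

```python
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def build_cap_alert(identifier, event, area_desc, language="en-US"):
    # serialize with CAP 1.2 as the default namespace
    ET.register_namespace("", CAP_NS)
    alert = ET.Element(f"{{{CAP_NS}}}alert")
    for tag, text in [("identifier", identifier),
                      ("sender", "dews@example.org"),          # placeholder
                      ("sent", "2012-04-01T12:00:00+00:00"),   # placeholder
                      ("status", "Actual"), ("msgType", "Alert"),
                      ("scope", "Public")]:
        ET.SubElement(alert, f"{{{CAP_NS}}}{tag}").text = text
    info = ET.SubElement(alert, f"{{{CAP_NS}}}info")
    for tag, text in [("language", language), ("category", "Geo"),
                      ("event", event), ("urgency", "Immediate"),
                      ("severity", "Extreme"), ("certainty", "Observed")]:
        ET.SubElement(info, f"{{{CAP_NS}}}{tag}").text = text
    area = ET.SubElement(info, f"{{{CAP_NS}}}area")
    ET.SubElement(area, f"{{{CAP_NS}}}areaDesc").text = area_desc
    return ET.tostring(alert, encoding="unicode")

xml = build_cap_alert("DEWS-2012-0001", "Tsunami Warning", "Coastal District A")
```

Per-user tailoring then amounts to calling the builder with each recipient's language and area preferences before handing the message to a channel-specific dissemination service.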
Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions
Directory of Open Access Journals (Sweden)
Lei Ye
2009-01-01
Full Text Available This paper discusses the application of adaptive modulation and adaptive-rate turbo coding to orthogonal frequency-division multiplexing (OFDM) to increase throughput on the time- and frequency-selective channel. The adaptive turbo code scheme is based on a subband adaptive method and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed, and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10^-2) and low (10^-4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
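The subband-adaptive idea can be sketched as a threshold rule: each OFDM subband picks the densest of the five modulation schemes whose SNR requirement it meets. The threshold values below are invented for illustration and are not the ones used in the paper.

```python
# (name, bits per symbol, assumed minimum SNR in dB) for each scheme
SCHEMES = [
    ("BPSK", 1, 4.0),
    ("QPSK", 2, 7.0),
    ("8AMPM", 3, 11.0),
    ("16QAM", 4, 14.0),
    ("64QAM", 6, 20.0),
]

def pick_scheme(snr_db):
    # choose the densest scheme whose threshold the subband SNR meets;
    # below the BPSK threshold the subband carries no data
    chosen = None
    for name, bits, threshold in SCHEMES:
        if snr_db >= threshold:
            chosen = (name, bits)
    return chosen

def subband_throughput(snrs):
    # total bits per OFDM symbol across all subbands
    total = 0
    for s in snrs:
        choice = pick_scheme(s)
        if choice:
            total += choice[1]
    return total

print(pick_scheme(15.0))                           # ('16QAM', 4)
print(subband_throughput([3.0, 8.0, 15.0, 25.0]))  # 0 + 2 + 4 + 6 = 12
```

The paper's comparison is then between adapting the turbo code rate per subband versus once over all subbands, on top of this per-subband modulation choice.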
Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions
Directory of Open Access Journals (Sweden)
Burr Alister
2009-01-01
Full Text Available This paper discusses the application of adaptive modulation and adaptive-rate turbo coding to orthogonal frequency-division multiplexing (OFDM) to increase throughput on the time- and frequency-selective channel. The adaptive turbo code scheme is based on a subband adaptive method and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed, and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10^-2) and low (10^-4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
Directory of Open Access Journals (Sweden)
Yun Lee
Full Text Available Lignin is a polymer in secondary cell walls of plants that is known to have negative impacts on forage digestibility, pulping efficiency, and sugar release from cellulosic biomass. While targeted modifications of different lignin biosynthetic enzymes have permitted the generation of transgenic plants with desirable traits, such as improved digestibility or reduced recalcitrance to saccharification, some of the engineered plants exhibit monomer compositions that are clearly at odds with the expected outcomes when the biosynthetic pathway is perturbed. In Medicago, such discrepancies were partly reconciled by the recent finding that certain biosynthetic enzymes may be spatially organized into two independent channels for the synthesis of guaiacyl (G and syringyl (S lignin monomers. Nevertheless, the mechanistic details, as well as the biological function of these interactions, remain unclear. To decipher the working principles of this and similar control mechanisms, we propose and employ here a novel computational approach that permits an expedient and exhaustive assessment of hundreds of minimal designs that could arise in vivo. Interestingly, this comparative analysis not only helps distinguish two most parsimonious mechanisms of crosstalk between the two channels by formulating a targeted and readily testable hypothesis, but also suggests that the G lignin-specific channel is more important for proper functioning than the S lignin-specific channel. While the proposed strategy of analysis in this article is tightly focused on lignin synthesis, it is likely to be of similar utility in extracting unbiased information in a variety of situations, where the spatial organization of molecular components is critical for coordinating the flow of cellular information, and where initially various control designs seem equally valid.
International Nuclear Information System (INIS)
Avramova, M.; Ivanov, K.; Arenas, C.
2013-01-01
The principles that support risk-informed regulation are to be considered in an integrated decision-making process. Thus, any evaluation of licensing issues supported by a safety analysis would take into account both deterministic and probabilistic aspects of the problem. The deterministic aspects will be addressed using Best Estimate code calculations that consider the associated uncertainties, i.e., Best Estimate Plus Uncertainty (BEPU) calculations. In recent years there has been an increasing demand from nuclear research, industry, safety, and regulation for best-estimate predictions to be provided with their confidence bounds. This also applies to the sub-channel thermal-hydraulic codes, which are used to evaluate local safety parameters. The paper discusses the extension of BEPU methods to sub-channel thermal-hydraulic codes on the example of the Pennsylvania State University (PSU) version of COBRA-TF (CTF). The use of coupled codes supplemented with uncertainty analysis makes it possible to avoid unnecessary penalties due to incoherent approximations in traditional decoupled calculations, and to obtain a more accurate evaluation of margins with respect to licensing limits. This is important for licensing power upgrades, improved fuel assembly and control rod designs, higher burn-up, and other issues related to operating LWRs, as well as to the new Generation 3+ designs now being licensed (ESBWR, AP-1000, EPR-1600, etc.). The paper presents the application of Generalized Perturbation Theory (GPT) to generate uncertainties associated with the few-group assembly homogenized neutron cross-section data used as input in coupled reactor core calculations. This is followed by a discussion of uncertainty propagation methodologies being implemented by PSU in cooperation with the Technical University of Catalonia (UPC) for reactor core calculations and for comprehensive multi-physics simulations. (authors)
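The propagation idea behind BEPU can be shown in miniature: sample the uncertain inputs, run the model for each sample, and take a tolerance bound from the ordered outputs. The model, input ranges, and seed below are stand-ins; the one fact carried over from standard BEPU practice is that 59 samples give a first-order 95%/95% one-sided bound by Wilks' formula.

```python
import random

def model(k, q):
    # stand-in for a full coupled-code calculation: a peak value that
    # grows with power q and shrinks with a heat-transfer coefficient k
    return q / k

def bepu_bound(n=59, seed=1):
    # sample uncertain inputs uniformly over assumed ranges, run the model,
    # and take the sample maximum as the 95/95 upper bound (Wilks, 1st order)
    rng = random.Random(seed)
    outputs = [model(rng.uniform(0.9, 1.1), rng.uniform(95.0, 105.0))
               for _ in range(n)]
    return max(outputs)

bound = bepu_bound()
```

A real CTF application replaces `model` with the sub-channel code and the uniform ranges with the assessed input uncertainty distributions; the ordering argument is unchanged.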
Highly selective BSA imprinted polyacrylamide hydrogels facilitated by a metal-coding MIP approach.
El-Sharif, H F; Yapati, H; Kalluru, S; Reddy, S M
2015-12-01
We report the fabrication of metal-coded molecularly imprinted polymers (MIPs) using hydrogel-based protein imprinting techniques. A Co(II) complex was prepared using (E)-2-((2 hydrazide-(4-vinylbenzyl)hydrazono)methyl)phenol; along with iron(III) chloroprotoporphyrin (Hemin), vinylferrocene (VFc), zinc(II) protoporphyrin (ZnPP) and protoporphyrin (PP), these complexes were introduced into the MIPs as co-monomers for metal-coding of non-metalloprotein imprints. Results indicate a 66% enhancement for bovine serum albumin (BSA) protein binding capacities (Q, mg/g) via metal-ion/ligand exchange properties within the metal-coded MIPs. Specifically, Co(II)-complex-based MIPs exhibited 92 ± 1% specific binding with Q values of 5.7 ± 0.45 mg BSA/g polymer and imprinting factors (IF) of 14.8 ± 1.9 (MIP/non-imprinted (NIP) control). The selectivity of our Co(II)-coded BSA MIPs were also tested using bovine haemoglobin (BHb), lysozyme (Lyz), and trypsin (Tryp). By evaluating imprinting factors (K), each of the latter proteins was found to have lower affinities in comparison to cognate BSA template. The hydrogels were further characterised by thermal analysis and differential scanning calorimetry (DSC) to assess optimum polymer composition. The development of hydrogel-based molecularly imprinted polymer (HydroMIPs) technology for the memory imprinting of proteins and for protein biosensor development presents many possibilities, including uses in bio-sample clean-up or selective extraction, replacement of biological antibodies in immunoassays and biosensors for medicine and the environment. Biosensors for proteins and viruses are currently expensive to develop because they require the use of expensive antibodies. Because of their biomimicry capabilities (and their potential to act as synthetic antibodies), HydroMIPs potentially offer a route to the development of new low-cost biosensors. Herein, a metal ion-mediated imprinting approach was employed to metal-code our
Energy Technology Data Exchange (ETDEWEB)
Jeong, J. J.; Chung, B. D.; Lee, W.J
2005-02-01
The subchannel analysis capability of the MARS 3D module has been improved. In particular, the turbulent mixing and void drift models for flow mixing phenomena in rod bundles have been assessed using some well-known rod bundle test data. The subchannel analysis feature was then combined with the existing coupled 'system Thermal-Hydraulics (T/H) and 3D reactor kinetics' calculation capability of MARS. These features enable a coupled 'system T/H, 3D reactor kinetics, and hot channel' analysis and, thus, realistic simulations of hot channel behavior as well as global system T/H behavior. In this report, the MARS code features for the coupled analysis capability are described first. The code modifications relevant to these features are also given. Then, a coupled analysis of the Main Steam Line Break (MSLB) is carried out for demonstration. The results of the coupled calculations are reasonable and realistic, and show that these methods can be used to reduce the over-conservatism in conventional safety analysis.
Neben, Nicole; Lenarz, Thomas; Schuessler, Mark; Harpel, Theo; Buechner, Andreas
2013-05-01
Results of speech recognition in noise tests using a new research coding strategy designed to introduce the virtual channel effect showed no advantage over MP3000™. Although statistically significantly smaller just noticeable differences (JNDs) were obtained, the findings for pitch ranking proved to have little clinical impact. The aim of this study was to explore whether modifications to MP3000 that include sequential virtual channel stimulation would lead to further improvements in hearing, particularly for speech recognition in background noise and in competing-talker conditions, to compare results for pitch perception and melody recognition, and to informally collect subjective impressions on strategy preference. Nine experienced cochlear implant subjects were recruited for the prospective study. Two variants of the experimental strategy were compared to MP3000. The study design was a single-blinded ABCCBA cross-over trial paradigm with 3 weeks of take-home experience for each user condition. Comparing the results of pitch ranking, a significantly reduced JND was identified. No significant effect of coding strategy on speech understanding in noise or competing-talker materials was found. Melody recognition skills were the same under all user conditions.
A Mixed Methods Approach to Code Stakeholder Beliefs in Urban Water Governance
Bell, E. V.; Henry, A.; Pivo, G.
2017-12-01
What is a reliable way to code policies to represent belief systems? The Advocacy Coalition Framework posits that public policy may be viewed as manifestations of belief systems. Belief systems include both ontological beliefs about cause-and-effect relationships and policy effectiveness, as well as normative beliefs about appropriate policy instruments and the relative value of different outcomes. The idea that belief systems are embodied in public policy is important for urban water governance because it trains our focus on belief conflict; this can help us understand why many water-scarce cities do not adopt innovative technology despite available scientific information. To date, there has been very little research on systematic, rigorous methods to measure the belief system content of public policies. We address this by testing the relationship between beliefs and policy participation to develop an innovative coding framework. With a focus on urban water governance in Tucson, Arizona, we analyze grey literature on local water management. Mentioned policies are coded into a typology of common approaches identified in urban water governance literature, which include regulation, education, price and non-price incentives, green infrastructure and other types of technology. We then survey local water stakeholders about their perceptions of these policies. Urban water governance requires coordination of organizations from multiple sectors, and we cannot assume that belief development and policy participation occur in a vacuum. Thus, we use a generalized exponential random graph model to test the relationship between perceptions and policy participation in the Tucson water governance network. We measure policy perceptions for organizations by averaging across their respective, affiliated respondents and generating a belief distance matrix of coordinating network participants. Similarly, we generate a distance matrix of these actors based on the frequency of their
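The averaging and distance-matrix steps described above can be sketched in a few lines. The policy categories, scores, and organization labels below are hypothetical stand-ins, not Tucson survey data:

```python
import numpy as np

# Hypothetical survey data: rows are respondents, columns are policy types
# (e.g., regulation, education, price incentives, green infrastructure).
# Values are 1-5 perception scores; orgs maps respondents to organizations.
scores = np.array([
    [5, 3, 2, 4],   # respondent 1, org A
    [4, 3, 1, 5],   # respondent 2, org A
    [2, 5, 4, 1],   # respondent 3, org B
    [1, 4, 5, 2],   # respondent 4, org B
])
orgs = ["A", "A", "B", "B"]

# Average across each organization's affiliated respondents
labels = sorted(set(orgs))
org_means = np.array([scores[[i for i, o in enumerate(orgs) if o == lab]].mean(axis=0)
                      for lab in labels])

# Belief distance matrix: pairwise Euclidean distance between organizations
dist = np.linalg.norm(org_means[:, None, :] - org_means[None, :, :], axis=-1)
print(dist)
```

The resulting symmetric matrix can then be fed, alongside the policy-participation distance matrix, into a network model such as the generalized ERGM mentioned above.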
Directory of Open Access Journals (Sweden)
Peter eVuust
2014-10-01
Full Text Available Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of predictive coding, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain's Bayesian minimization of the error between the input to the brain and the brain's prior expectations. Third, we develop a predictive coding model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard ('rhythm') and the brain's anticipatory structuring of music ('meter'). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the predictive coding theory. We argue that musical rhythm exploits the brain's general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.
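The core loop of prediction-error minimization can be illustrated with a toy model. This is a hypothetical one-parameter sketch of Bayesian error minimization applied to an inter-onset interval, not the authors' neural model:

```python
import numpy as np

# Toy predictive-coding loop: the "brain" holds a prior expectation of the
# rhythm's inter-onset interval and repeatedly updates it to reduce the
# prediction error against noisy sensory input.
rng = np.random.default_rng(0)
true_interval = 0.5          # actual inter-onset interval of the rhythm (s)
expectation = 0.8            # prior expectation
learning_rate = 0.3          # weight given to prediction error

errors = []
for _ in range(50):
    heard = true_interval + rng.normal(0, 0.02)      # noisy sensory input
    prediction_error = heard - expectation
    expectation += learning_rate * prediction_error  # error-driven update
    errors.append(abs(prediction_error))
```

After a few dozen events the expectation converges toward the true interval and the prediction errors shrink; a syncopation would correspond to an input that transiently inflates the error again.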
International Nuclear Information System (INIS)
Chaudri, Khurrum Saleem; Su Yali; Chen Ronghua; Tian Wenxi; Su Guanghui; Qiu Suizheng
2012-01-01
Highlights: ► A tool is developed for coupled neutronics/thermal-hydraulic analysis for SCWR. ► For thermal-hydraulic analysis, a sub-channel code SACoS is developed and verified. ► Coupled analyses agree quite well with the reference calculations. ► Different choices of important parameters make a huge difference in design calculations. - Abstract: The Supercritical Water Reactor (SCWR) is one of the promising reactors on the list of fourth-generation nuclear reactors. High thermal efficiency and a low cost of electricity make it an attractive option in an era of growing energy demand. An almost sevenfold density variation of the coolant/moderator along the active height rules out the constant-density assumption used in design calculations for previous generations of reactors. Advances in computer technology make coupled analysis a practical option. Thermal-hydraulic calculations of supercritical water systems present extra challenges, as few computational tools are available for the task. This paper introduces a new sub-channel code called Sub-channel Analysis Code of SCWR (SACoS) and its application in coupled analyses of the High Performance Light Water Reactor (HPLWR). SACoS can compute the basic thermal-hydraulic parameters needed for design studies of a supercritical water reactor. Multiple heat transfer and pressure drop correlations are incorporated in the code according to the flow regime. It has the additional capability of calculating the thermal-hydraulic parameters of moderator flowing in the water box and between fuel assemblies under co-current or counter-current flow conditions. Using MCNP4c and SACoS, a coupled system has been developed for SCWR design analyses. The developed coupled system was verified by performing and comparing HPLWR calculations. The results were found to be in very good agreement. A significant difference between the results was seen when the Doppler feedback effect was included in
Low Complexity Approach for High Throughput Belief-Propagation based Decoding of LDPC Codes
Directory of Open Access Journals (Sweden)
BOT, A.
2013-11-01
Full Text Available The paper proposes a low complexity belief propagation (BP) based decoding algorithm for LDPC codes. In spite of the iterative nature of the decoding process, the proposed algorithm provides both reduced complexity and improved BER performance as compared with the classic min-sum (MS) algorithm generally used for hardware implementations. Linear approximations of the check-node update function are used in order to reduce the complexity of the BP algorithm. Considering this decoding approach, an FPGA-based hardware architecture is proposed for implementing the decoding algorithm, aiming to increase the decoder throughput. FPGA technology was chosen for the LDPC decoder implementation due to its parallel computation and reconfiguration capabilities. The obtained results show improvements in decoding throughput and BER performance compared with state-of-the-art approaches.
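The gap between the exact BP check-node update and its min-sum simplification can be seen on a single edge. The sketch below uses the normalized min-sum scaling as a stand-in for a linear correction; the paper's specific linear approximations may differ:

```python
import math

def check_node_exact(llrs):
    """Exact BP check-node update for one output edge:
    2*atanh(prod tanh(L/2)) over the other incoming LLRs."""
    p = 1.0
    for l in llrs:
        p *= math.tanh(l / 2.0)
    p = max(min(p, 1 - 1e-12), -(1 - 1e-12))   # clamp to avoid atanh(±1)
    return 2.0 * math.atanh(p)

def check_node_min_sum(llrs, scale=1.0):
    """Min-sum approximation: sign product times the minimum magnitude.
    scale < 1 gives the common 'normalized min-sum' linear correction."""
    sign = 1.0
    for l in llrs:
        if l < 0:
            sign = -sign
    return scale * sign * min(abs(l) for l in llrs)

incoming = [1.2, -0.8, 2.5]          # LLRs from the other variable nodes
exact = check_node_exact(incoming)
plain = check_node_min_sum(incoming)             # overestimates the magnitude
normalized = check_node_min_sum(incoming, 0.75)  # closer to the exact value
```

Plain min-sum always over-reports reliability; a simple multiplicative (linear) correction pulls it back toward the exact update at negligible hardware cost.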
Empirical Evaluation of Superposition Coded Multicasting for Scalable Video
Chun Pong Lau; Shihada, Basem; Pin-Han Ho
2013-01-01
In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However
Directory of Open Access Journals (Sweden)
Dan Tulpan
2013-01-01
Full Text Available This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and in the cyclic permutations applied to the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.
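The error-correction idea behind such DNA codes can be illustrated with the binary Hamming(7,4) code and a two-bits-per-nucleotide alphabet. This is only an illustrative sketch; HyDEn's actual quaternary codes, key, and permutation scheme differ:

```python
# Illustrative sketch: protect a character nibble with the binary
# Hamming(7,4) code, then write bits as DNA letters (two bits per
# nucleotide). Layout: [p1, p2, d1, p3, d2, d3, d4].

def hamming74_encode(d):                     # d: list of 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):                    # c: list of 7 received bits
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]           # re-checks p1
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]           # re-checks p2
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]           # re-checks p3
    syndrome = s1 + 2 * s2 + 4 * s3          # position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

# One nibble of 'A' (0x41): low nibble 0001
codeword = hamming74_encode([0, 0, 0, 1])
received = codeword[:]
received[3] ^= 1                             # single-bit channel error
assert hamming74_correct(received) == codeword

# Two bits per nucleotide; a 7-bit codeword pair fits in 7 nucleotides
BITS2NT = {(0, 0): "A", (0, 1): "C", (1, 0): "G", (1, 1): "T"}
```

Any single substitution in the stored strand is then correctable before the character is recovered.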
Alam, Tanvir
2016-11-28
Regulation and function of protein-coding genes are increasingly well understood, but no comparable evidence exists for non-coding RNA (ncRNA) genes, which appear to be more numerous than protein-coding genes. We developed a novel machine-learning model to distinguish promoters of long ncRNA (lncRNA) genes from those of protein-coding genes. This represents the first attempt to make this distinction based on properties of the associated gene promoters. From our analyses, several transcription factors (TFs), which are known to be regulated by lncRNAs, also emerged as potential global regulators of lncRNAs, suggesting that lncRNAs and TFs may participate in a bidirectional feedback regulatory network. Our results also raise the possibility that, due to the historical dependence on protein-coding genes in defining the chromatin states of active promoters, an adjustment of these chromatin signature profiles to incorporate lncRNAs is warranted in the future. Secondly, we developed a novel method to infer functions for lncRNA and microRNA (miRNA) transcripts based on their transcriptional regulatory networks in 119 tissues and 177 primary cells of human. This method for the first time combines information on the cell/tissue-specific expression of a transcript with the TFs and transcription co-factors (TcoFs) that control activation of that transcript. Transcripts were annotated using statistically enriched GO terms, pathways and diseases across cells/tissues, and an associated knowledgebase (FARNA) was developed. FARNA, having the most comprehensive function annotation of the considered ncRNAs across the widest spectrum of cells/tissues, has the potential to contribute to our understanding of ncRNA roles and their regulatory mechanisms in human. Thirdly, we developed a novel machine-learning model to identify the LD motif (a protein interaction motif) of paxillin, an ncRNA target that is involved in cell motility and cancer metastasis. Our recognition model identified new proteins not
Directory of Open Access Journals (Sweden)
Tinghua Zhang
2018-02-01
Full Text Available Coded Aperture Compressive Temporal Imaging (CACTI) can afford low-cost temporal super-resolution (SR), but limits are imposed by noise and compression ratio on reconstruction quality. To utilize inter-frame redundant information from multiple observations and sparsity in multi-transform domains, a robust reconstruction approach based on a maximum a posteriori probability and Markov random field (MAP-MRF) model for CACTI is proposed. The proposed approach adopts a weighted 3D neighbor system (WNS) and the coordinate descent method to perform joint estimation of model parameters, to achieve robust super-resolution reconstruction. The proposed multi-reconstruction algorithm considers both total variation (TV) and the ℓ2,1 norm in the wavelet domain to address the minimization problem for compressive sensing, and solves it using an accelerated generalized alternating projection algorithm. The weighting coefficient for different regularizations and frames is resolved by the motion characteristics of pixels. The proposed approach can provide high visual quality in the foreground and background of a scene simultaneously and enhance the fidelity of the reconstruction results. Simulation results have verified the efficacy of our new optimization framework and the proposed reconstruction approach.
The 2010 fib Model Code for Structural Concrete: A new approach to structural engineering
Walraven, J.C.; Bigaj-Van Vliet, A.
2011-01-01
The fib Model Code is a recommendation for the design of reinforced and prestressed concrete which is intended to be a guiding document for future codes. Model Codes have been published before, in 1978 and 1990. The draft for fib Model Code 2010 was published in May 2010. The most important new
Moderate Deviation Analysis for Classical Communication over Quantum Channels
Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco
2017-11-01
We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.
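Schematically, and in notation assumed here rather than quoted from the paper, the classical moderate deviation regime considers rate back-off sequences $a_n$ with

```latex
R_n = C - a_n, \qquad a_n \to 0, \qquad n\,a_n^2 \to \infty,
```

for which the optimal error probability $\varepsilon_n$ of the best length-$n$ codes satisfies

```latex
\lim_{n\to\infty} \frac{\ln \varepsilon_n}{n\,a_n^2} = -\frac{1}{2V},
```

where $C$ is the capacity and $V$ the channel dispersion. This regime interpolates between the central-limit (second-order) analysis, where $a_n \propto 1/\sqrt{n}$, and the large-deviation (error-exponent) analysis at fixed rates below capacity.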
Parton distribution function for quarks in an s-channel approach
Hautmann, F
2007-01-01
We use an s-channel picture of hard hadronic collisions to investigate the parton distribution function for quarks at small momentum fraction x, which corresponds to very high energy scattering. We study the renormalized quark distribution at one loop in this approach. In the high-energy picture, the quark distribution function is expressed in terms of a Wilson-line correlator that represents the cross section for a color dipole to scatter from the proton. We model this Wilson-line correlator in a saturation model. We relate this representation of the quark distribution function to the corresponding representation of the structure function F_T(x,Q^2) for deeply inelastic scattering.
Structured LDPC Codes over Integer Residue Rings
Directory of Open Access Journals (Sweden)
Marc A. Armand
2008-07-01
Full Text Available This paper presents a new class of low-density parity-check (LDPC) codes over the integer residue ring ℤ_{2^a}, represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and, more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive white Gaussian noise channel, can outperform their random counterparts of similar length and rate.
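The link between groups and Latin squares is that the multiplication table (Cayley table) of any finite group is a Latin square: every element appears exactly once in each row and column. The toy sketch below uses the unit group of ℤ_8 as a small stand-in; the paper's construction uses multiplicative groups of Galois rings:

```python
# Cayley table of the unit group of Z_8, {1, 3, 5, 7}, under
# multiplication mod 8. Being a group table, it is a Latin square.
units = [u for u in range(8) if u % 2 == 1]          # units of Z_8

table = [[(u * v) % 8 for v in units] for u in units]

def is_latin_square(t):
    symbols = set(t[0])
    rows_ok = all(set(row) == symbols for row in t)
    cols_ok = all(set(col) == symbols for col in zip(*t))
    return rows_ok and cols_ok

assert is_latin_square(table)
print(table)
```

Such Latin squares can then be used to place the nonzero entries of a structured parity-check matrix so that the resulting Tanner graph is regular.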
Structured LDPC Codes over Integer Residue Rings
Directory of Open Access Journals (Sweden)
Mo Elisa
2008-01-01
Full Text Available Abstract This paper presents a new class of low-density parity-check (LDPC) codes over ℤ_{2^a}, represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and, more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive white Gaussian noise channel, can outperform their random counterparts of similar length and rate.
A Psychoacoustic-Based Multiple Audio Object Coding Approach via Intra-Object Sparsity
Directory of Open Access Journals (Sweden)
Maoshen Jia
2017-12-01
Full Text Available Rendering spatial sound scenes via audio objects has become popular in recent years, since it can provide more flexibility for different auditory scenarios, such as 3D movies, spatial audio communication and virtual classrooms. To facilitate high-quality, bitrate-efficient distribution of spatial audio objects, an encoding scheme based on intra-object sparsity (approximate k-sparsity of the audio object itself) is proposed in this paper. A statistical analysis is presented to validate the notion that an audio object has stronger sparseness in the Modified Discrete Cosine Transform (MDCT) domain than in the Short Time Fourier Transform (STFT) domain. By exploiting intra-object sparsity in the MDCT domain, multiple simultaneously occurring audio objects are compressed into a mono downmix signal with side information. To ensure balanced perceptual quality across audio objects, a psychoacoustic-based time-frequency instants sorting algorithm and an energy-equalized Number of Preserved Time-Frequency Bins (NPTF) allocation strategy are proposed and employed in the underlying compression framework. The downmix signal can be further encoded via the Scalar Quantized Vector Huffman Coding (SQVH) technique at a desirable bitrate, and the side information is transmitted in a lossless manner. Both objective and subjective evaluations show that the proposed encoding scheme outperforms the Sparsity Analysis (SPA) approach and Spatial Audio Object Coding (SAOC) in cases where eight objects were jointly encoded.
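Transform-domain sparsity of this kind can be demonstrated with a toy comparison of the DCT-II (the non-lapped relative of the MDCT used in the paper) against the DFT. A cosine aligned with a DCT-II basis vector is exactly 1-sparse under the DCT but leaks across many DFT bins:

```python
import numpy as np

# Signal: the k0-th DCT-II basis vector itself.
N, k0 = 64, 5
n = np.arange(N)
x = np.cos(np.pi * (n + 0.5) * k0 / N)

# DCT-II analysis, written out directly with numpy (O(N^2) but dependency-free)
k = np.arange(N)
C = np.cos(np.pi * (n[None, :] + 0.5) * k[:, None] / N)
dct = C @ x

dft = np.fft.rfft(x)

# Count coefficients carrying non-negligible energy in each domain
dct_active = int(np.sum(np.abs(dct) > 1e-6 * np.abs(dct).max()))
dft_active = int(np.sum(np.abs(dft) > 1e-6 * np.abs(dft).max()))
print(dct_active, dft_active)                # DCT is far sparser here
```

The effect mirrors the paper's observation: for many audio frames, far fewer MDCT coefficients than STFT bins are needed to preserve the object's energy, which is what makes the downmix-plus-side-information scheme bitrate-efficient.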
Emura, Fabian; Gralnek, Ian; Baron, Todd H
2013-01-01
Despite extensive worldwide use of standard esophagogastroduodenoscopy (EGD) examinations, gastric cancer (GC) remains one of the most common forms of cancer and ranks as the most common malignant tumor in East Asia, Eastern Europe and parts of Latin America. Current limitations of non-systematic examination during standard EGD could be at least partially responsible for the low incidence of early GC diagnosis in countries with a high prevalence of the disease. Originally proposed by Emura et al., systematic alphanumeric-coded endoscopy (SACE) is a novel method that facilitates complete examination of the upper GI tract based on sequential, systematic, overlapping photo-documentation using an endoluminal alphanumeric-coded nomenclature comprising eight regions and 28 areas covering the entire upper GI surface. For precise localization of normal or abnormal areas, SACE incorporates a simple coordinate system based on the identification of certain natural axes, walls, curvatures and anatomical endoluminal landmarks. The effectiveness of SACE was recently demonstrated in a screening study that diagnosed early GC at a frequency of 0.30% (2/650) in healthy, average-risk volunteer subjects. Such a novel approach, if uniformly implemented worldwide, could significantly change the way we practice upper endoscopy in our lifetimes.
A marketing-finance approach linking contracts in agricultural channels to shareholder value
Pennings, J.M.E.; Wansink, B.; Hoffmann, A.O.I.
2011-01-01
A conceptual marketing-finance framework is proposed which links channel contracting in agriculture and the use of financial facilitating services (e.g., financial derivatives) to (shareholder) value creation. The framework complements existing literature by explicitly including channel contract
LHC-GCS a model-driven approach for automatic PLC and SCADA code generation
Thomas, Geraldine; Barillère, Renaud; Cabaret, Sebastien; Kulman, Nikolay; Pons, Xavier; Rochez, Jacques
2005-01-01
The LHC experiments’ Gas Control System (LHC GCS) project [1] aims to provide the four LHC experiments (ALICE, ATLAS, CMS and LHCb) with control for their 23 gas systems. To ease the production and maintenance of 23 control systems, a model-driven approach has been adopted to generate automatically the code for the Programmable Logic Controllers (PLCs) and for the Supervision Control And Data Acquisition (SCADA) systems. The first milestones of the project have been achieved. The LHC GCS framework [4] and the generation tools have been produced. A first control application has actually been generated and is in production, and a second is in preparation. This paper describes the principle and the architecture of the model-driven solution. It will in particular detail how the model-driven solution fits with the LHC GCS framework and with the UNICOS [5] data-driven tools.
Modified linear predictive coding approach for moving target tracking by Doppler radar
Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao
2016-07-01
Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on time-frequency analysis of the received echo, the proposed approach first estimates the statistical parameters of the noise in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which helps improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length intelligently. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments are conducted to illustrate the validity and performance of the proposed techniques.
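The core LPC step, fitting a linear predictor to the observed samples and running it forward to extend the record, can be sketched as follows. The adaptive extension-length selection and error correction of the paper are omitted; the tone and predictor order here are hypothetical:

```python
import numpy as np

def lpc_fit(x, p):
    """Fit an order-p linear predictor x[n] ~ sum_i a_i * x[n-i] by least squares."""
    A = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def lpc_extend(x, coeffs, n_extra):
    """Extend the record by running the predictor forward n_extra steps."""
    p = len(coeffs)
    out = list(x)
    for _ in range(n_extra):
        out.append(float(np.dot(coeffs, out[-1 : -p - 1 : -1])))
    return np.array(out)

# A noiseless Doppler-like tone obeys a 2-tap recursion exactly:
# x[n] = 2 cos(w) x[n-1] - x[n-2]
w = 0.3
x = np.cos(w * np.arange(100))
a = lpc_fit(x, 2)
ext = lpc_extend(x, a, 20)
```

Because the longer (partly predicted) record sharpens the spectral peak, the Doppler frequency, and hence the target position, can be localized more accurately than from the raw echo alone.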
Coded Network Function Virtualization
DEFF Research Database (Denmark)
Al-Shuwaili, A.; Simone, O.; Kliewer, J.
2016-01-01
Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. In contrast, this letter proposes to leverage channel coding in order to enhance the robustness of NFV to hardware failure. The proposed approach targets the network function of uplink channel decoding, and builds on the algebraic structure of the encoded data frames in order to perform in-network coding on the signals to be processed at different servers…
Morbi, Zulfikar; Ho, D. B.; Ren, H.-W.; Le, Han Q.; Pei, Shin Shem
2002-09-01
Demonstration of short-range multispectral remote sensing, using 3-4 μm mid-infrared Sb semiconductor lasers based on a code-division multiplexing (CDM) architecture, is described. The system is built on a principle similar to intensity-modulated/direct-detection optical CDMA for communications, but adapted for sensing with synchronous, orthogonal codes to distinguish different wavelength channels with zero interchannel correlation. The concept is scalable to any number of channels, and experiments with a two-wavelength system are conducted. The CDM signal processing yielded a white-Gaussian-like system noise that is found to be near the theoretical level limited by the detector's fundamental intrinsic noise. With sub-mW transmitter average power, the system was able to detect an open-air acetylene gas leak of 10⁻² STP ft³/hr from 10 m away with time-varying, random, noncooperative backscatters. A similar experiment detected and positively distinguished hydrocarbon oil contaminants on water from bio-organic oils and detergents. Projections for more advanced systems suggest a multi-kilometer range capability for watt-level transmitters, and hundreds of wavelength channels can also be accommodated for active hyperspectral remote sensing applications.
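The zero-interchannel-correlation property of synchronous orthogonal codes can be shown with a toy two-wavelength link keyed by Walsh (Hadamard) codes. Bipolar codes are used for clarity; a practical intensity-modulated/direct-detection system would use a unipolar variant:

```python
import numpy as np

# Two synchronous Walsh codes (rows of a 4x4 Hadamard matrix): orthogonal,
# so the interchannel correlation is exactly zero and a correlator
# recovers each wavelength channel's intensity independently.
walsh = np.array([[1,  1, 1,  1],
                  [1, -1, 1, -1]])
assert walsh[0] @ walsh[1] == 0              # zero cross-correlation

intensities = np.array([0.7, 0.2])           # backscatter on each wavelength

# Received signal: sum of the code-modulated channels (noise omitted)
rx = intensities[0] * walsh[0] + intensities[1] * walsh[1]

# Correlate against each code and normalize by the code length
recovered = walsh @ rx / walsh.shape[1]
print(recovered)                             # -> [0.7, 0.2]
```

Because Hadamard matrices exist at every power-of-two order, the same despreading step scales to many simultaneous wavelength channels, which is what makes the architecture attractive for hyperspectral sensing.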
Amevor, Peter Kwame
2010-01-01
The main thrust of this work has been to explore the norms of the 1983 new Code of Canon Law on new approaches to pastoral care for marriage preparation and certain mechanisms of marriage in the traditional African society as motivation for assistance in the formulation of appropriate guidelines and programmes for marriage preparation in the Ho diocese. The study realised that the old Code (1917) became outmoded because in the face of numerous problems confronting the marriage institution the...
Sinclair, Kristofer D.
2009-12-01
Ruptures of the anterior cruciate ligament (ACL) are the most frequent of injuries to the knee due to its role in preventing anterior translation of the tibia. It is estimated that as many as 200,000 Americans per year will suffer from a ruptured ACL, resulting in management costs on the order of 5 billion dollars. Without treatment these patients are unable to return to normal activity, as a consequence of the joint instability found within the ACL deficient knee. Over the last thirty years, a variety of non-degradable, synthetic fibers have been evaluated for their use in ACL reconstruction; however, a widely accepted prosthesis has been unattainable due to differences in mechanical properties of the synthetic graft relative to the native tissue. Tissue engineering is an interdisciplinary field charged with the task of developing therapeutic solutions for tissue and organ failure by enhancing the natural wound healing process through the use of cellular transplants, biomaterials, and the delivery of bioactive molecules. The capillary channel polymer (CC-P) fibers used in this research were fabricated by melt extrusion from polyethylene terephthalate and polybutylene terephthalate. These fibers possess aligned micrometer scale surface channels that may serve as physical templates for tissue growth and regeneration. This inherent surface topography offers a unique and industrially viable approach for cellular contact guidance on three dimensional constructs. In this fundamental research the ability of these fiber channels to support the adhesion, alignment, and organization of fibroblasts was demonstrated and found to be superior to round fiber controls. The results demonstrated greater uniformity of seeding and accelerated formation of multi-layered three-dimensional biomass for the CC-P fibers relative to those with a circular cross-section. Furthermore, the CC-P geometry induced nuclear elongation consistent with that observed in native ACL tissue. Through the
Directory of Open Access Journals (Sweden)
H. Prashantha Kumar
2011-09-01
Full Text Available Low-density parity-check (LDPC) codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical Shannon limit for a memoryless channel. LDPC codes are finding increasing use in applications like LTE networks, digital television, high-density data storage systems, deep space communication systems, etc. Several algebraic and combinatorial methods are available for constructing LDPC codes. In this paper we discuss a novel low-complexity algebraic method for constructing regular LDPC-like codes derived from full rank codes. We demonstrate that by employing these codes over AWGN channels, coding gains in excess of 2 dB over un-coded systems can be realized when soft iterative decoding using a parity check tree is employed.
Lam, Raymond; Kruger, Estie; Tennant, Marc
2014-12-01
One disadvantage of the remarkable achievements in dentistry is that treatment options have never been more varied or confusing. This has made the concept of Evidence-Based Dentistry more applicable to modern dental practice. Despite merit in the concept, whereby clinical decisions are guided by scientific evidence, there are problems with establishing a scientific base. This is nowhere more challenging than in modern dentistry, where the gap between rapidly developing products/procedures and their evidence base is widening. Furthermore, the burden of oral disease remains high at the population level. These problems have prompted new approaches to enhancing research. The aim of this paper is to outline how a modified approach to dental coding may benefit clinical and population-level research. Using publicly accessible data obtained from the Australian Chronic Disease Dental Scheme and item codes contained within the Australian Schedule of Dental Services and Glossary, a suggested approach to dental informatics is illustrated. A selection of item codes is expanded with the addition of suffixes. These suffixes provide circumstantial information that will assist in assessing clinical outcomes such as success rates and prognosis. The use of item codes in administering the CDDS yielded a large database of item codes. These codes are amenable to dental informatics, which has been shown to enhance research at both the clinical and population level. This is a cost-effective method to supplement existing research methods. Copyright © 2014 Elsevier Inc. All rights reserved.
Cui, Laizhong; Lu, Nan; Chen, Fu
2014-01-01
Most large-scale peer-to-peer (P2P) live streaming systems use a mesh to organize peers and leverage pull scheduling to transmit packets, providing robustness in dynamic environments. Pull scheduling, however, brings large packet delay. Network coding makes push scheduling feasible in mesh P2P live streaming and improves its efficiency, but it may also introduce extra delays and coding computational overhead. To improve packet delay, streaming quality, and coding overhead, in this paper we propose a QoS-driven push scheduling approach. The main contributions of this paper are as follows: (i) we introduce a new network coding method to increase the content diversity and reduce the complexity of scheduling; (ii) we formulate push scheduling as an optimization problem and transform it to a min-cost flow problem, solving it in polynomial time; (iii) we propose a push scheduling algorithm to reduce the coding overhead and conduct extensive experiments to validate the effectiveness of our approach. Compared with previous approaches, the simulation results demonstrate that the packet delay, continuity index, and coding ratio of our system can be significantly improved, especially in dynamic environments. PMID:25114968
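The basic mechanism that lets coded pushes work without per-packet coordination can be sketched with random linear network coding over GF(2): the sender pushes random XOR combinations of the original blocks, and a peer decodes by Gaussian elimination once it has enough independent combinations. This is a generic sketch, not the paper's specific coding method or its min-cost-flow scheduler:

```python
import random

random.seed(1)
packets = [0b1011, 0b0110, 0b1100]           # original data blocks
k = len(packets)

def encode():
    """One coded packet: (random nonzero GF(2) coefficient vector, XOR payload)."""
    while True:
        coeffs = [random.randint(0, 1) for _ in range(k)]
        if any(coeffs):
            break
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def decode(received):
    """Gauss-Jordan elimination over GF(2); returns None if rank-deficient."""
    rows = [(c[:], p) for c, p in received]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           rows[r][1] ^ rows[col][1])
    return [rows[i][1] for i in range(k)]

coded, result = [], None
while result is None:                        # collect until full rank
    coded.append(encode())
    if len(coded) >= k:
        result = decode(coded)
assert result == packets
```

Because any sufficiently large set of random combinations decodes, a pushing peer need not know which exact packets its neighbor is missing, which is what removes the pull round-trip and its delay.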
International Nuclear Information System (INIS)
Marino, Edgardo J.L.
1999-01-01
Using the input data language of the ICARE2 V2 Mod.3 code, the fuel element and coolant channel assembly of the CNA I type was described. This input data was used to analyze the system behavior and determine the degradation produced during a hypothetical accidental transient at CNA I. The boundary conditions were determined through a previous calculation with the RELAP5/MOD 3.2 code. The results showed characteristic degradation phenomena. The temperature of the bundle components increases rapidly after 6.11 h in the first case and 5.28 h in the second, due to the energy released by cladding oxidation; this was correlated with the instantaneous hydrogen production and energy contribution. The cumulated hydrogen production was estimated as 0.15 kg in the first case and ∼ 5 times greater in the second. Fission product release from the gap due to cladding rupture took place from 6.25 h in the first case and 5.65 h in the second. Relocation started after 6.81 h in the first case and 5.68 h in the second, because the cladding dislocation condition was reached. UO2 dissolution by molten Zircaloy was observed at different levels in the calculation domain. (author)
A nuclear reload optimization approach using a real coded genetic algorithm with random keys
International Nuclear Information System (INIS)
Lima, Alan M.M. de; Schirru, Roberto; Medeiros, Jose A.C.C.
2009-01-01
The fuel reload of a Pressurized Water Reactor is made whenever the burn up of the fuel assemblies in the nucleus of the reactor reaches a value such that it is no longer possible to maintain a critical reactor producing energy at nominal power. The fuel reload optimization problem consists of determining the positioning of the fuel assemblies within the nucleus of the reactor so as to minimize the ratio of fuel assembly cost to maximum burn up, while satisfying symmetry and safety restrictions. The difficulty of the problem grows exponentially with the number of fuel assemblies in the nucleus of the reactor. For decades the problem was solved manually by experts who used their knowledge and experience to build configurations of the reactor nucleus, testing them to verify that the safety restrictions of the plant were satisfied. To reduce this burden, several optimization techniques have been used, including the binary-coded genetic algorithm. In this work we show the use of a real-valued coded genetic algorithm, with different recombination methods, together with a transformation mechanism called random keys that converts the real values of the genes of each chromosome into a combination of discrete fuel assemblies for evaluation of the reload. Four recombination methods were tested: discrete recombination, intermediate recombination, linear recombination and extended linear recombination. For each of the four recombination methods, 10 tests using different seeds for the random number generator were conducted, totaling 40 tests. The results of applying the genetic algorithm with this real-number formulation are shown for the nuclear reload problem of the Angra 1 PWR plant. Since the best results in the literature for this problem were found by the parallel PSO, we use it for comparison.
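The random-keys mechanism is what lets real-valued recombination operate on an inherently discrete problem: ranking a chromosome's genes yields a permutation of assembly positions, so any blend of two parents still decodes to a valid loading pattern. A minimal sketch (the fitness and problem size are hypothetical, not a core physics model):

```python
import random

random.seed(42)
N_ASSEMBLIES = 8

def decode(chromosome):
    """Random keys: sort indices by gene value -> permutation of positions."""
    return sorted(range(len(chromosome)), key=lambda i: chromosome[i])

def intermediate_recombination(a, b, alpha=0.5):
    """One of the tested real-valued operators: gene-wise blend of parents."""
    return [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]

parent1 = [random.random() for _ in range(N_ASSEMBLIES)]
parent2 = [random.random() for _ in range(N_ASSEMBLIES)]
child = intermediate_recombination(parent1, parent2)

perm = decode(child)
assert sorted(perm) == list(range(N_ASSEMBLIES))   # always a valid pattern
```

No repair step is ever needed: unlike direct permutation encodings, every real vector produced by any of the four recombination operators decodes to a feasible assignment.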
An approach to improving the structure of error-handling code in the linux kernel
DEFF Research Database (Denmark)
Saha, Suman; Lawall, Julia; Muller, Gilles
2011-01-01
The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error handling code at the end of each function, where… an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, on which it reorganizes the error handling code of over 1800 functions, in about 25 minutes.
Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732
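The rate variation described above (fixing the message length and sweeping the parity length) reduces to a simple relation for a systematic code; a minimal sketch, with block sizes chosen arbitrarily for illustration:

```python
def ldpc_code_rate(k, m):
    """Rate of a systematic LDPC code with k message bits and m parity bits.
    Lower rate (more parity) gives stronger error correction, as the paper
    observes for small targeted bit-error probabilities."""
    return k / (k + m)

# Sweeping the parity length m at fixed message length k trades
# throughput for error-correction strength.
k = 512
rates = {m: ldpc_code_rate(k, m) for m in (128, 256, 512)}
assert rates[512] == 0.5  # equal message and parity bits give rate 1/2
```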
Bin, Yang; De Cheng, Wang; Wei, Wang Zong; Hui, Li
2017-08-01
This study aimed to compare the efficacy of the muscle gap approach under a minimally invasive channel surgical technique with the traditional median approach. In the Orthopedics Department of the Traditional Chinese and Western Medicine Hospital, Tongzhou District, Beijing, 68 cases of lumbar spinal canal stenosis underwent surgery using either the muscle gap approach under a minimally invasive channel technique or the median approach between September 2013 and February 2016. Both approaches adopted lumbar spinal canal decompression, intervertebral disk removal, cage implantation, and pedicle screw fixation. The operation time, bleeding volume, postoperative drainage volume, and preoperative and postoperative visual analog scale (VAS) and Japanese Orthopedic Association (JOA) scores were compared between the 2 groups. All patients were followed up for more than 1 year. No significant difference between the 2 groups was found with respect to age, gender, or surgical segments. No difference was noted in the operation time, intraoperative bleeding volume, preoperative and 1-month postoperative VAS scores, or preoperative, 1-month and 6-month postoperative JOA scores between the 2 groups (P > .05). The amount of postoperative wound drainage (260.90 ± 160 mL vs 447.80 ± 183.60 mL, P < .05) and the VAS score 6 months after the operation were significantly lower in the muscle gap approach group than in the median approach group (P < .05). In the muscle gap approach under a minimally invasive channel group, the average drainage volume was reduced by 187 mL, and the average VAS score 6 months after the operation was reduced by an average of 0.48. The muscle gap approach under a minimally invasive channel technique is a feasible method to treat long segmental lumbar spinal canal stenosis. It retains the integrity of the posterior spine complex to the greatest extent, so as to reduce adjacent spinal segment degeneration and soft tissue trauma. Satisfactory short-term and long-term clinical results were obtained.
International Nuclear Information System (INIS)
Kornienko, Y.
2000-01-01
The purpose is to describe an approach for constructing generalized closure relationships for local and subchannel wall friction, heat and mass transfer coefficients, taking into account not only axial and transversal parameters but also azimuthal substance-transfer effects. These constitutive relations, which are primary for the description of one- and two-phase one-dimensional flow models, can be derived from the initial 3-D drift-flux formulation. The approach is based on the Reynolds flux, the boundary layer, and a generalized coefficient of substance transfer. A further task is to illustrate the validity of the 'conformity principle' for the limiting cases. The method proposed is based on similarity theory, a boundary layer model, and a phenomenological description of the regularities of substance transfer (momentum, heat, and mass), as well as on an adequate simulation of the forms of flow structure by a generalized approach to building analytical relationships (integral in form, semi-empirical in content) for wall friction, heat and mass transfer coefficients. (author)
Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach
International Nuclear Information System (INIS)
2014-12-01
In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the
Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach
Energy Technology Data Exchange (ETDEWEB)
NONE
2014-12-15
In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the
Ferlaino, Michael; Rogers, Mark F; Shihab, Hashem A; Mort, Matthew; Cooper, David N; Gaunt, Tom R; Campbell, Colin
2017-10-06
Small insertions and deletions (indels) have a significant influence in human disease and, in terms of frequency, they are second only to single nucleotide variants as pathogenic mutations. As the majority of mutations associated with complex traits are located outside the exome, it is crucial to investigate the potential pathogenic impact of indels in non-coding regions of the human genome. We present FATHMM-indel, an integrative approach to predict the functional effect, pathogenic or neutral, of indels in non-coding regions of the human genome. Our method exploits various genomic annotations in addition to sequence data. When validated on benchmark data, FATHMM-indel significantly outperforms CADD and GAVIN, state of the art models in assessing the pathogenic impact of non-coding variants. FATHMM-indel is available via a web server at indels.biocompute.org.uk. FATHMM-indel can accurately predict the functional impact and prioritise small indels throughout the whole non-coding genome.
International Nuclear Information System (INIS)
Chun, Moon-Hyun; Jeong, Eun-Soo
1983-01-01
A new computer code entitled KREWET has been developed in an effort to improve the accuracy and applicability of the existing reflood heat transfer simulation computer code. Sample calculations of temperature histories and heat transfer coefficients are made using the KREWET code, and the results are compared with the predictions of REFLUX, QUEN1D, and the PWR-FLECHT data for various conditions. These show favourable agreement in terms of clad temperature versus time. For high flooding rates (5-15 cm/s) and high pressure (∼413 kPa), reflood predictions are reasonably well predicted by the KREWET code as well as by the other codes. For low flooding rates (less than ∼4 cm/s) and low pressure (∼138 kPa), predictions show considerable error in evaluating the rewet position versus time. This observation is common to all the codes examined in the present work.
International Nuclear Information System (INIS)
Chun, M.-H.; Jeong, E.-S.
1983-01-01
A new computer code entitled KREWET has been developed in an effort to improve the accuracy and applicability of the existing reflood heat transfer simulation computer code. Sample calculations of temperature histories and heat transfer coefficients are made using the KREWET code, and the results are compared with the predictions of REFLUX, QUENID, and the PWR-FLECHT data for various conditions. These show favorable agreement in terms of clad temperature versus time. For high flooding rates (5-15 cm/s) and high pressure (approx. 413 kPa), reflood predictions are reasonably well predicted by the KREWET code as well as by the other codes. For low flooding rates (less than approx. 4 cm/s) and low pressure (approx. 138 kPa), predictions show considerable error in evaluating the rewet position versus time. This observation is common to all the codes examined in the present work.
Training sequence design for MIMO channels : An application-oriented approach
Katselis, D.; Rojas, C.R.; Bengtsson, M.; Bjornson, E.; Bombois, X.; Shariati, N.; Jansson, M.; Hjalmarsson, H.
2013-01-01
In this paper, the problem of training optimization for estimating a multiple-input multiple-output (MIMO) flat fading channel in the presence of spatially and temporally correlated Gaussian noise is studied in an application-oriented setup. So far, the problem of MIMO channel estimation has mostly
Pennings, J.M.E.
2004-01-01
Channel contract relations are dynamic. In this paper, it is argued that one of the drivers for this dynamism is a firm's strive for shareholder value. Using channel contract relationships as market-based assets, firms are managing a portfolio of spot and forward contract relationships. By
A Game Theoretic Approach to Minimize the Completion Time of Network Coded Cooperative Data Exchange
Douik, Ahmed S.
2014-05-11
In this paper, we introduce a game theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in a decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. Through extensive simulations, our approach is compared to the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.
A Game Theoretic Approach to Minimize the Completion Time of Network Coded Cooperative Data Exchange
Douik, Ahmed S.; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim; Sorour, Sameh; Tembine, Hamidou
2014-01-01
In this paper, we introduce a game theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in a decentralized wireless network. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session by self-interested players in a non-cooperative potential game. The utility function is designed such that increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. Through extensive simulations, our approach is compared to the best performance that could be found in the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.
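The convergence argument behind a non-cooperative potential game can be illustrated with best-response dynamics in a toy congestion game. This sketch is not the paper's IDNC formulation: the players, the two channels and the linear cost are assumptions chosen to keep the example small. The point is the defining property of an exact potential game: every unilateral improvement strictly decreases a shared potential, so selfish updates converge to a Nash equilibrium.

```python
def channel_cost(load):
    """Linear congestion cost (an assumption of this toy model)."""
    return load

def my_cost(choices, i, c):
    """Cost player i pays on channel c, with all other players fixed."""
    load = sum(1 for j, cj in enumerate(choices) if j != i and cj == c) + 1
    return channel_cost(load)

def potential(choices, n_channels=2):
    """Rosenthal potential: unilateral improvements strictly decrease it,
    which guarantees convergence of best-response dynamics."""
    return sum(sum(channel_cost(k) for k in range(1, choices.count(c) + 1))
               for c in range(n_channels))

def best_response_dynamics(choices, n_channels=2, max_rounds=100):
    choices = list(choices)
    for _ in range(max_rounds):
        improved = False
        for i in range(len(choices)):
            best = min(range(n_channels), key=lambda c: my_cost(choices, i, c))
            if my_cost(choices, i, best) < my_cost(choices, i, choices[i]):
                choices[i] = best
                improved = True
        if not improved:
            break
    return choices

start = [0, 0, 0, 0]               # all four clients crowd channel 0
eq = best_response_dynamics(start)
assert potential(eq) < potential(start)
assert sorted(eq) == [0, 0, 1, 1]  # loads balance at the Nash equilibrium
```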
Design LDPC Codes without Cycles of Length 4 and 6
Directory of Open Access Journals (Sweden)
Kiseon Kim
2008-04-01
Full Text Available We present an approach for constructing LDPC codes without cycles of length 4 and 6. First, we design 3 submatrices with different shifting functions given by the proposed schemes; then we combine them into the matrix specified by the proposed approach; finally, we expand the matrix into the desired parity-check matrix using identity matrices and cyclic shifts of the identity matrices. Simulation results in the AWGN channel verify that the BER of the proposed code is close to those of MacKay's random codes and Tanner's QC codes, and the good BER performance of the proposed code is maintained at high code rates.
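The final expansion step described above — replacing each base-matrix entry with an identity block or a cyclic shift of it — can be sketched as follows. The base matrix and shift values here are arbitrary illustrations, not the shifting functions proposed in the paper (which are chosen specifically to avoid 4- and 6-cycles):

```python
import numpy as np

def circulant(shift, size):
    """Cyclic shift of the identity matrix: row i has its 1 in column (i+shift) mod size."""
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def expand(base, shifts, size):
    """Expand a small base matrix into a quasi-cyclic parity-check matrix:
    each 1 in the base becomes a shifted identity block, each 0 a zero block."""
    blocks = [[circulant(shifts[r][c], size) if base[r][c]
               else np.zeros((size, size), dtype=int)
               for c in range(len(base[0]))] for r in range(len(base))]
    return np.block(blocks)

base = [[1, 1, 1],   # toy 2x3 base matrix
        [1, 1, 0]]
shifts = [[0, 1, 2],  # illustrative shift values
          [0, 2, 0]]
H = expand(base, shifts, size=5)
assert H.shape == (10, 15)
assert all(H[:5].sum(axis=1) == 3)  # row weight follows the base matrix
```

In a girth-conditioned design, the `shifts` table would be searched or constructed so that no short cycles appear in the Tanner graph of `H`.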
Geigle, Bryce A.
2014-01-01
The aim of this thesis is to investigate and present the status of student synthesis with color coded formula writing for grade level six through twelve, and to make recommendations for educators to teach writing structure through a color coded formula system in order to increase classroom engagement and lower students' affect. The thesis first…
An approach to verification and validation of MHD codes for fusion applications
Energy Technology Data Exchange (ETDEWEB)
Smolentsev, S., E-mail: sergey@fusion.ucla.edu [University of California, Los Angeles (United States); Badia, S. [Centre Internacional de Mètodes Numèrics en Enginyeria, Barcelona (Spain); Universitat Politècnica de Catalunya – Barcelona Tech (Spain); Bhattacharyay, R. [Institute for Plasma Research, Gandhinagar, Gujarat (India); Bühler, L. [Karlsruhe Institute of Technology (Germany); Chen, L. [University of Chinese Academy of Sciences, Beijing (China); Huang, Q. [Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui (China); Jin, H.-G. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Krasnov, D. [Technische Universität Ilmenau (Germany); Lee, D.-W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Mas de les Valls, E. [Centre Internacional de Mètodes Numèrics en Enginyeria, Barcelona (Spain); Universitat Politècnica de Catalunya – Barcelona Tech (Spain); Mistrangelo, C. [Karlsruhe Institute of Technology (Germany); Munipalli, R. [HyPerComp, Westlake Village (United States); Ni, M.-J. [University of Chinese Academy of Sciences, Beijing (China); Pashkevich, D. [St. Petersburg State Polytechnical University (Russian Federation); Patel, A. [Universitat Politècnica de Catalunya – Barcelona Tech (Spain); Pulugundla, G. [University of California, Los Angeles (United States); Satyamurthy, P. [Bhabha Atomic Research Center (India); Snegirev, A. [St. Petersburg State Polytechnical University (Russian Federation); Sviridov, V. [Moscow Power Engineering Institute (Russian Federation); Swain, P. [Bhabha Atomic Research Center (India); and others
2015-11-15
Highlights: • Review of status of MHD codes for fusion applications. • Selection of five benchmark problems. • Guidance for verification and validation of MHD codes for fusion applications. - Abstract: We propose a new activity on verification and validation (V&V) of MHD codes presently employed by the fusion community as a predictive capability tool for liquid metal cooling applications, such as liquid metal blankets. The important steps in the development of MHD codes starting from the 1970s are outlined first and then basic MHD codes, which are currently in use by designers of liquid breeder blankets, are reviewed. A benchmark database of five problems has been proposed to cover a wide range of MHD flows from laminar fully developed to turbulent flows, which are of interest for fusion applications: (A) 2D fully developed laminar steady MHD flow, (B) 3D laminar, steady developing MHD flow in a non-uniform magnetic field, (C) quasi-two-dimensional MHD turbulent flow, (D) 3D turbulent MHD flow, and (E) MHD flow with heat transfer (buoyant convection). Finally, we introduce important details of the proposed activities, such as basic V&V rules and schedule. The main goal of the present paper is to help in establishing an efficient V&V framework and to initiate benchmarking among interested parties. The comparison results computed by the codes against analytical solutions and trusted experimental and numerical data as well as code-to-code comparisons will be presented and analyzed in companion paper/papers.
International Nuclear Information System (INIS)
Palmiotti, G.; Salvatores, M.; Aliberti, G.
2007-01-01
The validation of advanced simulation tools will still play a very significant role in several areas of reactor system analysis. This is the case in reactor physics and neutronics, where nuclear data uncertainties still play a crucial role for many core and fuel cycle parameters. The present paper gives a summary of validation motivations, objectives and approach. A validation effort is in particular necessary in the frame of advanced (e.g. Generation-IV or GNEP) reactor and associated fuel cycle assessment and design. Validation of simulation codes is complementary to the 'verification' process. In fact, 'verification' addresses the question 'are we solving the equations correctly', while validation addresses the question 'are we solving the correct equations with the correct parameters'. Verification implies comparisons with 'reference' equation solutions or with analytical solutions, when they exist. Most of what is called 'numerical validation' falls into this category. Validation strategies differ according to the relative weight of the methods and of the parameters that enter into the simulation tools. Most validation is based on experiments, and the field of neutronics, where a 'robust' physics description model exists that is a function of 'input' parameters not fully known, will be the focus of this paper. In fact, in the case of reactor core, shielding and fuel cycle physics, the model (theory) is well established (the Boltzmann and Bateman equations) and the parameters are the nuclear cross-sections, decay data, etc. Two types of validation approach can be and have been used: (a) mock-up experiments ('global' validation): these need a very close experimental simulation of a reference configuration, and bias factors cannot be extrapolated beyond that configuration; (b) use of 'clean', 'representative' integral experiments (the 'bias factor and adjustment' method): this allows bias factors and uncertainties to be defined and can be used for a wide range of applications. It
Directory of Open Access Journals (Sweden)
Fabio Burderi
2007-05-01
Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, makes it possible to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, makes it possible to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
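For the classical notion of unique decipherability that the coding-partition concept generalizes, the standard decision procedure is the Sardinas–Patterson test. A minimal sketch follows; this illustrates UD itself, not the paper's canonical-partition algorithm:

```python
def dangling(A, B):
    """Suffixes left over when a word of A is a proper prefix of a word of B."""
    return {b[len(a):] for a in A for b in B
            if b.startswith(a) and len(b) > len(a)}

def is_uniquely_decipherable(code):
    """Sardinas-Patterson test: a code is UD iff no dangling suffix
    generated from codeword overlaps is itself a codeword."""
    C = set(code)
    S = dangling(C, C)        # S1: suffixes from codeword-on-codeword overlaps
    seen = set()
    while S:
        if S & C:             # a dangling suffix equals a codeword: ambiguity
            return False
        seen |= S
        S = (dangling(C, S) | dangling(S, C)) - seen
    return True

assert is_uniquely_decipherable({"0", "10", "110"})     # prefix code, hence UD
assert not is_uniquely_decipherable({"0", "01", "10"})  # "010" parses two ways
```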
Energy Technology Data Exchange (ETDEWEB)
Afana, A.; Barrio, G. del
2009-07-01
Delineation of drainage networks is an essential task in hydrological and geomorphologic analysis. Manual channel definition depends on topographic contrast and is highly subjective, leading to important errors at high resolutions. Different automatic methods have proposed the use of a constant threshold of upslope contributing area to define channel initiation; these are currently the most commonly used methods for automatic channel-network extraction from Digital Elevation Models (DEMs). However, these methods fail to detect an appropriate threshold when the basin is made up of heterogeneous sub-zones, as they only work either lumped or locally. In this study, the critical threshold area for channel delineation has been defined through the analysis of dominant geometric and topologic properties of stream-network formation. (Author) 5 refs.
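The constant-threshold channel-initiation criterion discussed above can be illustrated on a toy drainage network: a cell is marked as channel once its upslope contributing area (counted here in cells) reaches the critical threshold. The drainage directions and threshold below are assumptions for illustration, not derived from a real DEM:

```python
def flow_accumulation(downstream):
    """Contributing area per cell, given downstream[i] = the index cell i
    drains to (None at the outlet). Counts include the cell itself."""
    acc = [0] * len(downstream)
    for i in range(len(downstream)):
        j = i
        while j is not None:       # push this cell's unit area downslope
            acc[j] += 1
            j = downstream[j]
    return acc

def channels(downstream, threshold):
    """Cells whose contributing area meets the channel-initiation threshold."""
    acc = flow_accumulation(downstream)
    return [i for i, a in enumerate(acc) if a >= threshold]

# Toy drainage tree: cells 0-3 are hillslope heads converging on cell 4,
# which drains to the outlet, cell 5.
downstream = [4, 4, 4, 4, 5, None]
assert flow_accumulation(downstream) == [1, 1, 1, 1, 5, 6]
assert channels(downstream, threshold=5) == [4, 5]
```

A single basin-wide `threshold`, as the abstract notes, cannot adapt to heterogeneous sub-zones; the paper's contribution is deriving the critical value from the network's geometric and topologic properties instead.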
Rached, Nadhir B.; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul
2016-01-01
The outage capacity (OC) is among the most important performance metrics of communication systems over fading channels. The evaluation of the OC, when equal gain combining (EGC) or maximum ratio combining (MRC) diversity techniques are employed
Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes
Directory of Open Access Journals (Sweden)
Cocco Giuseppe
2010-01-01
Full Text Available Network coding makes it possible to exploit the spatial diversity naturally present in mobile wireless networks and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it to a network context. The method consists of adapting the information rate for each receiving node according to its channel status and independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of a slightly higher complexity at the transmitter. This configuration makes it possible to perform rate adaptation while fully preserving the benefits of channel and network coding. We carry out an information-theoretic analysis of this approach and of that typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.
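The core idea — adapting the information rate for each receiving node to its own channel state, independently of the other nodes — can be sketched with Shannon-capacity rate assignment; the SNR values below are illustrative:

```python
import math

def per_sink_rates(snrs):
    """Adapt the rate to each sink independently, using the Shannon
    capacity of its own channel state (bits/s/Hz)."""
    return [math.log2(1 + s) for s in snrs]

snrs = [1.0, 3.0, 7.0]           # linear per-sink SNRs (toy values)
rates = per_sink_rates(snrs)
assert rates == [1.0, 2.0, 3.0]

# A common broadcast rate is limited by the worst sink: 3 sinks x 1 bit = 3,
# whereas per-sink adaptation delivers 1 + 2 + 3 = 6 in aggregate.
assert sum(rates) > len(snrs) * min(rates)
```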
Restructuring of burnup sensitivity analysis code system by using an object-oriented design approach
International Nuclear Information System (INIS)
Kenji, Yokoyama; Makoto, Ishikawa; Masahiro, Tatsumi; Hideaki, Hyoudou
2005-01-01
A new burnup sensitivity analysis code system was developed with the help of object-oriented techniques and written in the Python language. It was confirmed that these techniques are powerful in supporting complex numerical calculation procedures such as reactor burnup sensitivity analysis. The new burnup sensitivity analysis code system PSAGEP was restructured from a complicated old code system and reborn as a user-friendly code system which can calculate the sensitivity coefficients of the nuclear characteristics considering multicycle burnup effects, based on the generalized perturbation theory (GPT). A new encapsulation framework for conventional codes written in Fortran was developed. This framework supports restructuring of the software architecture of the old code system by hiding implementation details, and allows users of the new code system to easily calculate the burnup sensitivity coefficients. The framework can be applied to other development projects since it is carefully designed to be independent of PSAGEP. Numerical results for the burnup sensitivity coefficients of a typical fast breeder reactor are given with components based on GPT, and the multicycle burnup effects on the sensitivity coefficients are discussed. (authors)
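An encapsulation framework of the kind described — hiding a legacy Fortran code's input decks and output files behind a Python interface — might look like the following. This is a hypothetical sketch, not PSAGEP's actual framework: the executable name, deck format and output parsing are invented for illustration.

```python
import os
import subprocess
import tempfile

class FortranCodeWrapper:
    """Hypothetical encapsulation of a legacy Fortran executable: callers see
    a Python method, not input decks and output files (detail hiding)."""

    def __init__(self, executable):
        self.executable = executable

    @staticmethod
    def build_deck(params):
        """Render a parameter dictionary as a 'key = value' input deck."""
        return "\n".join(f"{k} = {v}" for k, v in sorted(params.items()))

    def run(self, params):
        """Write the deck, invoke the legacy code, and parse its output."""
        with tempfile.NamedTemporaryFile("w", suffix=".inp", delete=False) as f:
            f.write(self.build_deck(params))
            path = f.name
        try:
            out = subprocess.run([self.executable, path],
                                 capture_output=True, text=True, check=True)
            return self.parse(out.stdout)
        finally:
            os.unlink(path)

    def parse(self, stdout):
        """Extract numeric sensitivity coefficients from text output (stub)."""
        return [float(tok) for tok in stdout.split()
                if tok.replace(".", "", 1).isdigit()]

wrapper = FortranCodeWrapper("psagep")  # hypothetical executable name
assert wrapper.parse("S1 0.12 0.05") == [0.12, 0.05]
```

Because the deck building and parsing live behind one class boundary, the old code's file conventions can change without touching calling scripts, which is the kind of implementation hiding the abstract attributes to the framework.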
International Nuclear Information System (INIS)
Bambara, M.; Bousbia-Salah, A.; D'Auria, F.
2005-01-01
Full text of publication follows: In recent years there has been great interest in coupling 3D neutronics and thermal-hydraulic codes. Owing to improved computational technology, 'best estimate' analyses are today a common tool to assess safety features, and they are necessary if asymmetric behaviour in the core region exists, or if strong interactions between the core neutronics and reactor thermal-hydraulics occur. In order to validate the performance of coupled codes, several international programmes were issued. Among these activities, the OECD/NEA BWR Turbine Trip (TT) benchmark was chosen for further sensitivity analyses. It consists of a turbine trip experiment carried out at the Peach Bottom 2 BWR. In this paper, the results of two different coupled code systems are summarized and compared. The BWR TT simulations were carried out by coupling the thermal-hydraulic system code RELAP5/mod 3.2 to the 3D neutron kinetics code PARCS/2.3, and also the system code ATHLET to the neutronics code QUABOX-CUBBOX. An exhaustive overview of the main features is given, and those aspects which need further development and experience are pointed out. (authors)
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
MOLE 2.0: Advanced approach for analysis of biomacromolecular channels
Sehnal D.; Varekova R.S.; Berka K.; Pravda L.; Navratilova V.; Banas P.; Ionescu C.-M.; Otyepka M.; Koca J.
2013-01-01
Background: Channels and pores in biomacromolecules (proteins, nucleic acids and their complexes) play significant biological roles, e.g., in molecular recognition and enzyme substrate specificity. Results: We present an advanced software tool entitled MOLE 2.0, which has been designed to analyze molecular channels and pores. Benchmark tests against other available software tools showed that MOLE 2.0 is by comparison quicker, more robust and more versatile. As a new feature, MOLE 2.0 estimates ...
Vuust, Peter; Witek, Maria A. G.
2014-01-01
Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (“rhythm”) and the brain’s anticipatory structuring of music (“meter”). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms. PMID:25324813
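The core PC computation described above, minimization of precision-weighted prediction error, can be sketched in a few lines (a generic one-unit toy update, not a model from the paper; the precisions and learning rate are arbitrary choices):

```python
def pc_settle(obs, prior_mean, obs_precision, prior_precision, lr=0.05, steps=500):
    """Gradient descent on a one-unit free-energy-style objective:
    F(mu) = obs_precision*(obs - mu)^2/2 + prior_precision*(mu - prior_mean)^2/2.
    The belief mu settles on the precision-weighted compromise between
    the sensory input and the prior expectation."""
    mu = prior_mean
    for _ in range(steps):
        err_obs = obs - mu            # sensory prediction error
        err_prior = mu - prior_mean   # deviation from the prior
        mu += lr * (obs_precision * err_obs - prior_precision * err_prior)
    return mu

# A reliable input (precision 4) pulls the belief most of the way
# from the prior (0.0) toward the observation (1.0).
belief = pc_settle(obs=1.0, prior_mean=0.0, obs_precision=4.0, prior_precision=1.0)
```

The fixed point is the precision-weighted average (4·1 + 1·0)/5 = 0.8, which is the Bayesian posterior mean for Gaussian likelihood and prior.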
Zedini, Emna; Chelli, Ali; Alouini, Mohamed-Slim
2014-01-01
In this paper, we investigate the performance of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) and with code combining (CC) from an information-theoretic perspective over a point-to-point free-space optical (FSO) system. First, we introduce new closed-form expressions for the probability density function, the cumulative distribution function, the moment generating function, and the moments of an FSO link modeled by the Gamma fading channel subject to pointing errors and using an intensity modulation with direct detection technique at the receiver. Based on these formulas, we derive exact results for the average bit error rate and the capacity in terms of Meijer's G functions. Moreover, we present asymptotic expressions by utilizing the Meijer's G function expansion, and we use the moments method for the ergodic capacity approximations. Then, we provide novel analytical expressions for the outage probability, the average number of transmissions, and the average transmission rate for HARQ with IR, assuming a maximum number of rounds for the HARQ protocol. Besides, we offer asymptotic expressions for these results in terms of simple elementary functions. Additionally, we compare the performance of HARQ with IR and HARQ with CC. Our analysis demonstrates that HARQ with IR outperforms HARQ with CC.
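The information-theoretic distinction between the two protocols is that IR accumulates mutual information across rounds while CC accumulates SNR. A Monte Carlo sketch under simple Rayleigh fading (not the Gamma-with-pointing-errors channel analyzed in the paper; the rate, SNR, and round count are arbitrary) reproduces the paper's ordering:

```python
import math
import random

def outage_probs(rate_bps=2.0, snr=1.0, rounds=3, trials=20000, seed=7):
    """Outage after `rounds` HARQ rounds over i.i.d. Rayleigh fading.
    IR decodes if sum_i log2(1+g_i) >= rate; CC if log2(1+sum_i g_i) >= rate."""
    rng = random.Random(seed)
    out_ir = out_cc = 0
    for _ in range(trials):
        gains = [snr * rng.expovariate(1.0) for _ in range(rounds)]
        if sum(math.log2(1.0 + g) for g in gains) < rate_bps:
            out_ir += 1
        if math.log2(1.0 + sum(gains)) < rate_bps:
            out_cc += 1
    return out_ir / trials, out_cc / trials

p_ir, p_cc = outage_probs()
```

Since prod_i(1+g_i) >= 1 + sum_i g_i, the IR accumulated mutual information dominates the CC one in every realization, so p_ir <= p_cc always holds, consistent with the paper's conclusion that HARQ with IR outperforms HARQ with CC.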
Strain-free polished channel-cut crystal monochromators: a new approach and results
Kasman, Elina; Montgomery, Jonathan; Huang, XianRong; Lerch, Jason; Assoufid, Lahsen
2017-08-01
The use of channel-cut crystal monochromators has been traditionally limited to applications that can tolerate the rough surface quality from wet etching without polishing. We have previously presented and discussed the motivation for producing channel-cut crystals with strain-free polished surfaces [1]. We subsequently undertook an effort to design and implement an automated machine for polishing channel-cut crystals. The initial design proved inefficient. Since then, we have conceptualized, designed, and implemented a new version of the channel-cut polishing machine, now called C-CHiRP (Channel-Cut High Resolution Polisher), also known as CCPM V2.0. The new machine design no longer utilizes the Figure-8 motion that mimics manual polishing. Instead, polishing is achieved by a combination of rotary and linear functions of two coordinated motion systems. Here we present the new design of C-CHiRP and its capabilities and features. Multiple channel-cut crystals polished using the C-CHiRP have been deployed at several beamlines of the Advanced Photon Source (APS). We present measurements of surface finish, flatness, and topography obtained at 1-BM of the APS, compared with results typically achieved when polishing flat-surface monochromator crystals using conventional polishing processes. Limitations of the current machine design, capabilities and considerations for strain-free polishing of highly complex crystals are also discussed, together with an outlook for future developments and improvements.
ANNarchy: a code generation approach to neural simulations on parallel hardware
Vitay, Julien; Dinkelbach, Helge Ü.; Hamker, Fred H.
2015-01-01
Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows users to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphics processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions. PMID:26283957
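The rate-coded half of such a simulator boils down to integrating leaky firing-rate ODEs of the form tau·dr/dt = −r + W·x. A minimal Euler-integration sketch (a generic illustration in plain Python, not ANNarchy's generated C++ or its Python API; weights and inputs are invented):

```python
def simulate_rates(W, x, tau=10.0, dt=1.0, steps=300):
    """Euler-integrate tau * dr/dt = -r + W @ x for a feedforward rate-coded layer."""
    n = len(W)
    r = [0.0] * n
    # Constant input current to each unit: drive_i = sum_j W[i][j] * x[j]
    drive = [sum(wij * xj for wij, xj in zip(row, x)) for row in W]
    for _ in range(steps):
        r = [ri + (dt / tau) * (di - ri) for ri, di in zip(r, drive)]
    return r

# Two output units reading a two-dimensional input; the steady state is r = W @ x.
rates = simulate_rates(W=[[1.0, 2.0], [0.5, 0.0]], x=[1.0, 1.0])
```

An equation-oriented simulator turns exactly this kind of description into compiled update loops; here the steady state (3.0, 0.5) is reached after a few membrane time constants.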
THEORETICAL AND PRACTICAL APPROACHES REGARDING THE ADOPTION OF CORPORATE GOVERNANCE CODES
Sorin Nicolae Borlea; Monica-Violeta Achim; Ludovica Breban
2013-01-01
In the European Union, the concept of corporate governance began to emerge more clearly after 1997, when most countries voluntarily adopted corporate governance codes. The impulse for adopting these codes was the financial scandals related to the failure of British companies listed on the stock exchange. Numerous scandals involving big companies such as Enron, WorldCom, Parmalat, Xerox, Merrill Lynch, Andersen and so on led to a lack of investors’ confidence. ...
Working research codes into fluid dynamics education: a science gateway approach
Mason, Lachlan; Hetherington, James; O'Reilly, Martin; Yong, May; Jersakova, Radka; Grieve, Stuart; Perez-Suarez, David; Klapaukh, Roman; Craster, Richard V.; Matar, Omar K.
2017-11-01
Research codes are effective for illustrating complex concepts in educational fluid dynamics courses: compared to textbook examples, an interactive three-dimensional visualisation can bring a problem to life! Various barriers, however, prevent the adoption of research codes in teaching: codes are typically created for highly-specific `once-off' calculations and, as such, have no user interface and a steep learning curve. Moreover, a code may require access to high-performance computing resources that are not readily available in the classroom. This project allows academics to rapidly work research codes into their teaching via a minimalist `science gateway' framework. The gateway is a simple, yet flexible, web interface allowing students to construct and run simulations, as well as view and share their output. Behind the scenes, the common operations of job configuration, submission, monitoring and post-processing are customisable at the level of shell scripting. In this talk, we demonstrate the creation of an example teaching gateway connected to the Code BLUE fluid dynamics software. Student simulations can be run via a third-party cloud computing provider or a local high-performance cluster. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).
Concurrent Transmission Based on Channel Quality in Ad Hoc Networks: A Game Theoretic Approach
Chen, Chen; Gao, Xinbo; Li, Xiaoji; Pei, Qingqi
In this paper, a decentralized concurrent transmission strategy for a shared channel in Ad Hoc networks is proposed based on game theory. Firstly, a static concurrent transmission game is used to determine the candidates for transmission by a channel quality threshold and to maximize the overall throughput with consideration of channel quality variation. To achieve the NES (Nash Equilibrium Solution), the selfish behavior of each node attempting to improve its channel gain unilaterally is evaluated. This game therefore allows each node to decide in a distributed manner whether or not to transmit concurrently with others, depending on the NES. Secondly, as there are always some nodes with lower channel gain than the NES, defined as hunger nodes in this paper, a hunger suppression scheme is proposed that adjusts the price function with interference reservation and forward relay, to fairly give hunger nodes transmission opportunities. Finally, inspired by stock trading, a dynamic concurrent transmission threshold determination scheme is implemented to make the static game practical. Numerical results show that the proposed scheme is feasible to increase concurrent transmission opportunities for active nodes, and at the same time, the number of hunger nodes is greatly reduced with the least increase of threshold by interference reservation. The results also show the good network-goodput performance of the proposed model.
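The static game's logic, each node transmitting only when its achievable rate under the current interference exceeds the price, can be sketched with sequential best-response iteration (a toy instance with invented gains, interference coupling, and price, not the paper's exact utility function):

```python
import math

def best_response_equilibrium(gains, cross=0.5, price=1.0, max_rounds=50):
    """Sequential best-response dynamics for a concurrent-transmission game.
    Node i transmits iff log2(1 + g_i / (1 + interference)) exceeds the price."""
    n = len(gains)
    tx = [0] * n
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            interference = cross * sum(tx[j] for j in range(n) if j != i)
            utility = math.log2(1.0 + gains[i] / (1.0 + interference))
            action = 1 if utility > price else 0
            if action != tx[i]:
                tx[i] = action
                changed = True
        if not changed:      # fixed point = Nash equilibrium of the toy game
            return tx
    return tx

# The middle node's gain stays below the equilibrium threshold,
# making it a "hunger node" in the paper's terminology.
eq = best_response_equilibrium([5.0, 0.5, 3.0])
```

With these numbers the dynamics settle on (transmit, silent, transmit): the low-gain node cannot clear the price under the interference from its neighbors, which is exactly the situation the paper's hunger-suppression scheme is designed to remedy.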
Lee, Jieun; Wipf, Mathias; Mu, Luye; Adams, Chris; Hannant, Jennifer; Reed, Mark A
2017-01-15
We report a method to suppress streaming potential using an Ag-coated microfluidic channel on a p-type silicon nanowire (SiNW) array measured by a multiplexed electrical readout. The metal layer sets a constant electrical potential along the microfluidic channel for a given reference electrode voltage regardless of the flow velocity. Without the Ag layer, the magnitude and sign of the surface potential change on the SiNW depend on the flow velocity, the width of the microfluidic channel and the device's location inside the microfluidic channel with respect to the reference electrode. Noise analysis of the SiNW array with and without the Ag coating in the fluidic channel shows that noise frequency peaks, resulting from the operation of a piezoelectric micropump, are eliminated using the Ag layer with two reference electrodes located at the inlet and outlet. This strategy presents a simple platform to eliminate the streaming potential and can become a powerful tool for nanoscale potentiometric biosensors. Copyright © 2016 Elsevier B.V. All rights reserved.
A Gradually Varied Approach to Model Turbidity Currents in Submarine Channels
Bolla Pittaluga, M.; Frascati, A.; Falivene, O.
2018-01-01
We develop a one-dimensional model to describe the dynamics of turbidity current flowing in submarine channels. We consider the flow as a steady state polydisperse suspension accounting for water detrainment from the clear water-turbid interface, for spatial variations of the channel width and for water and sediment lateral overspill from the channel levees. Moreover, we account for sediment exchange with the bed extending the model to deal with situations where the current meets a nonerodible bed. Results show that when water detrainment is accounted for, the flow thickness becomes approximately constant proceeding downstream. Similarly, in the presence of channel levees, the flow tends to adjust to channel relief through the lateral loss of water and sediment. As more mud is spilled above the levees relative to sand, the flow becomes more sand rich proceeding downstream when lateral overspill is present. Velocity and flow thickness predicted by the model are then validated by showing good agreement with laboratory observations. Finally, the model is applied to the Monterey Canyon bathymetric data matching satisfactorily the December 2002 event field measurements and predicting a runout length consistent with observations.
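One qualitative ingredient of such gradually varied models, the downstream adjustment of the flow toward the channel relief through lateral overspill, can be caricatured with a single relaxation ODE marched by Euler steps (a deliberately toy law with invented coefficients, not the paper's polydisperse continuity/momentum system):

```python
def march_thickness(h0, h_relief, relax_len=500.0, dx=10.0, nx=400):
    """Euler-march dh/dx = -(h - h_relief) / relax_len downstream:
    a flow thicker than the levee relief sheds water and sediment
    until its thickness approaches the relief height."""
    h = h0
    profile = [h]
    for _ in range(nx):
        h += dx * (-(h - h_relief) / relax_len)
        profile.append(h)
    return profile

# A 60 m thick current entering a channel with 20 m levee relief.
profile = march_thickness(h0=60.0, h_relief=20.0)
```

The profile decays monotonically toward the relief height, mimicking the paper's observation that, with overspill and detrainment included, flow thickness becomes approximately constant downstream.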
Directory of Open Access Journals (Sweden)
Cevdet Kızıl
2014-08-01
Full Text Available The aim of this article is to investigate the impact of the new Turkish commercial code and Turkish accounting standards on accounting education. This study uses the survey method for gathering information and conducting the research analysis. For this purpose, questionnaire forms were distributed to university students personally and via the internet. This paper includes significant research questions such as “Are accounting academicians informed and knowledgeable on the new Turkish commercial code and Turkish accounting standards?”, “Do accounting academicians integrate the new Turkish commercial code and Turkish accounting standards into their lectures?”, “How do modern accounting education methodology and technology coincide with the teaching of the new Turkish commercial code and Turkish accounting standards?”, “Do universities offer mandatory and elective courses which cover the new Turkish commercial code and Turkish accounting standards?” and “If such courses are offered, what are their names, percentage in the curriculum and degree of coverage?” The research contributes to the literature in several ways. Firstly, the new Turkish commercial code and Turkish accounting standards are significant current topics for the accounting profession. Furthermore, accounting education provides a basis for implementations in the public and private sectors. Besides, one of the intentions of the new Turkish commercial code and Turkish accounting standards is to foster transparency, which is definitely a critical concept also in terms of mergers, acquisitions and investments. Stakeholders of today’s business world, such as investors, shareholders, entrepreneurs, auditors and the government, are in need of more standardized global accounting principles. Thus, the revision and redesign of accounting education plays an important role. The points emphasized above also clearly demonstrate the necessity and functionality of this research.
International Nuclear Information System (INIS)
Daniska, Vladimir; Rehak, Ivan; Vasko, Marek; Ondra, Frantisek; Bezak, Peter; Pritrsky, Jozef; Zachar, Matej; Necas, Vladimir
2011-01-01
The document 'A Proposed Standardised List of Items for Costing Purposes' was issued in 1999 by the OECD/NEA, IAEA and European Commission (EC) to promote harmonisation in decommissioning costing. It is a systematic list of decommissioning activities classified in chapters 01 to 11 with three numbered levels. Four cost groups are defined at each level. The document constitutes a standardised matrix of decommissioning activities and cost groups with definitions of the content of each item. Knowing what is behind the items makes the comparison of costs for decommissioning projects transparent. Two approaches are identified for use of the standardised cost structure. The first approach converts cost data from existing specific cost structures into the standardised cost structure for the purpose of cost presentation. The second approach uses the standardised cost structure as the basis for the cost calculation structure; the calculated cost data are formatted in the standardised cost format directly, and several additional advantages may be identified in this approach. The paper presents the costing methodology based on the standardised cost structure and lessons learnt from the last ten years of implementing the standardised cost structure as the cost calculation structure in the computer code OMEGA. The code also includes on-line management of decommissioning waste, radioactive decay, evaluation of exposure, and generation and optimisation of the Gantt chart of a decommissioning project, which makes the OMEGA code an effective tool for planning and optimisation of decommissioning processes. (author)
Directory of Open Access Journals (Sweden)
Omid Bavi
2016-02-01
Full Text Available Mechanosensitive (MS) channels are ubiquitous molecular force sensors that respond to a number of different mechanical stimuli including tensile, compressive and shear stress. MS channels are also proposed to be molecular curvature sensors gating in response to bending in their local environment. One of the main mechanisms to functionally study these channels is the patch clamp technique. However, the patch of membrane surveyed using this methodology is far from physiological. Here we use continuum mechanics to probe the question of how curvature at different length scales (global and local), in a standard patch clamp experiment, affects a model MS channel. Firstly, to increase the accuracy of Laplace’s equation for tension estimation in a patch membrane, and to more precisely describe the transient phenomena occurring during patch clamping, we propose a modified Laplace’s equation. Most importantly, we unambiguously show that the global curvature of a patch, which is visible under the microscope during patch clamp experiments, is of negligible energetic consequence for activation of an MS channel in a model membrane. However, the local curvature (RL < 50) and the direction of bending are able to cause considerable changes in the stress distribution through the thickness of the membrane. Not only does local bending, on the order of physiologically relevant curvatures, cause a substantial change in the pressure profile, but it also significantly modifies the stress distribution in response to force application. Understanding these stress variations in regions of high local bending is essential for a complete understanding of the effects of curvature on MS channels.
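For context on the tension estimate the authors set out to refine: in conventional patch-clamp analysis, membrane tension in the patch dome is inferred from the classical Laplace law T = ΔP·R/2. A one-line helper illustrating that baseline relation (the classical law only; the paper's modified equation is not reproduced here, and the example numbers are illustrative):

```python
def laplace_tension(delta_p_pa, radius_m):
    """Classical Laplace law for a spherical membrane cap:
    tension (N/m) = pressure difference (Pa) * radius of curvature (m) / 2."""
    return delta_p_pa * radius_m / 2.0

# e.g. a 5 kPa suction applied to a patch dome with a 2 um radius of curvature
t = laplace_tension(5000.0, 2e-6)
```

This gives 5 mN/m, a value in the mN/m range typically probed in MS-channel patch-clamp experiments.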
Directory of Open Access Journals (Sweden)
Julia Kozlik
2015-05-01
Full Text Available Several emotion theorists suggest that valenced stimuli automatically trigger motivational orientations and thereby facilitate corresponding behavior. Positive stimuli were thought to activate approach motivational circuits which in turn primed approach-related behavioral tendencies whereas negative stimuli were supposed to activate avoidance motivational circuits so that avoidance-related behavioral tendencies were primed (motivational orientation account. However, recent research suggests that typically observed affective stimulus–response compatibility phenomena might be entirely explained in terms of theories accounting for mechanisms of general action control instead of assuming motivational orientations to mediate the effects (evaluative coding account. In what follows, we explore to what extent this notion is applicable. We present literature suggesting that evaluative coding mechanisms indeed influence a wide variety of affective stimulus–response compatibility phenomena. However, the evaluative coding account does not seem to be sufficient to explain affective S–R compatibility effects. Instead, several studies provide clear evidence in favor of the motivational orientation account that seems to operate independently of evaluative coding mechanisms. Implications for theoretical developments and future research designs are discussed.
Binary Large Object-Based Approach for QR Code Detection in Uncontrolled Environments
Directory of Open Access Journals (Sweden)
Omar Lopez-Rincon
2017-01-01
Full Text Available Quick Response (QR) barcode detection in uncontrolled environments is still a challenging task despite many existing applications for finding 2D symbols. The main disadvantage of recent applications for QR code detection is their low performance for rotated and distorted single or multiple symbols in images with variable illumination and the presence of noise. In this paper, a particular solution for QR code detection in uncontrolled environments is presented. The proposal consists of recognizing geometrical features of the QR code using a binary large object- (BLOB-) based algorithm with subsequent iterative filtering of QR symbol position detection patterns that does not require the complex processing and training of classifiers frequently used for these purposes. High precision and speed are achieved by adaptive threshold binarization of integral images. In contrast to well-known scanners, which fail to detect QR codes with medium to strong blurring, significant nonuniform illumination, considerable symbol deformations, and noise, the proposed technique provides a high recognition rate of 80%–100% with a speed compatible with real-time applications. In particular, the speed varies from 200 ms to 800 ms per single or multiple QR codes detected simultaneously in images with resolutions from 640 × 480 to 4080 × 2720, respectively.
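The adaptive-threshold step credited above for the speed can be sketched with a summed-area (integral) image: each pixel is compared against its local window mean, computed in O(1) per pixel, so brightness variations across the image do not break binarization. A generic mean-based local-thresholding sketch (the window size and sensitivity are invented, and this is not the paper's exact pipeline):

```python
def integral_image(img):
    """Summed-area table with a zero border row/column for O(1) window sums."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def adaptive_binarize(img, win=3, t=0.15):
    """Mark a pixel as foreground (dark module) if it is at least t (15%)
    below its local window mean, computed from the integral image."""
    h, w = len(img), len(img[0])
    ii = integral_image(img)
    out = [[0] * w for _ in range(h)]
    r = win // 2
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h - 1, y + r)
            x0, x1 = max(0, x - r), min(w - 1, x + r)
            area = (y1 - y0 + 1) * (x1 - x0 + 1)
            s = ii[y1 + 1][x1 + 1] - ii[y0][x1 + 1] - ii[y1 + 1][x0] + ii[y0][x0]
            out[y][x] = 1 if img[y][x] * area <= s * (1.0 - t) else 0
    return out

# A tiny synthetic image: dark left half, bright right half.
img = [[0, 0, 255, 255] for _ in range(4)]
bw = adaptive_binarize(img)
```

Dark pixels next to the bright region are classified as foreground while bright pixels stay background, regardless of the absolute intensity level, which is what makes the method robust to nonuniform illumination.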
Modeling approach for annular-fuel elements using the ASSERT-PV subchannel code
International Nuclear Information System (INIS)
Dominguez, A.N.; Rao, Y.
2012-01-01
The internally and externally cooled annular fuel (hereafter called annular fuel) is under consideration at Atomic Energy of Canada Limited (AECL) for a new high burn-up fuel bundle design for its current and its Generation IV reactors. An assessment of different options to model a bundle fuelled with annular fuel elements is presented. Two options are discussed: 1) modifying the subchannel code ASSERT-PV to handle multiple types of elements in the same bundle, and 2) coupling ASSERT-PV with an external application. Based on this assessment, the selected option is to couple ASSERT-PV with the thermalhydraulic system code CATHENA. (author)
A Vector AutoRegressive (VAR) Approach to the Credit Channel for ...
African Journals Online (AJOL)
This paper is an attempt to determine the presence and empirical significance of monetary policy and the bank lending view of the credit channel for Mauritius, which is particularly relevant at these times. A vector autoregressive (VAR) model of order three is used to examine the monetary transmission mechanism using ...
Gulothungan, G.; Malathi, R.
2018-04-01
Disturbed sodium (Na+) and calcium (Ca2+) handling is known to be a major predisposing factor for life-threatening cardiac arrhythmias. Cardiac contractility in ventricular tissue is governed by Ca2+ pathways such as voltage-dependent Ca2+ channels, the sodium-calcium exchanger (Na+-Ca2+x), and the sarcoplasmic reticulum (SR) Ca2+ pump and leakage channels. Experimental and clinical possibilities for studying cardiac arrhythmias in human ventricular myocardium are very limited. Therefore, the use of alternative methods such as computer simulations is of great importance. Our aim in this article is to study the impact on action potential (AP) generation and propagation in a single ventricular myocyte and in ventricular tissue under different conditions of Ca2+ channel dysfunction. With enhanced Na+-Ca2+x activity, the single-myocyte AP durations APD90 and APD50 are significantly smaller (266 ms and 235 ms). The Na+-Ca2+x current at depolarization increases by 60% from its normal level, and the repolarization current becomes more negative (nonfailing = -0.28 pA/pF, failing = -0.47 pA/pF). Similarly, the same enhanced Na+-Ca2+x activity in a 10 mm region of a ventricular sheet raises the plateau potential abruptly, which ultimately affects the diastolic repolarization. Compared with a normal 10 mm region of the ventricular sheet, the resting fraction of the sheet is reduced by 10%, and the sheet reaches the resting state very early, by 250 ms. In the hypertrophy condition, the single-myocyte APD90 and APD50 are notably smaller (232 ms and 198 ms), and the sodium-potassium (Na+-K+) pump current is reduced by 75% from its control value (0.13 pA/pF). In the hypertrophy condition, 50% of the ventricular sheet is reduced to the minimum plateau-potential state, which starts the repolarization process very early and reduces the APD. In a single myocyte with failing SR Ca2+ channels, the recovery of the SR Ca2+ concentration is reduced by up to 15% relative to control myocytes. At 290 ms, 70% of the ventricular sheet
A model code on co-determination and CSR : The Netherlands: A bottom-up approach
Lambooy, T.E.
2011-01-01
This article discusses the works council’s role in the determination of a company’s CSR strategy and the implementation thereof throughout the organisation. The association of the works councils of multinational companies with a base in the Netherlands has recently developed a ‘Model Code on
THEORETICAL AND PRACTICAL APPROACHES REGARDING THE ADOPTION OF CORPORATE GOVERNANCE CODES
Directory of Open Access Journals (Sweden)
Sorin Nicolae Borlea
2013-09-01
Full Text Available In the European Union, the concept of corporate governance began to emerge more clearly after 1997, when most countries voluntarily adopted corporate governance codes. The impulse for adopting these codes was the financial scandals related to the failure of British companies listed on the stock exchange. Numerous scandals involving big companies such as Enron, WorldCom, Parmalat, Xerox, Merrill Lynch, Andersen and so on led to a lack of investors’ confidence. These crises, which alarmed governments, supervisory authorities, companies, investors and even the general public because of the fragility of the corporate governance system, highlight the need to rethink its structures. The process of adapting the corporate governance provisions in order to ensure transparency, responsibility and fair treatment of shareholders resulted in the development of the Corporate Governance Principles by the Organization for Economic Cooperation and Development (OECD). To assess these principles, the common elements of the codes were identified, yielding one of the most effective practice models of governance. Once the benefits of corporate governance practices had been understood and assimilated by the developed countries, the developing countries (Romania among them) began to adopt "the best practices" in corporate governance, especially because this need is acutely felt in the changes required by the transition to a market economy. Our article describes the origins of corporate governance and the concept and evolution of corporate governance codes at the international, European and Romanian levels.
Mentor Texts and the Coding of Academic Writing Structures: A Functional Approach
Escobar Alméciga, Wilder Yesid; Evans, Reid
2014-01-01
The purpose of the present pedagogical experience was to address the English language writing needs of university-level students pursuing a degree in bilingual education with an emphasis in the teaching of English. Using mentor texts and coding academic writing structures, an instructional design was developed to directly address the shortcomings…
International Nuclear Information System (INIS)
Gara, P.; Martin, E.
1983-01-01
The CANAL code presented here optimizes a realistic iron-free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts, and move in a restricted transversal area; terminal connectors may be added, and images of the bars in pole pieces may be included.
Arslan, Musa T.; Tofighi, Mohammad; Sevimli, Rasim A.; Çetin, Ahmet E.
2015-05-01
One of the main disadvantages of using commercial broadcasts in a Passive Bistatic Radar (PBR) system is the range resolution. Using multiple broadcast channels to improve the radar performance is offered as a solution to this problem. However, this approach suffers in detection performance due to the side-lobes that the matched filter creates when using multiple channels. In this article, we introduce a deconvolution algorithm to suppress the side-lobes. The two-dimensional matched filter output of a PBR is further analyzed as a deconvolution problem. The deconvolution algorithm is based on making successive projections onto the hyperplanes representing the time delay of a target. The resulting iterative deconvolution algorithm is globally convergent because all constraint sets are closed and convex. Simulation results for an FM-based PBR system are presented.
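Successive projection onto hyperplanes is the classical Kaczmarz/POCS scheme: each linear measurement defines a hyperplane, and cyclically projecting the estimate onto them converges for consistent systems precisely because every hyperplane is a closed convex set. A minimal sketch on a generic small linear system (not the radar's 2-D matched-filter data):

```python
def kaczmarz(A, y, iters=100):
    """Cyclically project the estimate onto each hyperplane {x : a_i . x = y_i}."""
    n = len(A[0])
    x = [0.0] * n
    for k in range(iters):
        a = A[k % len(A)]
        residual = y[k % len(A)] - sum(ai * xi for ai, xi in zip(a, x))
        scale = residual / sum(ai * ai for ai in a)
        x = [xi + scale * ai for xi, ai in zip(x, a)]  # orthogonal projection step
    return x

# Two orthogonal hyperplanes: the projections recover the solution x = (2, 1) exactly.
x = kaczmarz([[1.0, 1.0], [1.0, -1.0]], [3.0, 1.0])
```

Because these two rows happen to be orthogonal, the method lands on the solution after one sweep; for general consistent systems it converges geometrically instead.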
Błażej, Paweł; Wnȩtrzak, Małgorzata; Mackiewicz, Paweł
2016-12-01
One of the theories explaining the present structure of the canonical genetic code assumes that it was optimized to minimize the harmful effects of amino acid replacements resulting from nucleotide substitutions and translational errors. A way to test this concept is to find the optimal code under given criteria and compare it with the canonical genetic code. Unfortunately, the huge number of possible alternatives makes it impossible to find the optimal code using exhaustive methods in a sensible time. Therefore, heuristic methods should be applied to search the space of possible solutions. Evolutionary algorithms (EA) are one such promising approach. This class of methods is founded on both mutation and crossover operators, which are responsible for creating and maintaining the diversity of candidate solutions. These operators possess dissimilar characteristics and consequently play different roles in the process of finding the best solutions under given criteria. Therefore, the effective searching for the potential solutions can be improved by applying both of them, especially when these operators are devised specifically for a given problem. To study this subject, we analyze the effectiveness of algorithms for various combinations of mutation and crossover probabilities under three models of the genetic code assuming different restrictions on its structure. To achieve that, we adapt the position-based crossover operator for the most restricted model and develop a new type of crossover operator for the more general models. The applied fitness function describes the costs of amino acid replacement with regard to their polarity. Our results indicate that the usage of crossover operators can significantly improve the quality of the solutions. Moreover, the simulations with the crossover operator optimize the fitness function in a smaller number of generations than simulations without this operator. The optimal genetic codes without restrictions on their structure
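The interplay studied above, mutation maintaining diversity while crossover recombines good partial solutions, is visible even in a minimal elitist EA on a toy bit-string objective (OneMax is used here as a stand-in fitness; the paper's fitness is an amino-acid polarity replacement cost, and its crossover operators are specialized to genetic-code structure):

```python
import random

def evolve(length=12, pop_size=30, gens=150, p_cross=0.9, p_mut=None, seed=3):
    """Elitist EA with one-point crossover and per-bit mutation on OneMax."""
    rng = random.Random(seed)
    p_mut = p_mut if p_mut is not None else 1.0 / length
    fitness = sum  # OneMax: count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    best_history = []
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        best_history.append(fitness(pop[0]))
        parents = pop[: pop_size // 2]         # truncation selection
        children = [pop[0][:]]                 # elitism: carry the best over unchanged
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            if rng.random() < p_cross:         # one-point crossover
                cut = rng.randrange(1, length)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            # per-bit mutation keeps diversity in the population
            child = [g ^ 1 if rng.random() < p_mut else g for g in child]
            children.append(child)
        pop = children
    return best_history

history = evolve()
```

Because the elite individual is preserved each generation, the best fitness is non-decreasing over the run, which is the property that makes comparing mutation/crossover probability combinations by generations-to-optimum meaningful.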
A Simplified Multipath Component Modeling Approach for High-Speed Train Channel Based on Ray Tracing
Directory of Open Access Journals (Sweden)
Jingya Yang
2017-01-01
Full Text Available High-speed train (HST) communications at millimeter-wave (mmWave) bands have received a lot of attention due to their numerous high-data-rate applications enabling smart rail mobility. Accurate and effective channel models are critical to HST system design, assessment, and optimization. A distinctive feature of the mmWave HST channel is that it is rapidly time-varying. To depict this feature, a geometry-based multipath model is established for the dominant multipath behavior in the delay and Doppler domains. Because mmWave HST channel measurements at high mobility are scarce, the model is developed using a measurement-validated ray tracing (RT) simulator. Unlike conventional models, the temporal evolution of the dominant multipath behavior is characterized by a geometry factor that represents the geometrical relationship of the dominant multipath component (MPC) to the HST environment. During each dominant MPC's lifetime, its geometry factor is fixed. To statistically model the geometry factor and its lifetime, the dominant MPCs are extracted within each local wide-sense stationary (WSS) region and tracked over different WSS regions to identify their “birth” and “death” regions. Then, the complex attenuation of each dominant MPC is jointly modeled through its delay and Doppler shift, both of which are derived from its geometry factor. Finally, the model implementation is verified by comparing RT-simulated and modeled delay and Doppler spreads.
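As a rough illustration of how a fixed geometry factor drives a time-varying Doppler shift, the sketch below computes the instantaneous Doppler of a single fixed scatterer seen from a train moving along a straight track. All numbers (carrier frequency, speed, scatterer position) are hypothetical and are not taken from the paper.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def doppler_along_track(fc_hz, v_mps, scatterer_xy, xs, track_y=0.0):
    """Instantaneous Doppler shift f_d(x) = (v / lambda) * cos(theta(x)) of a
    fixed scatterer, where theta is the angle between the train's velocity
    (along +x) and the line of sight to the scatterer."""
    lam = C / fc_hz
    sx, sy = scatterer_xy
    shifts = []
    for x in xs:
        dx, dy = sx - x, sy - track_y
        shifts.append(v_mps / lam * dx / math.hypot(dx, dy))
    return shifts

# Hypothetical setup: 30 GHz carrier, 360 km/h train, scatterer 20 m off-track.
fd = doppler_along_track(30e9, 100.0, (500.0, 20.0),
                         xs=[i * 10.0 for i in range(101)])
f_max = 100.0 / (C / 30e9)  # maximum possible Doppler, v / lambda
```

The shift is positive while the train approaches the scatterer and flips sign as it passes, which is the kind of deterministic evolution the geometry factor captures within one MPC lifetime.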
A unitary approach to the coupling between the NN and πNN channels
International Nuclear Information System (INIS)
Blankleider, B.
1980-11-01
Some basic properties of the πNN system, in particular its coupling to the NN channel, are investigated. A set of linear integral equations that couple the N-N to the π-d channel and satisfy two- and three-body unitarity is derived. By including the π-N amplitude in the P₁₁ channel and retaining certain disconnected diagrams, it is found that the propagators for the nucleons, and the form factors for the vertices, become dressed without changing the basic structure of the equations. For the numerical solution, relativistic kinematics for the pion and non-relativistic kinematics for the nucleons are used. There is uncertainty about the importance of real pion absorption in π-d elastic scattering. Although the effect of absorption can be very large, its influence is cancelled to a large extent by the further inclusion of P₁₁ rescattering. The inclusion of absorption significantly lowers the dips in the π-d differential cross sections at higher energies. The model is able to reproduce the sole experimental value of the tensor polarization t₂₀ at 180° available so far. Numerical results for the reaction NN→πd are in excellent agreement with the differential cross sections at all but the very highest energies.
Review of solution approach, methods, and recent results of the RELAP5 system code
International Nuclear Information System (INIS)
Trapp, J.A.; Ransom, V.H.
1983-01-01
The present RELAP5 code is based on a semi-implicit numerical scheme for the hydrodynamic model. The basic guidelines employed in the development of the semi-implicit numerical scheme are discussed and the numerical features of the scheme are illustrated by analysis for a simple, but analogous, single-equation model. The basic numerical scheme is recorded and results from several simulations are presented. The experimental results and code simulations are used in a complementary fashion to develop insights into nuclear-plant response that would not be obtained if either tool were used alone. Further analysis using the simple single-equation model is carried out to yield insights that are presently being used to implement a more-implicit multi-step scheme in the experimental version of RELAP5. The multi-step implicit scheme is also described
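The single-equation analysis mentioned in the abstract can be illustrated, loosely, by a von Neumann stability comparison of explicit and implicit first-order upwind schemes for linear advection. This is an editor's analogy only, not RELAP5's actual semi-implicit hydrodynamic scheme; the Courant numbers are arbitrary.

```python
import cmath
import math

def g_explicit_upwind(c, theta):
    # Amplification factor of explicit first-order upwind for u_t + a u_x = 0,
    # where c = a*dt/dx is the Courant number.
    return 1 - c * (1 - cmath.exp(-1j * theta))

def g_implicit_upwind(c, theta):
    # Same spatial operator, but evaluated at the new time level.
    return 1 / (1 + c * (1 - cmath.exp(-1j * theta)))

def max_gain(g, c, samples=720):
    # Largest |G| over the resolvable Fourier modes.
    return max(abs(g(c, 2 * math.pi * k / samples)) for k in range(samples))

stable_explicit = max_gain(g_explicit_upwind, 0.5)    # CFL satisfied
unstable_explicit = max_gain(g_explicit_upwind, 2.0)  # CFL violated
implicit_large_dt = max_gain(g_implicit_upwind, 2.0)  # stable for any c > 0
```

The explicit scheme is stable only up to a material-Courant limit, while the implicit variant remains stable at large time steps, which is the trade-off that motivates moving from semi-implicit to more-implicit multi-step schemes.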
Mentor Texts and the Coding of Academic Writing Structures: A Functional Approach
Directory of Open Access Journals (Sweden)
Wilder Yesid Escobar Alméciga
2014-10-01
Full Text Available The purpose of the present pedagogical experience was to address the English language writing needs of university-level students pursuing a degree in bilingual education with an emphasis on the teaching of English. Using mentor texts and the coding of academic writing structures, an instructional design was developed to directly address the shortcomings revealed through a triangulated needs analysis. By promoting awareness of international standards of writing and fostering an understanding of the inherent structures of academic texts, a methodology intended to increase academic writing proficiency was explored. The study suggests that mentor texts and the coding of academic writing structures can have a positive impact on the production of students’ academic writing.
Medicine, material science and security: the versatility of the coded-aperture approach.
Munro, P R T; Endrizzi, M; Diemoz, P C; Hagen, C K; Szafraniec, M B; Millard, T P; Zapata, C E; Speller, R D; Olivo, A
2014-03-06
The principal limitation to the widespread deployment of X-ray phase imaging in a variety of applications is probably versatility. A versatile X-ray phase imaging system must be able to work with polychromatic and non-microfocus sources (for example, those currently used in medical and industrial applications), have physical dimensions sufficiently large to accommodate samples of interest, be insensitive to environmental disturbances (such as vibrations and temperature variations), require only simple system set-up and maintenance, and be able to perform quantitative imaging. The coded-aperture technique, based upon the edge illumination principle, satisfies each of these criteria. To date, we have applied the technique to mammography, materials science, small-animal imaging, non-destructive testing and security. In this paper, we outline the theory of coded-aperture phase imaging and show an example of how the technique may be applied to imaging samples with a practically important scale.
Cevdet Kızıl; Ayşe Tansel Çetin; Ahmed Bulunmaz
2014-01-01
The aim of this article is to investigate the impact of the new Turkish commercial code and Turkish accounting standards on accounting education. The study uses the survey method to gather information and conduct the research analysis. For this purpose, questionnaire forms were distributed to university students in person and via the internet. This paper includes significant research questions such as “Are accounting academicians informed and knowledgeable on new Turkish commerc...
Energy Technology Data Exchange (ETDEWEB)
Gomes, Renato G.; Rebello, Wilson F.; Vellozo, Sergio O.; Moreira Junior, Luis, E-mail: renatoguedes@ime.eb.br, E-mail: rebello@ime.eb.br, E-mail: eng.cavaliere@gmail.com, E-mail: vellozo@cbpf.br, E-mail: luisjrmoreira@hotmail.com [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil); Vital, Helio C., E-mail: vital@ctex.eb.br [Centro Tecnologico do Exercito (CTEX), Barra de Guaratiba, RJ (Brazil); Rusin, Tiago, E-mail: tiago.rusin@mma.gov.br [Ministerio do Meio Ambiente (MMA), Brasilia, DF (Brazil); Silva, Ademir X., E-mail: ademir@con.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil)
2013-07-01
In order to evaluate new lines of research on the irradiation of materials outside the research irradiator of the Army Technology Center (CTEx), it is necessary to study the safety parameters and the magnitude of the dose rates from its leakage channels. The objective was to calculate, with the MCNPX code, the dose rates (Gy/min) inside and outside the four leakage channels of the gamma irradiator. The channels were designed to leak radiation onto materials suitably placed in the area outside the irradiator, which is larger than the expected volume of the irradiation chambers (50 liters). This study assesses the magnitude of the dose rates within the channels and calculates the opening angle of the beam outside each channel, in order to analyze its spread and to evaluate safe working conditions for the operators (radiological protection). The computer simulation was performed by distributing virtual ferrous sulfate (Fricke) dosimeters along the longitudinal axes of the vertical (anterior and posterior) and horizontal (top and bottom) leakage channels. The results showed that the beams emerging from each of the channels are collimated, with values of the order of tenths of Gy/min, compared with the maximum dose rate in the irradiator chamber (33 Gy/min). The external beams from the two vertical channels showed a truncated-pyramid distribution, not collimated but scattered, with opening angles of 83° in the longitudinal direction and 88° in the transverse direction. These cases thus allowed the irradiation of materials outside the irradiator to be evaluated in terms of the magnitude of the dose rates and the positioning of the materials, and enable the operators to take the necessary care in mounting radiation shielding, avoiding exposure to ionizing radiation. (author)
Gan, Fuping; Han, Kai; Lan, Funing; Chen, Yuling; Zhang, Wei
2017-01-01
Mengzi lies 20 km south of the outlet of the Nandong subsurface river and has suffered from water shortages in recent years. It is necessary to locate the groundwater resources according to geological characteristics, such as the positions and burial depths of the underground river, to improve the civil and industrial water supply. Because of adverse factors such as topographic relief and bare rocks in karst terrains, geophysical approaches, namely Controlled Source Audio Magnetotellurics and Seismic Refraction Tomography, were used to roughly identify faults and fracture zones through their geophysical signatures of low resistivity and low velocity; the mise-à-la-masse method was then used to judge which faults and fracture zones could be potential channels of the subsurface river. Five anomalies were recognized along the 2.4 km long profile, showing that the northeast river system has several branches. Drilling has confirmed that the first borehole intersected a water-bearing channel, indicated by rock cores of river sand and gravel deposits; the second encountered a water-filled fracture zone with abundant water; and the third exposed a mud-filled fracture zone without sustainable water. The results of this case study show that the combination of Controlled Source Audio Magnetotellurics, Seismic Refraction Tomography and mise-à-la-masse is an effective method to detect water-filled channels or fracture zones in karst terrains.
Andrade, Xavier; Strubbe, David; De Giovannini, Umberto; Larsen, Ask Hjorth; Oliveira, Micael J. T.; Alberdi-Rodriguez, Joseba; Varas, Alejandro; Theophilou, Iris; Helbig, Nicole; Verstraete, Matthieu J.; Stella, Lorenzo; Nogueira, Fernando; Aspuru-Guzik, Alán; Castro, Alberto; Marques, Miguel A. L.; Rubio, Angel
Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties constitute a great advantage at the time of implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.
Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach
Ballal, Tarig
2014-01-01
This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times lower. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that a high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases, while in the high SNR regime it also outperforms the LMMSE estimator. © 2014 IEEE.
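The co-prime condition above has a simple number-theoretic reading that can be checked directly: the P pulses hit all P sampling phases of the desired-rate grid exactly when the interpulse interval D and P share no common factor. The sketch below is an editor's illustration of that condition, not the paper's estimator.

```python
from math import gcd

def covered_phases(P, D):
    """Phases (mod P) of the desired-rate sampling grid hit by P pulses that
    are offset by D desired-rate periods each, with the ADC running P times
    slower than the desired rate."""
    return {(k * D) % P for k in range(P)}

# Full coverage of all P phases occurs exactly when gcd(D, P) == 1, which is
# why the interpulse interval must be co-prime with P:
for P in (4, 8, 9):
    for D in range(1, 2 * P):
        assert (len(covered_phases(P, D)) == P) == (gcd(D, P) == 1)
```

When gcd(D, P) = g > 1, only P/g distinct phases are observed, so parts of the desired-rate impulse response are never sampled and fidelity is lost.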
WYSIWIB: A Declarative Approach to Finding Protocols and Bugs in Linux Code
DEFF Research Database (Denmark)
Lawall, Julia Laetitia; Brunel, Julien Pierre Manuel; Hansen, Rene Rydhof
2008-01-01
the tools to be able to find specific kinds of bugs. In this paper, we propose a declarative approach based on a control-flow based program search engine. Our approach is WYSIWIB (What You See Is Where It Bugs), since the programmer is able to express specifications for protocol and bug finding using...
WYSIWIB: A Declarative Approach to Finding API Protocols and Bugs in Linux Code
DEFF Research Database (Denmark)
Lawall, Julia; Palix, Nicolas
2009-01-01
the tools to be able to find specific kinds of bugs. In this paper, we propose a declarative approach based on a control-flow based program search engine. Our approach is WYSIWIB (What You See Is Where It Bugs), since the programmer is able to express specifications for protocol and bug finding using...
Oladinrin, Olugbenga Timo; Ho, Christabel Man-Fong
2016-08-01
Several researchers have identified codes of ethics (CoEs) as tools that stimulate positive ethical behavior by shaping the organisational decision-making process, but few have considered the information needed for code implementation. Beyond being a legal and moral responsibility, ethical behavior needs to become an organisational priority, which requires an alignment process that integrates employee behavior with the organisation's ethical standards. This paper discusses processes for the responsible implementation of CoEs based on an extensive review of the literature. The internationally recognized European Foundation for Quality Management Excellence Model (EFQM model) is proposed as a suitable framework for assessing an organisation's ethical performance, including CoE embeddedness. The findings presented herein have both practical and research implications. They will encourage construction practitioners to shift their attention from ethical policies to possible enablers of CoE implementation and serve as a foundation for further research on ethical performance evaluation using the EFQM model. This is the first paper to discuss the model's use in the context of ethics in construction practice.
Quantum internet using code division multiple access
Zhang, Jing; Liu, Yu-xi; Özdemir, Şahin Kaya; Wu, Re-Bing; Gao, Feifei; Wang, Xiang-Bin; Yang, Lan; Nori, Franco
2013-01-01
A crucial open problem in large-scale quantum networks is how to efficiently transmit quantum data among many pairs of users via a common data-transmission medium. We propose a solution by developing a quantum code division multiple access (q-CDMA) approach in which quantum information is chaotically encoded to spread its spectral content, and then decoded via chaos synchronization to separate different sender-receiver pairs. In comparison to other existing approaches, such as frequency division multiple access (FDMA), the proposed q-CDMA can greatly increase the information rates per channel used, especially for very noisy quantum channels. PMID:23860488
Energy Technology Data Exchange (ETDEWEB)
Batet, L., E-mail: lluis.batet@upc.edu [Technical University of Catalonia (UPC), Energy and Radiation Studies Research Group (GREENER), Technology for Fusion T4F, Barcelona (Spain); UPC, Department of Physics and Nuclear Engineering (DFEN), ETSEIB, Av. Diagonal 647, 08028 Barcelona (Spain); Fradera, J. [Technical University of Catalonia (UPC), Energy and Radiation Studies Research Group (GREENER), Technology for Fusion T4F, Barcelona (Spain); UPC, Department of Physics and Nuclear Engineering (DFEN), ETSEIB, Av. Diagonal 647, 08028 Barcelona (Spain); Valls, E. Mas de les [Technical University of Catalonia (UPC), Energy and Radiation Studies Research Group (GREENER), Technology for Fusion T4F, Barcelona (Spain); UPC, Department of Heat Engines (DMMT), ETSEIB, Av. Diagonal 647, 08028 Barcelona (Spain); Sedano, L.A. [EURATOM-CIEMAT Association, Fusion Technology Division, Av. Complutense 22, 28040 Madrid (Spain)
2011-06-15
Large helium (He) production rates in liquid metal breeding blankets of a DT fusion reactor might have a significant influence on the system design. Low He solubility together with high local concentrations may create the conditions for He cavitation, which would have an impact on the components' performance. The paper argues that such a possibility is not remote in a helium cooled lithium-lead (HCLL) breeding blanket design. A model based on Classical Nucleation Theory (CNT) has been developed and implemented in order to have a specific tool able to simulate HCLL systems and identify the key parameters and sensitivities. The nucleation and growth model has been implemented in the open source CFD code OpenFOAM so that the transport of dissolved atomic He and nucleated He bubbles can be simulated. At the current level of development it is assumed that the void fraction is small enough not to affect either the hydrodynamics or the properties of the liquid metal; thus, bubbles can be represented by means of a passive scalar. He growth and transport have been implemented using the mean-radius approach in order to save computational time. Limitations and capabilities of the model are shown by means of zero-dimensional simulations and sensitivity analyses under HCLL breeding unit conditions.
Polynomial theory of error correcting codes
Cancellieri, Giovanni
2015-01-01
The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.
Quantum communication under channel uncertainty
International Nuclear Information System (INIS)
Noetzel, Janis Christian Gregor
2012-01-01
This work contains results concerning the transmission of entanglement and subspaces, as well as the generation of entanglement, in the limit of arbitrarily many uses of compound and arbitrarily varying quantum channels (CQC, AVQC). In both cases, the channel is described by a set of memoryless channels. Only forward communication between one sender and one receiver is allowed. A code is said to be "good" only if it is "good" for every channel in the set. Both settings describe a scenario in which sender and receiver have only limited channel knowledge. For different amounts of information about the channel available to the sender or receiver, coding theorems are proven for the CQC. For the AVQC, both deterministic and randomised coding schemes are considered. Coding theorems are proven, as well as a quantum analogue of the Ahlswede dichotomy. The connection to zero-error capacities of stationary memoryless quantum channels is investigated. The notion of symmetrisability is defined and used for both classes of channels.
Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N
2015-12-11
Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often not widely interoperable, or have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely available Current Procedural Terminology (CPT) procedure codes with ICD-9. Unfortunately, CPT changes drastically year-to-year: codes are retired and replaced. Longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places a missing code in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the code correctly with 97% precision when considering only miscategorizations ("correctness precision") and 52% precision using a gold standard of optimal placement ("optimality precision"). High correctness precision meant that codes were placed in a reasonable hierarchical position that a reviewer
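The idea of placing a retired code in the most specific grouper by numerical similarity can be sketched as a narrowest-containing-range lookup. The ranges and labels below are purely illustrative, not the actual BioPortal CPT hierarchy, and the placement rule is a simplification of the paper's method.

```python
def most_specific_grouper(code, groupers):
    """Return the narrowest (low, high, label) range containing the numeric
    code, or None if no grouper contains it. Ranges are illustrative only."""
    hits = [g for g in groupers if g[0] <= code <= g[1]]
    return min(hits, key=lambda g: g[1] - g[0]) if hits else None

# Hypothetical nested grouper ranges:
groupers = [
    (10000, 69999, "Surgery (illustrative)"),
    (20000, 29999, "Musculoskeletal system (illustrative)"),
    (27000, 27299, "Pelvis and hip joint (illustrative)"),
]

# A retired code that falls numerically inside the deepest range lands there:
placed = most_specific_grouper(27033, groupers)
```

A code outside every nested sub-range still falls back to the broadest containing grouper, which mirrors the "reasonable hierarchical position" behavior the reviewers assessed.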
Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W
1988-04-22
Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
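The effect of limited time resolution on observed open times can be illustrated with a small simulation. The block below is an editor's sketch under simplifying assumptions (a plain two-state Markov model with exponential dwell times and a hard detection limit ξ), not the random-sum machinery of the paper: closed sojourns shorter than ξ are missed, so flanking openings merge, and openings shorter than ξ are missed altogether.

```python
import random

def simulate_sojourns(n, mean_open, mean_closed, seed=0):
    """Alternating open/closed sojourn times (exponential dwell times) from a
    two-state Markov channel model, starting in the open state."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / (mean_open if i % 2 == 0 else mean_closed))
            for i in range(n)]

def observed_open_times(sojourns, xi):
    """Apply a discrete detection limit xi: a closed sojourn shorter than xi
    is missed, so the flanking open sojourns (and the gap) merge into one
    observed opening; openings shorter than xi are missed altogether."""
    opens = []
    i = 0
    while i < len(sojourns):
        t = sojourns[i]                          # current open sojourn
        i += 1
        while i + 1 < len(sojourns) and sojourns[i] < xi:
            t += sojourns[i] + sojourns[i + 1]   # missed gap + next opening
            i += 2
        i += 1                                   # skip the detected closed sojourn
        opens.append(t)
    return [t for t in opens if t >= xi]

s = simulate_sojourns(20000, mean_open=1.0, mean_closed=0.3, seed=1)
raw_opens = s[0::2]
obs_opens = observed_open_times(s, xi=0.1)
```

Merging and censoring both lengthen the apparent openings, so the ξ-open-time distribution is biased upward relative to the true open-time distribution, which is exactly why inference should account for ξ.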
Directory of Open Access Journals (Sweden)
Philipp Sterzer
2016-10-01
Full Text Available Current theories in the framework of hierarchical predictive coding propose that positive symptoms of schizophrenia, such as delusions and hallucinations, arise from an alteration in Bayesian inference, the term inference referring to a process by which learned predictions are used to infer probable causes of sensory data. However, for one particularly striking and frequent symptom of schizophrenia, thought insertion, no plausible account has been proposed in terms of the predictive-coding framework. Here we propose that thought insertion is due to an altered experience of thoughts as coming from nowhere, as is already indicated by the early 20th-century phenomenological accounts of the early Heidelberg School of psychiatry. These accounts identified thought insertion as one of the self-disturbances (from German: Ichstörungen) of schizophrenia and used mescaline as a model-psychosis in healthy individuals to explore the possible mechanisms. The early Heidelberg School (Gruhle, Mayer-Gross, Beringer) first named and defined the self-disturbances, and proposed that thought insertion involves a disruption of the inner connectedness of thoughts and experiences, and a becoming-sensory of those thoughts experienced as inserted. This account offers a novel way to integrate the phenomenology of thought insertion with the predictive coding framework. We argue that the altered experience of thoughts may be caused by a reduced precision of context-dependent predictions, relative to sensory precision. According to the principles of Bayesian inference, this reduced precision leads to increased prediction-error signals evoked by the neural activity that encodes thoughts. Thus, in analogy with the prediction-error-related aberrant salience of external events that has been proposed previously, internal events such as thoughts (including volitions, emotions and memories) can also be associated with increased prediction-error signaling and are thus imbued with
Directory of Open Access Journals (Sweden)
Azadeh Hashemian
2008-06-01
Full Text Available Enhanced-surface heat exchangers are commonly used worldwide, but because of their complicated geometry, simulating corrugated plate heat exchangers is a time-consuming process. In the present study, we first simulate the heat transfer in a sharp V-shaped corrugation cell with constant-temperature walls; then, we use a locally linear neuro-fuzzy method based on radial basis functions (RBFs) to model the temperature field in the whole channel. The new approach is designed for fast computation and low memory use, so that it can be applied to the largest available data sets. The purpose of the research is to reveal the advantages of the proposed neuro-fuzzy model as a powerful predictive modeling system and to make a fair comparison between it and the best-performing FLUENT simulations.
Convergence of an L2-approach in the coupled-channels optical potential method for e-H scattering
International Nuclear Information System (INIS)
Bray, I.; Konovalov, D.A.; McCarthy, I.E.
1990-08-01
An L² approach to the coupled-channels optical method is studied. The investigation is done for electron-hydrogen elastic scattering at projectile energies of 30, 50, 100 and 200 eV. Weak coupling, a free-particle Green's function and no exchange in Q-space are the approximations used to calculate the polarization potential. This model problem is solved exactly using actual hydrogen discrete and continuum functions. The convergence of an L² approach with the Laguerre basis to the exact result is investigated. It is found that a basis of 10 Laguerre functions is sufficient for convergence to approximately 5% in the polarization potential matrix elements and 2% in the differential cross sections at all but large angles. The convergence is faster at smaller energies. In general, the convergence to the exact result is slow. 12 refs., 2 tabs., 2 figs
Measuring customer service quality in international marketing channels: a multimethod approach
Wetzels, M.G.M.; Ruyter, de J.C.; Lemmink, J.G.A.M.; Koelemeijer, K.
1995-01-01
The measurement of perceived service quality using the SERVQUAL approach has been criticized by a number of authors recently. This criticism concerns the conceptual basis of this methodology as well as its empirical operationalization. Presents a complementary approach to measuring service quality
Enhanced MicroChannel Heat Transfer in Macro-Geometry using Conventional Fabrication Approach
Ooi, KT; Goh, AL
2016-09-01
This paper presents studies on passive, single-phase, enhanced microchannel heat transfer in conventionally sized geometry. The intention is to allow economical, simple and readily available conventional fabrication techniques to be used for fabricating macro-scale heat exchangers with microchannel heat transfer capability. A concentric annular gap between a 20 mm diameter channel and a 19.4 mm diameter insert forms a microchannel in which heat transfer occurs. Results show that a heat transfer coefficient of more than 50 kW/m²·K can be obtained for Re≈4,000 at a hydraulic diameter of 0.6 mm. The pressure drop of the system is kept below 3.3 bar. The present study reconfirms the feasibility of fabricating macro-heat exchangers with microchannel heat transfer capability.
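The 0.6 mm hydraulic diameter quoted above follows directly from the annular-gap geometry; the short check below works it out from the standard definition D_h = 4A/P.

```python
import math

def annulus_hydraulic_diameter(d_outer, d_inner):
    """D_h = 4*A/P for a concentric annulus; algebraically this reduces to
    the gap width d_outer - d_inner."""
    area = math.pi / 4.0 * (d_outer ** 2 - d_inner ** 2)
    wetted_perimeter = math.pi * (d_outer + d_inner)
    return 4.0 * area / wetted_perimeter

# The 20 mm channel with the 19.4 mm insert gives the quoted 0.6 mm:
dh = annulus_hydraulic_diameter(0.020, 0.0194)
```

Because the 4A/P expression cancels to D_o - D_i, a 0.3 mm radial gap in a 20 mm bore behaves hydraulically like a 0.6 mm microchannel, which is the whole point of the conventional-fabrication approach.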
Sum rule approach to the nuclear response in the isovector spin channel
International Nuclear Information System (INIS)
Alberico, W.M.; Ericson, M.; Molinari, A.
1982-01-01
We study the global features of the response of infinite nuclear matter in the spin-isospin channel through the energy-weighted sum rules S₁ and S₋₁. In particular we compare the outcome of the ring approximation with the exact RPA evaluation of the sum rules. We also investigate the influence of the collective character of the response, induced by the particle-hole force, for longitudinal and transverse spin couplings. We show that S₁ is insensitive to the collectivity of the response, as long as the Δ degree of freedom is ignored. The inverse energy-weighted sum rule, on the other hand, which is linked to the paramagnetic susceptibility, always reflects the hardening or softening of the nuclear response due to the repulsive or attractive character of the p-h force. This quantity is well suited to comparison with experiment, which we perform for ¹²C and ⁵⁶Fe. (orig.)
A Joint Approach for Single-Channel Speaker Identification and Speech Separation
DEFF Research Database (Denmark)
Mowlaee, Pejman; Saeidi, Rahim; Christensen, Mads Græsbøll
2012-01-01
In this paper, we present a novel system for joint speaker identification and speech separation. For speaker identification, a single-channel speaker identification algorithm is proposed which provides an estimate of the signal-to-signal ratio (SSR) as a by-product. For speech separation, we propose a sinusoidal model-based algorithm. The speech separation algorithm consists of a double-talk/single-talk detector followed by a minimum mean square error estimator of sinusoidal parameters for finding optimal codevectors from pre-trained speaker codebooks. In evaluating the proposed system, we start from (…) accuracy; here, we report the objective and subjective results as well. The results show that the proposed system performs as well as the best of the state-of-the-art in terms of perceived quality, while its performance in terms of speaker identification and automatic speech recognition results (…)
Approaching application of risk-based inspection to ASME code section XI
International Nuclear Information System (INIS)
Hedden, Owen F.
1995-01-01
This paper will describe current efforts within the ASME Boiler and Pressure Vessel Committee's Subcommittee on Nuclear Inservice Inspection to introduce risk-based technology to optimize inservice inspection of nuclear power plants. The subcommittee is responsible for the content of ASME Boiler and Pressure Vessel Code Section XI, Rules for Inservice Inspection of Nuclear Power Plant Components. The paper will first provide the historical background for the inspection program currently in Section XI. It will then describe the development of new technology through the ASME Center for Research and Technology Development program. Next, the work now going on in two of the groups under the Section XI committee will be described in detail. Each of these two efforts is directed toward the application of new risk-based inspection technology to nuclear piping systems. Finally, the directions of additional research and applications of the technology will be discussed. (author)
A Systematic Approach to Modified BCJR MAP Algorithms for Convolutional Codes
Directory of Open Access Journals (Sweden)
Patenaude François
2006-01-01
Full Text Available Since Berrou, Glavieux and Thitimajshima published their landmark paper in 1993, different modified BCJR MAP algorithms have appeared in the literature. The existence of a relatively large number of similar but different modified BCJR MAP algorithms, derived using the Markov chain properties of convolutional codes, naturally leads to the following questions. What is the relationship among the different modified BCJR MAP algorithms? What are their relative performance, computational complexities, and memory requirements? In this paper, we answer these questions. We derive systematically four major modified BCJR MAP algorithms from the BCJR MAP algorithm using simple mathematical transformations. The connections between the original and the four modified BCJR MAP algorithms are established. A detailed analysis of the different modified BCJR MAP algorithms shows that they have identical computational complexities and memory requirements. Computer simulations demonstrate that the four modified BCJR MAP algorithms all have identical performance to the BCJR MAP algorithm.
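All of the modified algorithms share the same forward-backward skeleton. As a generic sketch (not code from the paper, and without the scaling tricks that distinguish the variants), the alpha/beta recursions of a MAP decoder over an abstract trellis can be written as:

```python
# Generic forward-backward (alpha/beta) recursion on a small trellis.
# In the BCJR MAP algorithm, gamma[t][s][s2] is the branch metric of the
# transition s -> s2 at step t, combining the channel likelihood of the
# received symbol with the a priori probability of the information bit.

def forward_backward(gamma, n_states):
    """Return the a posteriori transition probabilities for each step."""
    T = len(gamma)
    # Forward pass: alpha[t+1][s2] = sum_s alpha[t][s] * gamma[t][s][s2]
    alpha = [[0.0] * n_states for _ in range(T + 1)]
    alpha[0][0] = 1.0  # trellis assumed to start in state 0
    for t in range(T):
        for s in range(n_states):
            for s2 in range(n_states):
                alpha[t + 1][s2] += alpha[t][s] * gamma[t][s][s2]
    # Backward pass: beta[t][s] = sum_s2 gamma[t][s][s2] * beta[t+1][s2]
    beta = [[0.0] * n_states for _ in range(T + 1)]
    beta[T] = [1.0] * n_states  # open (unterminated) trellis
    for t in range(T - 1, -1, -1):
        for s in range(n_states):
            for s2 in range(n_states):
                beta[t][s] += gamma[t][s][s2] * beta[t + 1][s2]
    # Combine: P(s -> s2 at step t | observations) proportional to
    # alpha[t][s] * gamma[t][s][s2] * beta[t+1][s2], normalized per step.
    post = []
    for t in range(T):
        joint = [[alpha[t][s] * gamma[t][s][s2] * beta[t + 1][s2]
                  for s2 in range(n_states)] for s in range(n_states)]
        z = sum(map(sum, joint))
        post.append([[p / z for p in row] for row in joint])
    return post
```

The modified algorithms analyzed in the paper differ in how these recursions are scaled, logged and stored, not in this underlying structure.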
Review of solution approach, methods, and recent results of the TRAC-PF1 system code
International Nuclear Information System (INIS)
Mahaffy, J.H.; Liles, D.R.; Knight, T.D.
1983-01-01
The current version of the Transient Reactor Analysis Code (TRAC-PF1) was created to improve on the capabilities of its predecessor (TRAC-PD2) for analyzing slow reactor transients such as small-break loss-of-coolant accidents. TRAC-PF1 continues to use a semi-implicit finite-difference method for modeling three-dimensional flows in the reactor vessel. However, it contains a new stability-enhancing two-step (SETS) finite-difference technique for one-dimensional flow calculations. This method is not restricted by a material Courant stability condition, allowing much larger time-step sizes during slow transients than would a semi-implicit method. These methods have been successfully applied to the analysis of a variety of experiments and hypothetical plant transients covering a full range of two-phase flow regimes.
Coded moderator approach for fast neutron source detection and localization at standoff
Energy Technology Data Exchange (ETDEWEB)
Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States); Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)
2015-06-01
Considering the need for directional sensing at standoff for some security applications and scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work investigates the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. We utilized MCNPX simulations to demonstrate that, by surrounding the boron straw detectors with an HDPE coded moderator, a source-detector orientation-specific response enables potential 1D source localization in a high-neutron-detection-efficiency design. An initial test algorithm was developed to confirm the viability of this detector system's localization capabilities; it identified a 1 MeV neutron source with a strength equivalent to 8 kg of WGPu at 50 m standoff within ±11°.
Costa, Daniel G; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian
2017-01-05
The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. Actually, visual sensor networks may need to be highly dynamic, reflecting the changing of parameters in smart cities. In this context, characteristics of visual sensors and conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors will operate concerning sensing, coding and transmission patterns, exploiting different types of reference parameters. This innovative approach can be considered as the basis for multi-systems smart city applications based on visual monitoring, potentially bringing significant results for this research field.
Directory of Open Access Journals (Sweden)
Félix Gontier
2017-11-01
Full Text Available The spreading of urban areas and the growth of human population worldwide raise societal and environmental concerns. To better address these concerns, the monitoring of the acoustic environment in urban as well as rural or wilderness areas is an important matter. Building on the recent development of low-cost hardware acoustic sensors, we propose in this paper to consider a sensor grid approach to tackle this issue. In this kind of approach, the crucial question is the nature of the data that are transmitted from the sensors to the processing and archival servers. To this end, we propose an efficient audio coding scheme based on a third-octave band spectral representation that allows: (1) the estimation of standard acoustic indicators; and (2) the recognition of acoustic events at a state-of-the-art performance rate. The former is useful to provide quantitative information about the acoustic environment, while the latter is useful to gather qualitative information and build perceptually motivated indicators using, for example, the emergence of a given sound source. The coding scheme is also demonstrated to transmit spectrally encoded data that, reverted to the time domain using state-of-the-art techniques, are not intelligible, thus protecting the privacy of citizens.
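As an illustration of the kind of representation involved, the following sketch aggregates a one-sided power spectrum into third-octave bands; the band-edge convention (f_c·2^(±1/6)) and the function shape are assumptions for illustration, not the authors' implementation:

```python
import math

def third_octave_levels(power, df, centers):
    """Aggregate a one-sided power spectrum into third-octave bands.

    power   : list of spectral power values; bin k covers frequency k * df
    df      : frequency resolution of the spectrum (Hz)
    centers : third-octave band centre frequencies (Hz)
    Returns band levels in dB (re. unit power), one per centre frequency.
    Band edges are taken at fc * 2**(-1/6) and fc * 2**(1/6).
    """
    levels = []
    for fc in centers:
        lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)
        # Sum the spectral power falling inside this band
        band = sum(p for k, p in enumerate(power) if lo <= k * df < hi)
        levels.append(10 * math.log10(band) if band > 0 else float('-inf'))
    return levels
```

Transmitting only these band levels, rather than the full spectrum or waveform, is what keeps the encoded data unintelligible when inverted back to the time domain.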
A multi-scalar approach for modelling river channel change in the Anthropocene
Downs, Peter; Piégay, Hervé; Piffady, Jeremy; Valette, Laurent; Vaudor, Lise
2017-04-01
Adjustments in river channel morphology during the 'Anthropocene' arise as a cumulative impact from the influence of numerous natural and human stressors operating at multiple spatial and temporal scales. However, the research requirement for data on impacts at multiple scales, and at sufficiently high spatial and temporal resolution to determine reach-level effect, largely prevented such studies until recent improvements in digital technologies and data availability. A meta-analysis of recent cumulative impact studies indicates that the analytical component is still overwhelmingly interpretative, with cause-and-effect reasoning based largely on temporal synchronicity and spatial proximity, whereas our conceptual understanding of adjustment processes is far more nuanced. We propose, instead, that studies of cumulative impact should be underpinned by an analytical model of cause and effect, partly to test and enhance our predictive capabilities and allow scenario setting, but also to learn about the relative sensitivities involved in different parts of the model and thus to prioritize future research endeavours. Our requirements are that the model should be inherently designed to detect reach-level changes over Anthropocene timescales, be capable of integrating co-existing and hierarchical human and natural pressures on fluvial systems, be able to accommodate time-lagged effects and upstream-downstream connectivity, and be based on an explicit conceptual model that can be refined as our process understanding improves. Bayesian Belief Networks (BBNs) offer some potential in this regard and are becoming an increasingly popular option for dealing with complex, multi-scalar relationships in ecology and other environmental sciences. BBNs consist of a conceptual model of nodes and edges (i.e., graph theory) that qualitatively describe the structure of causal relationships between chains of variables, and a quantitative expression of the relative strength of the
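To make the BBN idea concrete, here is a toy network for a hypothetical causal chain (node names and probability tables are invented for illustration, not taken from the study), with the marginal computed by enumeration:

```python
# Toy Bayesian Belief Network for a hypothetical causal chain
#   Urbanisation -> PeakFlow -> Incision
# Node states are booleans; the tables below are illustrative only.

P_urban = 0.3                        # P(Urbanisation)
P_peak = {True: 0.8, False: 0.2}     # P(PeakFlow | Urbanisation)
P_incise = {True: 0.7, False: 0.1}   # P(Incision | PeakFlow)

def p_incision():
    """Marginal P(Incision) by enumerating over the parent chain."""
    total = 0.0
    for urban in (True, False):
        pu = P_urban if urban else 1 - P_urban
        for peak in (True, False):
            pp = P_peak[urban] if peak else 1 - P_peak[urban]
            total += pu * pp * P_incise[peak]
    return total
```

In a real application each conditional table would be elicited from data or expert judgment, and the graph would carry many more nodes spanning the hierarchical human and natural pressures described above.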
Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele
2014-01-01
The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...
Defining ‘sensitive’ health status: a systematic approach using health code terminologies.
Directory of Open Access Journals (Sweden)
Andy Boyd
2017-04-01
We have demonstrated a systematic and partially interoperable approach to defining ‘sensitive’ health information. However, any such exercise is likely to include decisions which will be open to interpretation and open to change over time. As such, the application of this technique should be embedded within an appropriate governance framework which can accommodate misclassification while minimising potential patient harm.
Directory of Open Access Journals (Sweden)
Weimin Ma
2018-06-01
Full Text Available In this paper, we investigate the economic performance and environmental performance of a dual-channel green supply chain (GSC). Given that most relevant literature still focuses on the descriptive aspect of GSC, we adopt a game-theoretic approach rather than a qualitative analysis method to address the following problems: (1) How can the integration of environmental and economic sustainability goals be achieved in a GSC? (2) What is the impact of customer environmental awareness on the green level and profitability of the GSC? (3) How does the market demand change in the presence of the online direct channel in addition to the traditional one? We establish four game models: a decentralized scenario, a centralized scenario, a retailer-led revenue-sharing scenario and a bargaining revenue-sharing scenario. In the decentralized scenario, participants in a GSC make individual decisions based on their specific interests. In the centralized scenario, the GSC is regarded as a whole and the participants make collective decisions to maximize the overall profit of the GSC. In addition, in the two revenue-sharing scenarios, revenue-sharing contracts as the important profit coordination systems are set up and the revenue-sharing ratio is determined either by the retailer or through bargaining. Moreover, the cost of green product research and development, customer environmental awareness and price sensitivity are also taken into account in the four scenarios. By comparing and analyzing the four game models, we recommend the two revenue-sharing scenarios as the optimum choice and improving green awareness as a feasible strategy to achieve the integration of the economic and environmental goals of the GSC. Additionally, we find that online sales have become a major distribution channel of the GSC.
Many channel spectrum unfolding
International Nuclear Information System (INIS)
Najzer, M.; Glumac, B.; Pauko, M.
1980-01-01
The principle of the ITER unfolding code as used for many-channel spectrum unfolding is described. Its unfolding ability is tested on seven typical neutron spectra. The effect of the initial spectrum approximation upon the solution is discussed.
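The abstract does not spell out the ITER update rule; the sketch below uses a generic multiplicative (SAND-II-style) unfolding iteration, shown purely as an illustration of why the solution depends on the initial spectrum approximation:

```python
def unfold(R, M, phi0, iters=50):
    """Iteratively unfold a spectrum from multi-channel measurements.

    R    : response matrix, R[i][j] = response of channel i to energy bin j
    M    : measured channel readings
    phi0 : initial spectrum approximation (the solution depends on it)
    Uses a SAND-II-style multiplicative update, which keeps phi non-negative.
    """
    phi = list(phi0)
    for _ in range(iters):
        # Predicted channel readings for the current spectrum estimate
        pred = [sum(R[i][j] * phi[j] for j in range(len(phi)))
                for i in range(len(M))]
        # Scale each energy bin by the response-weighted measured/predicted ratio
        for j in range(len(phi)):
            num = sum(R[i][j] * M[i] / pred[i]
                      for i in range(len(M)) if pred[i] > 0)
            den = sum(R[i][j] for i in range(len(M)))
            if den > 0:
                phi[j] *= num / den
    return phi
```

Because the problem is underdetermined (fewer channels than energy bins in practice), bins to which no channel responds are never updated, so they retain whatever the initial approximation assigned them.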
A Novel Approach to Achieve the Perfect Security through AVK over Insecure Communication Channel
Banerjee, Subhasish; Dutta, Manash Pratim; Bhunia, Chandan Tilak
2017-04-01
To enhance the security level of a cryptosystem sharing encrypted data over an insecure channel, the automatic variable key (AVK) is an effective mechanism, as has been demonstrated experimentally by many researchers. In AVK, after establishment of the secret key (through some IKE protocols, like IKEv2 or 2PAKA or 3PAKA, etc.), the successive keys are generated so as to vary from session to session by using a time-variant key technique. In this work, it is shown how AVK can provide higher security than a fixed key against known-plaintext attacks (for example, brute-force attacks) and ciphertext-only attacks (for example, frequency analysis) due to the randomness of the keys. In order to improve the level of randomness of the key set, a new method is proposed to generate keys in which randomness is achieved not only through changes in the bit sequence but also through flexibility in the key size. The randomness of the key set is also compared with other related time-variant key mechanisms to demonstrate its superiority.
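A minimal sketch of the classic AVK update (next key = current key XOR previous data), with the caveat that the paper's variable-size key generation rule is more elaborate; the data-stretching detail here is an illustrative assumption:

```python
def next_key(key: bytes, prev_data: bytes) -> bytes:
    """Classic AVK update: next session key = current key XOR previous data.

    prev_data is repeated/truncated to the key length; a variable-size
    variant (as the paper proposes) could also change len(key) per session.
    The stretching rule used here is purely illustrative.
    """
    stretched = (prev_data * (len(key) // len(prev_data) + 1))[:len(key)]
    return bytes(k ^ d for k, d in zip(key, stretched))

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR stream with the session key (encryption == decryption)."""
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))
```

Because each session key depends on the data of the previous session, an attacker who recovers one key gains nothing about the next unless they also hold the intervening plaintext, which is the source of the claimed resistance to known-plaintext and ciphertext-only attacks.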
One-way quantum repeaters with quantum Reed-Solomon codes
Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang
2018-05-01
We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity of the quantum erasure channel of d-level systems for large dimension d. We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.
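For context, the capacity being approached is the known quantum capacity of the d-dimensional erasure channel (a standard result, stated here for completeness rather than taken from the abstract): with erasure probability p,

```latex
Q\left(\mathcal{E}^{(d)}_{p}\right) \;=\; \max\bigl\{\,0,\; (1-2p)\,\log_2 d \,\bigr\}
```

qubits per channel use, which is nonzero only for p < 1/2.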
An XML Approach of Coding a Morphological Database for Arabic Language
Gridach, Mourad; Chenfour, Noureddine
2011-01-01
We present an XML approach for the production of a morphological database for the Arabic language that will be used in morphological analysis for modern standard Arabic (MSA). Optimizing the production, maintenance, and extension of a morphological database is one of the crucial aspects impacting natural language processing (NLP). For the Arabic language, producing a morphological database is not an easy task, because it has particularities such as the phenomenon of agglutination and a...
International Nuclear Information System (INIS)
Giles, G.E.; DeVault, R.M.; Turner, W.D.; Becker, B.R.
1976-05-01
A description is given of the development and verification of a generalized coupled conduction-convection, multichannel heat transfer computer program to analyze specific safety questions involving high temperature gas-cooled reactors (HTGR). The HEXEREI code was designed to provide steady-state and transient heat transfer analysis of the HTGR active core using a basic hexagonal mesh and multichannel coolant flow. In addition, the core auxiliary cooling systems were included in the code to provide more complete analysis of the reactor system during accidents involving reactor trip and cooling down on the auxiliary systems. Included are brief descriptions of the components of the HEXEREI code and sample HEXEREI analyses compared with analytical solutions and other heat transfer codes
Energy Technology Data Exchange (ETDEWEB)
Oezdemir, Erdal; Moon, Kang Hoon; Oh, Seung Jong [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of); Kim, Yongdeog [KHNP-CRI, Daejeon (Korea, Republic of)
2014-10-15
Subchannel analysis plays an important role in evaluating safety-critical parameters like the minimum departure from nucleate boiling ratio (MDNBR), peak clad temperature and fuel centerline temperature. In this study, two different subchannel codes, VIPRE-01 (Versatile Internals and Component Program for Reactors: EPRI) and THALES (Thermal Hydraulic AnaLyzer for Enhanced Simulation of core), are examined. Two transient cases for which the MDNBR result plays an important role are selected for analysis with the THALES and VIPRE-01 subchannel codes. In order to obtain comparable results, the same core geometry, fuel parameters, correlations and models are selected for each code. The MDNBR results from simulations with both codes agree with each other with negligible difference, whereas simulations conducted with the conduction model enabled in VIPRE-01 show a significant difference from the results of THALES.
Naud, Richard; Gerstner, Wulfram
2012-01-01
The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a 'quasi-renewal equation' which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
Nijhof, A.H.J.; Cludts, Stephan; Fisscher, O.A.M.; Laan, Albertus
2003-01-01
More and more organisations formulate a code of conduct in order to stimulate responsible behaviour among their members. Much time and energy is usually spent fixing the content of the code but many organisations get stuck in the challenge of implementing and maintaining the code. The code then
Devakar, M.; Raje, Ankush
2018-05-01
The unsteady flow of two immiscible micropolar and Newtonian fluids through a horizontal channel is considered. In addition to the classical no-slip and hyper-stick conditions at the boundary, it is assumed that the fluid velocities and shear stresses are continuous across the fluid-fluid interface. Three cases for the applied pressure gradient are considered to study the problem: one with constant pressure gradient and the other two cases with time-dependent pressure gradients, viz. periodic and decaying pressure gradient. The Crank-Nicolson approach has been used to obtain numerical solutions for fluid velocity and microrotation for diverse sets of fluid parameters. The nature of fluid velocities and microrotation with various values of pressure gradient, Reynolds number, ratio of viscosities, micropolarity parameter and time is illustrated through graphs. It has been observed that micropolarity parameter and ratio of viscosities reduce the fluid velocities.
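To illustrate the time-stepping scheme, here is a Crank-Nicolson sketch for a single Newtonian layer obeying u_t = ν u_yy with no-slip walls; the two-fluid coupling, microrotation and interface conditions of the actual problem are omitted:

```python
def crank_nicolson_channel(nu, dy, dt, steps, u0):
    """Crank-Nicolson time stepping for u_t = nu * u_yy in a channel.

    u0 holds interior velocities; the walls (no-slip) are held at zero.
    This is a single Newtonian layer only -- the paper couples a micropolar
    and a Newtonian layer through interface conditions, omitted here.
    """
    n = len(u0)
    r = nu * dt / (2 * dy * dy)
    u = list(u0)
    for _ in range(steps):
        # Explicit half: rhs_j = (1-2r) u_j + r (u_{j-1} + u_{j+1})
        rhs = [(1 - 2 * r) * u[j]
               + r * ((u[j - 1] if j > 0 else 0.0)
                      + (u[j + 1] if j < n - 1 else 0.0))
               for j in range(n)]
        # Implicit half: -r u_{j-1} + (1+2r) u_j - r u_{j+1} = rhs_j,
        # solved with the Thomas algorithm (constant tridiagonal matrix).
        a, b, c = -r, 1 + 2 * r, -r
        cp = [0.0] * n
        dp = [0.0] * n
        cp[0] = c / b
        dp[0] = rhs[0] / b
        for j in range(1, n):
            m = b - a * cp[j - 1]
            cp[j] = c / m
            dp[j] = (rhs[j] - a * dp[j - 1]) / m
        u[n - 1] = dp[n - 1]
        for j in range(n - 2, -1, -1):
            u[j] = dp[j] - cp[j] * u[j + 1]
    return u
```

Crank-Nicolson is second-order accurate in time and unconditionally stable for this diffusion problem, which is why it is a natural choice for the unsteady two-fluid computation described above.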
Study of doubly excited states of H⁻ and He in the coupled-channel hyperspherical adiabatic approach
International Nuclear Information System (INIS)
Abrashkevich, A.G.; Abrashkevich, D.G.; Vinitskij, S.I.; Kaschiev, M.S.; Puzynin, I.V.
1989-01-01
Doubly excited states (DES) of H⁻ and He are investigated within the coupled-channel hyperspherical adiabatic (HSA) approach. The influence of the angular and radial electron correlations on the rate of convergence of the potential curves and of the matrix elements of radial coupling is studied numerically. A scheme based on the molecular classification of the HSA basis states is used for the classification of DES. The results of multichannel calculations of ¹Sᵉ and ¹Pᵒ DES of H⁻ and He below the second threshold are presented. The obtained results are compared with other calculations and with experiment. The region of applicability of the adiabatic approximation is discussed. 75 refs.; 10 tabs
An XML Approach of Coding a Morphological Database for Arabic Language
Directory of Open Access Journals (Sweden)
Mourad Gridach
2011-01-01
Full Text Available We present an XML approach for the production of a morphological database for the Arabic language that will be used in morphological analysis for modern standard Arabic (MSA). Optimizing the production, maintenance, and extension of a morphological database is one of the crucial aspects impacting natural language processing (NLP). For the Arabic language, producing a morphological database is not an easy task, because it has particularities such as the phenomenon of agglutination and a great deal of morphological ambiguity. The method presented can be exploited by NLP applications such as syntactic analysis, semantic analysis, information retrieval, and orthographical correction.
International Nuclear Information System (INIS)
G. Palmiotti; M. Salvatores; G. Aliberti
2007-01-01
The validation of advanced simulation tools will still play a very significant role in several areas of reactor system analysis. This is the case of reactor physics and neutronics, where nuclear data uncertainties still play a crucial role for many core and fuel cycle parameters. The present paper gives a summary of validation motivations, objectives and approach. A validation effort is in particular necessary in the frame of advanced (e.g. Generation-IV or GNEP) reactors and associated fuel cycles assessment and design
On the construction of capacity-achieving lattice Gaussian codes
Alghamdi, Wael Mohammed Abdullah
2016-08-15
In this paper, we propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3]. © 2016 IEEE.
On the construction of capacity-achieving lattice Gaussian codes
Alghamdi, Wael; Abediseid, Walid; Alouini, Mohamed-Slim
2016-01-01
In this paper, we propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3]. © 2016 IEEE.
Elsawy, Hesham; Hossain, Ekram; Camorlinga, Sergio
2014-01-01
of the intensity of the admitted networks decreases with the number of channels. By using graph theory, we obtain the minimum required number of channels to accommodate a certain intensity of coexisting networks under a self admission failure probability constraint
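The graph-theoretic idea referenced above can be sketched as coloring a conflict graph of coexisting networks; greedy coloring (shown below) yields an upper bound on the number of channels needed, not the exact minimum the analysis refers to:

```python
def greedy_channel_assignment(conflicts):
    """Assign channels to coexisting networks via greedy graph coloring.

    conflicts[i] is the set of networks that interfere with network i
    (an edge in the conflict graph). Networks are processed in decreasing
    degree order; each takes the lowest channel unused by its neighbours.
    Greedy coloring gives an upper bound on the true minimum.
    """
    order = sorted(conflicts, key=lambda n: -len(conflicts[n]))
    channel = {}
    for node in order:
        used = {channel[nb] for nb in conflicts[node] if nb in channel}
        ch = 0
        while ch in used:
            ch += 1
        channel[node] = ch
    return channel

def channels_needed(channel):
    """Number of distinct channels used by an assignment."""
    return max(channel.values()) + 1 if channel else 0
```

As the intensity of coexisting networks grows, the conflict graph densifies and the number of channels required grows with it, which matches the trade-off the fragment describes.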
Empirical Evaluation of Superposition Coded Multicasting for Scalable Video
Chun Pong Lau
2013-03-01
In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype achieving SCM using a standard 802.16-based testbed for scalable video transmissions. In particular, to implement superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic physical-layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.
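As a toy model of superposition coded modulation, the sketch below overlays a high-power BPSK base layer with a low-power BPSK enhancement layer and recovers both by successive decoding; the power split and the use of BPSK are illustrative assumptions (L-SPC itself is a software emulation of this physical-layer idea):

```python
# Superposition coded modulation sketch: a strong base layer (e.g. the
# scalable video's base quality) is overlaid with a weaker enhancement
# layer. Power split and BPSK layering are illustrative only.

import math

def spc_modulate(base_bit, enh_bit, p_base=0.8):
    """Superpose two BPSK layers with power split p_base : 1 - p_base."""
    b = math.sqrt(p_base) * (1 if base_bit else -1)
    e = math.sqrt(1 - p_base) * (1 if enh_bit else -1)
    return b + e

def spc_demodulate(y, p_base=0.8):
    """Successive decoding: detect the strong base layer, cancel it,
    then detect the enhancement layer from the residual."""
    base_bit = 1 if y >= 0 else 0
    residual = y - math.sqrt(p_base) * (1 if base_bit else -1)
    enh_bit = 1 if residual >= 0 else 0
    return base_bit, enh_bit
```

Receivers with poor channels decode only the robust base layer, while receivers with good channels additionally recover the enhancement layer, which is what gives SCM its service granularity for scalable video.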
DEFF Research Database (Denmark)
Hägglund, Per; Bunkenborg, Jakob; Maeda, Kenji
2008-01-01
Thioredoxin (Trx) is a ubiquitous protein disulfide reductase involved in a wide range of cellular redox processes. A large number of putative target proteins have been identified using proteomics approaches, but insight into target specificity at the molecular level is lacking since the reactivity...... of Trx toward individual disulfides has not been quantified. Here, a novel proteomics procedure is described for quantification of Trx-mediated target disulfide reduction based on thiol-specific differential labeling with the iodoacetamide-based isotope-coded affinity tag (ICAT) reagents. Briefly......, protein extract of embryos from germinated barley seeds was treated +/- Trx, and thiols released from target protein disulfides were irreversibly blocked with iodoacetamide. The remaining cysteine residues in the Trx-treated and the control (-Trx) samples were then chemically reduced and labeled...
A linearization of quantum channels
Crowder, Tanner
2015-06-01
Because the quantum channels form a compact, convex set, we can express any quantum channel as a convex combination of extremal channels. We give a Euclidean representation for the channels whose inverses are also valid channels; these are a subset of the extreme points. They form a compact, connected Lie group, and we calculate its Lie algebra. Lastly, we calculate a maximal torus for the group and provide a constructive approach to decomposing any invertible channel into a product of elementary channels.
Numerical evaluation of the bispectrum in multiple field inflation—the transport approach with code
Energy Technology Data Exchange (ETDEWEB)
Dias, Mafalda; Frazer, Jonathan [Theory Group, Deutsches Elektronen-Synchrotron, DESY, D-22603, Hamburg (Germany); Mulryne, David J. [School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London E1 4NS (United Kingdom); Seery, David, E-mail: mafalda.dias@desy.de, E-mail: jonathan.frazer@desy.de, E-mail: d.mulryne@qmul.ac.uk, E-mail: D.Seery@sussex.ac.uk [Astronomy Centre, University of Sussex, Falmer, Brighton BN1 9QH (United Kingdom)
2016-12-01
We present a complete framework for numerical calculation of the power spectrum and bispectrum in canonical inflation with an arbitrary number of light or heavy fields. Our method includes all relevant effects at tree-level in the loop expansion, including (i) interference between growing and decaying modes near horizon exit; (ii) correlation and coupling between species near horizon exit and on superhorizon scales; (iii) contributions from mass terms; and (iv) all contributions from coupling to gravity. We track the evolution of each correlation function from the vacuum state through horizon exit and the superhorizon regime, with no need to match quantum and classical parts of the calculation; when integrated, our approach corresponds exactly with the tree-level Schwinger or 'in-in' formulation of quantum field theory. In this paper we give the equations necessary to evolve all two- and three-point correlation functions together with suitable initial conditions. The final formalism is suitable to compute the amplitude, shape, and scale dependence of the bispectrum in models with |f_NL| of order unity or less, which are a target for future galaxy surveys such as Euclid, DESI and LSST. As an illustration we apply our framework to a number of examples, obtaining quantitatively accurate predictions for their bispectra for the first time. Two accompanying reports describe publicly-available software packages that implement the method.
Numerical evaluation of the bispectrum in multiple field inflation—the transport approach with code
International Nuclear Information System (INIS)
Dias, Mafalda; Frazer, Jonathan; Mulryne, David J.; Seery, David
2016-01-01
We present a complete framework for numerical calculation of the power spectrum and bispectrum in canonical inflation with an arbitrary number of light or heavy fields. Our method includes all relevant effects at tree-level in the loop expansion, including (i) interference between growing and decaying modes near horizon exit; (ii) correlation and coupling between species near horizon exit and on superhorizon scales; (iii) contributions from mass terms; and (iv) all contributions from coupling to gravity. We track the evolution of each correlation function from the vacuum state through horizon exit and the superhorizon regime, with no need to match quantum and classical parts of the calculation; when integrated, our approach corresponds exactly with the tree-level Schwinger or 'in-in' formulation of quantum field theory. In this paper we give the equations necessary to evolve all two- and three-point correlation functions together with suitable initial conditions. The final formalism is suitable to compute the amplitude, shape, and scale dependence of the bispectrum in models with |f_NL| of order unity or less, which are a target for future galaxy surveys such as Euclid, DESI and LSST. As an illustration we apply our framework to a number of examples, obtaining quantitatively accurate predictions for their bispectra for the first time. Two accompanying reports describe publicly-available software packages that implement the method.
Numerical evaluation of the bispectrum in multiple field inflation. The transport approach with code
International Nuclear Information System (INIS)
Dias, Mafalda; Mulryne, David J.; Seery, David
2016-09-01
We present a complete framework for numerical calculation of the power spectrum and bispectrum in canonical inflation with an arbitrary number of light or heavy fields. Our method includes all relevant effects at tree-level in the loop expansion, including (i) interference between growing and decaying modes near horizon exit; (ii) correlation and coupling between species near horizon exit and on superhorizon scales; (iii) contributions from mass terms; and (iv) all contributions from coupling to gravity. We track the evolution of each correlation function from the vacuum state through horizon exit and the superhorizon regime, with no need to match quantum and classical parts of the calculation; when integrated, our approach corresponds exactly with the tree-level Schwinger or 'in-in' formulation of quantum field theory. In this paper we give the equations necessary to evolve all two- and three-point correlation functions together with suitable initial conditions. The final formalism is suitable to compute the amplitude, shape, and scale dependence of the bispectrum in models with |f_NL| of order unity or less, which are a target for future galaxy surveys such as Euclid, DESI and LSST. As an illustration we apply our framework to a number of examples, obtaining quantitatively accurate predictions for their bispectra for the first time. Two accompanying reports describe publicly-available software packages that implement the method.
Numerical evaluation of the bispectrum in multiple field inflation. The transport approach with code
Energy Technology Data Exchange (ETDEWEB)
Dias, Mafalda [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Sussex Univ., Brighton (United Kingdom). Astronomy Centre; Frazer, Jonathan [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Basque Country Univ., Bilbao (Spain). Dept. of Theoretical Physics; IKERBASQUE, Basque Foundation for Science, Bilbao (Spain); Mulryne, David J. [Queen Mary Univ., London (United Kingdom). School of Physics and Astronomy; Seery, David [Sussex Univ., Brighton (United Kingdom). Astronomy Centre
2016-09-15
We present a complete framework for numerical calculation of the power spectrum and bispectrum in canonical inflation with an arbitrary number of light or heavy fields. Our method includes all relevant effects at tree-level in the loop expansion, including (i) interference between growing and decaying modes near horizon exit; (ii) correlation and coupling between species near horizon exit and on superhorizon scales; (iii) contributions from mass terms; and (iv) all contributions from coupling to gravity. We track the evolution of each correlation function from the vacuum state through horizon exit and the superhorizon regime, with no need to match quantum and classical parts of the calculation; when integrated, our approach corresponds exactly with the tree-level Schwinger or 'in-in' formulation of quantum field theory. In this paper we give the equations necessary to evolve all two- and three-point correlation functions together with suitable initial conditions. The final formalism is suitable to compute the amplitude, shape, and scale dependence of the bispectrum in models with |f_NL| of order unity or less, which are a target for future galaxy surveys such as Euclid, DESI and LSST. As an illustration we apply our framework to a number of examples, obtaining quantitatively accurate predictions for their bispectra for the first time. Two accompanying reports describe publicly-available software packages that implement the method.
A dynamical systems approach to characterizing the contribution of neurogenesis to neural coding
Directory of Open Access Journals (Sweden)
Merav Stern
2014-03-01
index reaches a maximum when young neurons with a hyper-excitability ratio of ~4 comprised ~2% of the population. This agrees with experimental estimates (Cameron and McKay, 2001; Deng et al., 2010; Spalding et al., 2013; Tashiro et al., 2007) without any adjustable parameters in the model. Figure 2. Computational analysis of distributed coding in heterogeneous networks. Networks can be efficiently trained to reproduce target input patterns only in the regime where, prior to training, they exhibit chaotic dynamics (Sussillo and Abbott, 2009). For a model network based on one type of rate neurons (Sompolinsky et al., 1988), the transition to chaotic dynamics occurs when at least some modes in the network respond to perturbation with exponents (eigenvalues) that have real parts > 1 (purple line). Imaginary parts indicate oscillatory dynamics along the respective modes, and are not relevant indicators of chaotic dynamics. Our analytic estimate for the limits of the exponents (blue circle) matches the numerical simulation (small open circles; each circle is a separate mode). In contrast, predictions based on average synaptic weights (red circle) are not accurate. This example network is in the chaotic regime prior to training, because some modes have exponents with real parts > 1. The corresponding neural responses over time from different types of neurons (group 1 and group 2) are shown on the right.
An Optimal Linear Coding for Index Coding Problem
Pezeshkpour, Pouya
2015-01-01
An optimal linear coding solution for the index coding problem is established. Instead of a network coding approach focused on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...
Numerical approach of multi-field two-phase flow models in the OVAP code
International Nuclear Information System (INIS)
Anela Kumbaro
2005-01-01
Full text of publication follows: Significant progress has been made in modeling the complexity of vapor-liquid two-phase flow. Different three-dimensional models exist to simulate the evolution of the parameters which characterize a two-phase flow. These models can be classified into various groups depending on the inter-field coupling, and a hierarchy of increasing physical complexity can be defined. The simplest group corresponds to the homogeneous mixture models, where no interactions are taken into account. Another group is constituted by the two-fluid models, which employ physically important interfacial forces between the two phases. The last group is multi-field modeling, where inter-field couplings can be taken into account at different degrees, such as the MUltiple Size Group modeling [2], the consideration of separate equations for the transport and generation of mass and momentum for each field under the assumption of the same energy for all the fields of the same phase, and a full multi-field two-phase model [1]. The numerical treatment of general three-dimensional two-phase flow is, owing to the complexity of the phenomena, a very challenging task; the ideal numerical method should be at the same time simple, in order to apply to any model from equilibrium to multi-field, and conservative, in order to respect the fundamental physical conservation laws. Approximate Riemann solvers have the good properties of conservation of mass, momentum and energy balance and have been extended successfully to two-fluid models [3]-[5]. However, the up-winding of the flux is based on the eigen-decomposition of the two-phase flow model, and the computation of the eigenstructure of a multi-field model can be a high-cost procedure. Our contribution will present a short review of the above two-phase models, and show numerical results obtained for some of them with an approximate Riemann solver and with lower-complexity alternative numerical methods that do not
Simpson, Timothy J.
Paivio's Dual Coding Theory has received widespread recognition for its connection between visual and aural channels of internal information processing. The use of only two channels, however, cannot satisfactorily explain the effects witnessed every day. This paper presents a study suggesting the presence of a third, kinesthetic channel, currently…
van Tulder, R.; van Wijk, J.; Kolk, A.
2009-01-01
This article examines whether the involvement of stakeholders in the design of corporate codes of conduct leads to a higher implementation likelihood of the code. The empirical focus is on Occupational Safety and Health (OSH). The article compares the inclusion of OSH issues in the codes of conduct
Directory of Open Access Journals (Sweden)
Maria Christina Georgiadou
2014-09-01
Under the label "future-proofing", this paper examines the temporal component of sustainable construction as an unexplored, yet fundamental ingredient in the delivery of low-energy domestic buildings. The overarching aim is to explore the integration of future-proofed design approaches into current mainstream construction practice in the UK, focusing on the example of the Code for Sustainable Homes (CSH) tool. Regulation has been the most significant driver for achieving the 2016 zero-carbon target; however, there is a gap between the appeal for future-proofing and the lack of effective implementation by building professionals. Even though the CSH was introduced as the leading tool to drive the "step-change" required for achieving zero-carbon new homes by 2016, and as the single national standard to encourage energy performance beyond current statutory minima, it lacks assessment criteria that explicitly promote a futures perspective. Based on an established conceptual model of future-proofing, 14 interviews with building practitioners in the UK were conducted to identify the "feasible" and "reasonably feasible" future-proofed design approaches with the potential to enhance the "Energy and CO2 Emissions" category of the CSH. The findings are categorised under three key aspects, namely coverage of sustainability issues, adoption of lifecycle thinking, and accommodation of risks and uncertainties, and seek to inform industry practice and policy-making in relation to building energy performance.
Migliorati, M
2015-01-01
The simulation of beam dynamics in the presence of collective effects requires a strong computational effort to take into account, in a self-consistent way, the wakefield acting on a given charge and produced by all the others. Generally this is done by means of a convolution integral or sum. Moreover, if the electromagnetic fields consist of resonant modes with high quality factors, responsible, for example, for coupled-bunch instabilities, a charge is also affected by itself in previous turns, and a very long wakefield record must be properly taken into account. In this paper we present a new simulation code for the longitudinal beam dynamics in a circular accelerator, which exploits an alternative approach to the currently used convolution sum, reducing the computing time and avoiding the issues related to the length of the wakefield for coupled-bunch instabilities. With this approach it is possible to simulate, without the need of large computing power, the single- and multi-bunch beam dynamics simultaneously...
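The recursive alternative to the convolution sum can be sketched for a single resonant mode: instead of re-summing the wake over the full history, a complex mode amplitude is decayed and rotated between passages and kicked by each charge. This is a generic illustration of the technique, not the paper's actual code; the function name and the factor-of-two self-kick convention are assumptions (loss-factor conventions vary between codes).

```python
import cmath
import math

def track_resonator_voltage(arrival_times, charges, f_res, Q, R_shunt):
    """Beam-induced voltage of one resonant mode, updated recursively
    instead of convolving the full wakefield history at every passage."""
    omega = 2 * math.pi * f_res
    alpha = omega / (2 * Q)          # damping rate of the mode
    k = omega * R_shunt / (2 * Q)    # loss factor per unit charge (V/C)
    v = 0j                           # complex mode amplitude (phasor)
    t_prev = arrival_times[0]
    kicks = []
    for t, q in zip(arrival_times, charges):
        # decay and rotate the phasor from the previous passage to now
        v *= cmath.exp((1j * omega - alpha) * (t - t_prev))
        kicks.append(v.real)         # voltage seen by this charge
        v += 2 * k * q               # kick added by the passing charge
        t_prev = t
    return kicks
```

Because each passage only updates one phasor per mode, the cost per turn is constant, regardless of how long the wake of a high-Q mode persists.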
Treatment of isomers in nucleosynthesis codes
Reifarth, René; Fiebiger, Stefan; Göbel, Kathrin; Heftrich, Tanja; Kausch, Tanja; Köppchen, Christoph; Kurtulgil, Deniz; Langer, Christoph; Thomas, Benedikt; Weigand, Mario
2018-03-01
The decay properties of long-lived excited states (isomers) can have a significant impact on the destruction channels of isotopes under stellar conditions. In sufficiently hot environments, the population of isomers can be altered via thermal excitation or de-excitation. If the corresponding lifetimes are of the same order of magnitude as the typical time scales of the environment, the isomers have to be treated explicitly. We present a general approach to the treatment of isomers in stellar nucleosynthesis codes and discuss a few illustrative examples. The corresponding code is available online at http://exp-astro.de/isomers/.
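The thermal excitation and de-excitation the abstract refers to can be illustrated with the equilibrium Boltzmann ratio between an isomer and the ground state. The sketch below assumes full thermalization (only valid at sufficiently high temperature, which is exactly the regime where explicit isomer treatment matters); the Al-26m numbers in the test (228 keV excitation, spins 0 and 5) are standard values used purely as an example, not taken from this paper.

```python
import math

K_B_KEV_PER_GK = 86.17  # Boltzmann constant in keV per 10^9 K

def thermal_isomer_ratio(e_keV, j_gs, j_iso, T_GK):
    """Equilibrium population ratio n_isomer / n_ground for an isomer at
    excitation energy e_keV, assuming full thermal equilibrium:
    ratio = (2*J_iso + 1)/(2*J_gs + 1) * exp(-E / kT)."""
    kT = K_B_KEV_PER_GK * T_GK
    return (2 * j_iso + 1) / (2 * j_gs + 1) * math.exp(-e_keV / kT)
```

When the environment is too cool or too short-lived for this equilibrium to be reached, the isomer must instead be carried as a separate species in the network, which is the situation the paper addresses.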
Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm
Sandeep Kakde; Atish Khobragade; Shrikant Ambatkar; Pranay Nandanwar
2017-01-01
For the binary field and long code lengths, Low-Density Parity-Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error-correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that BPSK is better than the other modulation techniques in terms of BER. The paper also gives the error performance of an LDPC decoder over the AWGN channel using the Min-Sum algori...
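The min-sum approximation at the heart of such a decoder replaces the exact tanh-rule check-node update with a sign-and-minimum computation over the incoming log-likelihood ratios. A minimal sketch of that update (a textbook illustration, not the paper's layered hardware architecture) is:

```python
def min_sum_check_update(msgs):
    """Min-sum check-node update: the outgoing LLR toward each variable
    node is the product of the signs and the minimum of the magnitudes
    of all the OTHER incoming messages."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        out.append(sign * min(abs(m) for m in others))
    return out
```

A layered schedule applies this update one row-group of the parity-check matrix at a time, letting updated variable-node beliefs propagate within a single iteration and roughly halving the iteration count versus flooding.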
International Nuclear Information System (INIS)
Mohd Faiz Salim; Ridha Roslan; Mohd Rizal Mamat
2013-01-01
Full-text: Deterministic Safety Analysis (DSA) is one of the mandatory requirements conducted for the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements are applied for the design and operation of facilities or activities. Computer codes are normally used to assist in performing all required analysis under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the methodologies proposed is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED) developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a pre-requisite reference document for developing the input nodalization was proposed. This paper describes the application of RDS with the purpose of assessing its effectiveness. Two RDS documents were developed for the Integral Test Facility LOBI-MOD2 and the associated Test A1-83. Data and information from various reports and drawings were referred to in preparing the RDS. The results showed that developing the RDS made it possible to consolidate all relevant information in one single document. This is beneficial as it enables preservation of information, promotes quality assurance, allows traceability, facilitates continuous improvement, promotes the resolution of contradictions and assists in developing the thermal-hydraulic input regardless of the code selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, the language barrier in accessing foreign information and the limitation of resources. Some possible improvements are suggested to overcome these challenges. (author)
International Nuclear Information System (INIS)
Salim, Mohd Faiz; Roslan, Ridha; Ibrahim, Mohd Rizal Mamat
2014-01-01
Deterministic Safety Analysis (DSA) is one of the mandatory requirements conducted for the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements are applied for the design and operation of facilities or activities. Computer codes are normally used to assist in performing all required analysis under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the methodologies proposed is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED) developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a pre-requisite reference document for developing the input nodalization was proposed. This paper describes the application of RDS with the purpose of assessing its effectiveness. Two RDS documents were developed for the Integral Test Facility LOBI-MOD2 and the associated Test A1-83. Data and information from various reports and drawings were referred to in preparing the RDS. The results showed that developing the RDS made it possible to consolidate all relevant information in one single document. This is beneficial as it enables preservation of information, promotes quality assurance, allows traceability, facilitates continuous improvement, promotes the resolution of contradictions and assists in developing the thermal-hydraulic input regardless of the code selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, the language barrier in accessing foreign information and the limitation of resources. Some possible improvements are suggested to overcome these challenges
Energy Technology Data Exchange (ETDEWEB)
Salim, Mohd Faiz, E-mail: mohdfaizs@tnb.com.my [Nuclear Energy Department, Tenaga Nasional Berhad, Level 32, Dua Sentral, 50470 Kuala Lumpur (Malaysia); Roslan, Ridha [Nuclear Installation Division, Atomic Energy Licensing Board, Batu 24, Jalan Dengkil, 43800 Dengkil, Selangor (Malaysia); Ibrahim, Mohd Rizal Mamat [Technical Support Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang, Selangor (Malaysia)
2014-02-12
Deterministic Safety Analysis (DSA) is one of the mandatory requirements conducted for the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements are applied for the design and operation of facilities or activities. Computer codes are normally used to assist in performing all required analysis under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the methodologies proposed is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED) developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a pre-requisite reference document for developing the input nodalization was proposed. This paper describes the application of RDS with the purpose of assessing its effectiveness. Two RDS documents were developed for the Integral Test Facility LOBI-MOD2 and the associated Test A1-83. Data and information from various reports and drawings were referred to in preparing the RDS. The results showed that developing the RDS made it possible to consolidate all relevant information in one single document. This is beneficial as it enables preservation of information, promotes quality assurance, allows traceability, facilitates continuous improvement, promotes the resolution of contradictions and assists in developing the thermal-hydraulic input regardless of the code selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, the language barrier in accessing foreign information and the limitation of resources. Some possible improvements are suggested to overcome these challenges.
Error-correction coding for digital communications
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
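Among the topics the book lists, syndrome decoding is easy to illustrate with the Hamming (7,4) code: choosing the parity-check columns to be the binary representations of positions 1-7 makes the syndrome directly address a single-bit error. This is a standard textbook sketch, not an excerpt from the book itself.

```python
def hamming_syndrome_decode(r):
    """Syndrome decoding of a Hamming (7,4) word r (list of 7 bits).
    Parity-check column j is the binary representation of j (1..7),
    so a nonzero syndrome equals the 1-indexed error position."""
    s = 0
    for bit in range(3):
        parity = 0
        for j in range(1, 8):
            if (j >> bit) & 1:       # position j participates in this check
                parity ^= r[j - 1]
        s |= parity << bit
    if s:                            # nonzero syndrome: flip the indicated bit
        r[s - 1] ^= 1
    return r
```

The same syndrome-lookup idea scales to any single-error-correcting code; only the mapping from syndrome to error pattern changes.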
NORTICA - a new code for cyclotron analysis
International Nuclear Information System (INIS)
Gorelov, D.; Johnson, D.; Marti, F.
2001-01-01
The new package NORTICA (Numerical ORbit Tracking In Cyclotrons with Analysis) of computer codes for beam dynamics simulations is under development at NSCL. The package was started as a replacement for the code MONSTER, developed in the laboratory in the past. The new codes are capable of beam dynamics simulations in both CCF (Coupled Cyclotron Facility) accelerators, the K500 and K1200 superconducting cyclotrons. The general purpose of this package is to assist in setting and tuning the cyclotrons, taking into account the main field and extraction channel imperfections. The computer platform for the package is an Alpha Station with the UNIX operating system and an X-Windows graphical interface. A multiple-programming-language approach was used in order to combine the reliability of the numerical algorithms developed over a long period of time in the laboratory with the friendliness of a modern-style user interface. This paper describes the capabilities and features of the codes in their present state
A Simple Scheme for Belief Propagation Decoding of BCH and RS Codes in Multimedia Transmissions
Directory of Open Access Journals (Sweden)
Marco Baldi
2008-01-01
Classic linear block codes, like Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon (RS) codes, are widely used in multimedia transmissions, but their soft-decision decoding still represents an open issue. Among the several approaches proposed for this purpose, an important role is played by the iterative belief propagation principle, whose application to low-density parity-check (LDPC) codes makes it possible to approach the channel capacity. In this paper, we elaborate a new technique for decoding classic binary and nonbinary codes through the belief propagation algorithm. We focus on the RS codes included in the recent CDMA2000 standard, and compare the proposed technique with the adaptive belief propagation approach, which ensures very good performance but at higher complexity. Moreover, we consider the case of the long BCH codes included in the DVB-S2 standard, for which we show that the usage of "pure" LDPC codes would provide better performance.
Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage
Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo
2005-01-01
Multi-wavelength storage is an approach to increasing memory density, with the problem of crosstalk to be dealt with. We apply Low-Density Parity-Check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage, based on an investigation of LDPC codes in optical data storage. A proper method is applied to reduce the crosstalk, and simulation results show that this operation improves the Bit Error Rate (BER) performance. At the same time, we can conclude that LDPC codes outperform RS codes in the crosstalk channel.
Elsawy, Hesham
2014-07-01
For networks with random topologies (e.g., wireless ad-hoc and sensor networks) and dynamically varying channel gains, choosing the long term operating parameters that optimize the network performance metrics is very challenging. In this paper, we use stochastic geometry analysis to develop a novel framework to design spectrum-efficient multi-channel random wireless networks based on the IEEE 802.15.4 standard. The proposed framework maximizes both spatial and time domain frequency utilization under channel gain uncertainties to minimize the number of frequency channels required to accommodate a certain population of coexisting IEEE 802.15.4 networks. The performance metrics are the outage probability and the self admission failure probability. We relax the single channel assumption that has been used traditionally in the stochastic geometry analysis. We show that the intensity of the admitted networks does not increase linearly with the number of channels and the rate of increase of the intensity of the admitted networks decreases with the number of channels. By using graph theory, we obtain the minimum required number of channels to accommodate a certain intensity of coexisting networks under a self admission failure probability constraint. To this end, we design a superframe structure for the coexisting IEEE 802.15.4 networks and a method for time-domain interference alignment. © 2002-2012 IEEE.
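The outage probability used as a performance metric above can also be estimated by Monte Carlo over realizations of the Poisson field of interferers. The sketch below is a generic illustration of the metric, not the paper's stochastic-geometry derivation: it assumes Rayleigh fading, power-law path loss, an interference-limited (noise-free) regime, and a small clamp on interferer distance to avoid the path-loss singularity at the origin.

```python
import math
import random

def sample_poisson(mean):
    """Knuth's method for a Poisson-distributed count (fine for small means)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def outage_probability(lam, d, alpha, theta, radius, trials=400):
    """Estimate P(SIR < theta) for a link of length d whose receiver sits
    at the centre of a disk of interferers drawn from a Poisson point
    process of intensity lam (per unit area)."""
    outages = 0
    mean_pts = lam * math.pi * radius ** 2
    for _ in range(trials):
        signal = random.expovariate(1.0) * d ** -alpha   # Rayleigh fading
        interference = 0.0
        for _ in range(sample_poisson(mean_pts)):
            r = radius * math.sqrt(random.random())      # uniform point in disk
            interference += random.expovariate(1.0) * max(r, 1e-2) ** -alpha
        if interference > 0 and signal < theta * interference:
            outages += 1
    return outages / trials
```

Sweeping `lam` in such a simulation reproduces the qualitative effect in the abstract: the admissible intensity of coexisting networks saturates rather than growing linearly as more channels dilute the interferer density per channel.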
van den Boer, Yvon; Pieterson, Willem; Arendsen, Rex; van Dijk, Jan
With a growing number of available communication channels and the increasing role of other information sources, organizations are urged to rethink their service strategies. Most theories are limited to a one-dimensional focus on source or channel choice and do not fit into today's networked
International Nuclear Information System (INIS)
Hoeld, Alois
2007-01-01
A complete and detailed description of the theoretical background of a '(1D) thermal-hydraulic drift-flux based mixture-fluid' coolant channel model and its resulting module CCM will be presented. The objective of this module is to simulate, as universally as possible, the steady-state and transient behaviour of the key characteristic parameters of a single- or two-phase fluid flowing within any type of heated or non-heated coolant channel. Since different flow regimes can appear along any channel, such a 'basic' 1D channel (BC) is assumed to be subdivided into a number of corresponding sub-channels (SC-s). Each SC can belong to only one of two types of flow regime: an SC with just a single-phase fluid, containing exclusively either sub-cooled water or superheated steam, or an SC with a two-phase mixture flow. After an appropriate nodalisation of such a BC (and therefore also its SC-s), a 'modified finite volume method' has been applied for the spatial discretisation of the partial differential equations (PDE-s) which represent the basic conservation equations of thermal-hydraulics. Special attention had to be given to the possibility of variable SC entrance or outlet positions (which describe boiling boundaries or mixture levels), and thus to the fact that an SC can even disappear or be created anew. The procedure yields, for each SC type (and thus the entire BC), a set of non-linear ordinary first-order differential equations (ODE-s). To link the resulting mean nodal function values with the nodal boundary function values, both of which are present in the discretised differential equations, a special quadratic polygon approximation procedure (PAX) had to be constructed. Together with the very thoroughly tested packages for drift-flux, heat transfer and single- and two-phase friction factors, this procedure represents the central part of the 'Separate-Region' approach presented here, a theoretical model which provides the basis for the very effective working code package CCM
Energy Technology Data Exchange (ETDEWEB)
Leray, J.L.; Paillet, Ph.; Ferlet-Cavrois, V. [CEA Bruyeres le Chatel DRIF, 91 (France); Tavernier, C.; Belhaddad, K. [ISE Integrated System Engineering AG (Switzerland); Penzin, O. [ISE Integrated System Engineering Inc., San Jose (United States)
1999-07-01
A new 2-D and 3-D self-consistent code has been developed and is applied to understanding the charge trapping in SOI buried oxide causing back-channel MOS leakage in SOI transistors. Clear indications on scaling trends are obtained with respect to supply voltage and oxide thickness. (authors)
International Nuclear Information System (INIS)
Mays, G.T.
1989-04-01
The US Nuclear Regulatory Commission (NRC) has recognized the importance of the collection, assessment, and feedback of operating experience data from commercial nuclear power plants and has centralized these activities in the Office for Analysis and Evaluation of Operational Data (AEOD). Such data is essential for performing safety and reliability analyses, especially analyses of trends and patterns to identify undesirable changes in plant performance at the earliest opportunity, so that corrective measures can be implemented to preclude the occurrence of a more serious event. One of NRC's principal tools for collecting and evaluating operating experience data is the Sequence Coding and Search System (SCSS). The SCSS consists of a methodology for structuring event sequences and the requisite computer system to store and search the data. The source information for SCSS is the Licensee Event Report (LER), which is a legally required document. This paper describes the objectives of SCSS, the information it contains, and the format and approach for constructing SCSS event sequences. Examples are presented demonstrating the use of SCSS to support the analysis of LER data. The SCSS contains over 30,000 LERs describing events from 1980 through the present. Insights gained from working with a complex data system from the initial developmental stage to the point of a mature operating system are highlighted
Energy Technology Data Exchange (ETDEWEB)
Akcay, Cihan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Haut, Terry Scot [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Carlson, Neil N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-21
The EM module of the Truchas code currently lacks the capability to model the Joule (Ohmic) heating of highly conducting materials that are inserted into induction furnaces from time to time to change the heating profile. This effect is difficult to simulate directly because of the requirement to resolve the extremely thin skin depth of good conductors, which is computationally costly. For example, copper has a skin depth, δ ~ 1 mm, for an oscillation frequency of tens of kHz. The industry is interested in determining what fraction of the heating power is lost to the Joule heating of these good conductors inserted inside the furnaces. The approach presented in this document is one of asymptotics where the leading order (unperturbed) solution is taken as that which emerges from solving the EM problem for a perfectly conducting insert. The conductor is treated as a boundary of the domain. The perturbative correction enters as a series expansion in terms of the dimensionless skin depth δ/L, where L is the characteristic size of the EM system. The correction at each order depends on the previous. This means that the leading order correction only depends on the unperturbed solution, in other words, it does not require Truchas to perform an additional EM field solve. Thus, the Joule heating can be captured by a clever leveraging of the existing tools in Truchas with only slight modifications.
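The quoted copper skin depth can be checked directly from the standard formula δ = sqrt(2/(μσω)). The conductivity used below is the usual room-temperature value for copper; this is an independent sanity check of the abstract's "δ ~ 1 mm at tens of kHz", not code from Truchas.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(sigma, f_hz, mu_r=1.0):
    """Classical skin depth: delta = sqrt(2 / (mu * sigma * omega))."""
    omega = 2 * math.pi * f_hz
    return math.sqrt(2.0 / (mu_r * MU0 * sigma * omega))

# Copper (sigma ~ 5.8e7 S/m) at 10 kHz gives delta ~ 0.7 mm,
# consistent with the "~1 mm at tens of kHz" figure in the abstract.
delta_cu = skin_depth(5.8e7, 10e3)
```

Since δ scales as 1/sqrt(f), quadrupling the frequency halves the skin depth, which is why resolving it directly in a mesh becomes costly and motivates the asymptotic boundary-condition approach described above.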
International Nuclear Information System (INIS)
Mays, G.T.
1990-01-01
The U.S. Nuclear Regulatory Commission (NRC) has recognized the importance of the collection, assessment, and feedback of operating experience data from commercial nuclear power plants and has centralized these activities in the Office for Analysis and Evaluation of Operational Data (AEOD). Such data is essential for performing safety and reliability analyses, especially analyses of trends and patterns to identify undesirable changes in plant performance at the earliest opportunity to implement corrective measures to preclude the occurrence of a more serious event. One of NRC's principal tools for collecting and evaluating operating experience data is the Sequence Coding and Search System (SCSS). The SCSS consists of a methodology for structuring event sequences and the requisite computer system to store and search the data. The source information for SCSS is the Licensee Event Report (LER), which is a legally required document. This paper describes the objectives of SCSS, the information it contains, and the format and approach for constructing SCSS event sequences. Examples are presented demonstrating the use of SCSS to support the analysis of LER data. The SCSS contains over 30,000 LERs describing events from 1980 through the present. Insights gained from working with a complex data system from the initial developmental stage to the point of a mature operating system are highlighted. Considerable experience has been gained in the areas of evolving and changing data requirements, staffing requirements, and quality control and quality assurance procedures for addressing consistency, software/hardware considerations for developing and maintaining a complex system, documentation requirements, and end-user needs. Two other approaches for constructing and evaluating event sequences are examined including the Accident Precursor Program (ASP) where sequences having the potential for core damage are identified and analyzed, and the Significant Event Compilation Tree
Iterative optimization of quantum error correcting codes
International Nuclear Information System (INIS)
Reimpell, M.; Werner, R.F.
2005-01-01
We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step
International Nuclear Information System (INIS)
Hodgdon, M.L.; Oona, H.; Martinez, A.R.; Salon, S.; Wendling, P.; Krahenbuhl, L.; Nicolas, A.; Nicolas, L.
1990-01-01
The authors present the results of three electromagnetic field problems for compressed magnetic field generators and their associated power flow channels. The first problem is the computation of the transient magnetic field in a two-dimensional model of a helical generator during loading. The second problem is the three-dimensional eddy current patterns in a section of an armature beneath a bifurcation point of a helical winding. The third problem is the calculation of the three-dimensional electrostatic fields in a region known as the post-hole convolute, in which a rod connects the inner and outer walls of a system of three concentric cylinders through a hole in the middle cylinder. While analytic solutions exist for many electromagnetic field problems in cases of special and ideal geometries, the solution of these and similar problems for the proper analysis and design of compressed magnetic field generators and their related hardware requires computer simulations
Matching Dyadic Distributions to Channels
Böcherer, Georg; Mathar, Rudolf
2010-01-01
Many communication channels with discrete input have non-uniform capacity-achieving probability mass functions (PMFs). By parsing a stream of independent and equiprobable bits according to a full prefix-free code, a modulator can generate dyadic PMFs at the channel input. In this work, we show that for discrete memoryless channels and for memoryless discrete noiseless channels, searching for good dyadic input PMFs is equivalent to minimizing the Kullback-Leibler distance between a dyadic PMF ...
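The parsing idea above can be sketched numerically. As a simple heuristic (not the paper's actual optimization), a binary Huffman code built on a target PMF yields a full prefix-free code whose code lengths imply a dyadic PMF approximating the target; the target values below are invented examples:

```python
import heapq
from math import log2

def huffman_lengths(p):
    """Code lengths of a binary Huffman code for PMF p; 2**-length is dyadic."""
    # heap items: (probability, tiebreak counter, symbol indices in this subtree)
    heap = [(pi, i, [i]) for i, pi in enumerate(p)]
    heapq.heapify(heap)
    lengths = [0] * len(p)
    counter = len(p)
    while len(heap) > 1:
        pa, _, a = heapq.heappop(heap)
        pb, _, b = heapq.heappop(heap)
        for i in a + b:          # merging two subtrees deepens every leaf in them
            lengths[i] += 1
        heapq.heappush(heap, (pa + pb, counter, a + b))
        counter += 1
    return lengths

def kl_divergence(q, p):
    """Kullback-Leibler distance D(q || p) in bits."""
    return sum(qi * log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

target = [0.5, 0.25, 0.15, 0.10]           # example target (e.g. capacity-achieving) PMF
lengths = huffman_lengths(target)
dyadic = [2.0 ** -l for l in lengths]      # dyadic PMF generated by the prefix-free code
print(lengths, dyadic, kl_divergence(dyadic, target))
```

Since a full prefix-free code satisfies Kraft with equality, the dyadic probabilities sum to one; the KL distance then measures how far this parser-generated input PMF is from the target.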
Energy Technology Data Exchange (ETDEWEB)
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. The original analog methods of telephony had the disadvantage of the speech signal being corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of a digital link becomes essentially independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-08-01
Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer decide the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing the best implementation under these conditions. The effect of the number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.
Yadav, Rajeev; Lu, H Peter
2018-03-28
The N-methyl-d-aspartate (NMDA) receptor ion channel is activated by the binding of ligands together with the application of an action potential, and is important for synaptic transmission and memory functions. Despite substantial knowledge of the structure and function, the gating mechanism of the NMDA receptor ion channel for electric on-off signals is still a topic of debate. We investigate the NMDA receptor partition distribution and the associated channel's open-close electric signal trajectories using a combined approach of correlated single-molecule fluorescence photo-bleaching, single-molecule super-resolution imaging, and single-channel electric patch-clamp recording. Identifying the compositions of NMDA receptors and their spatial organization and distributions over live cell membranes, we observe that NMDA receptors are organized inhomogeneously: nearly half of the receptor proteins are individually dispersed, whereas others exist in heterogeneous clusters of around 50 nm in size as well as co-localized within the diffraction-limited imaging area. We demonstrate that inhomogeneous interactions and partitions of the NMDA receptors can be a cause of the heterogeneous gating mechanism of NMDA receptors in living cells. Furthermore, comparing the imaging results with the ion-channel electric current recording, we propose that the clustered NMDA receptors may be responsible for the variation in the current amplitude observed in the on-off two-state ion-channel electric signal trajectories. Our findings shed new light on the fundamental structure-function mechanism of NMDA receptors and present a conceptual advancement of the ion-channel mechanism in living cells.
Information transfer through quantum channels
International Nuclear Information System (INIS)
Kretschmann, D.
2007-01-01
all known coding theorems can be generalized from memoryless channels to forgetful memory channels. We also present examples for non-forgetful channels, and derive generic entropic upper bounds on their capacities for (private) classical and quantum information transfer. Ch. 7 provides a brief introduction to quantum information spectrum methods as a promising approach to coding theorems for completely general quantum sources and channels. We present a data compression theorem for general quantum sources and apply these results to ergodic as well as mixed sources. Finally we investigate the continuity of distillable entanglement - another key notion of the field, which characterizes the optimal asymptotic rate at which maximally entangled states can be generated from many copies of a less entangled state. We derive uniform norm bounds for all states with full support, and we extend some of these results to quantum channel capacities. (orig.)
Information transfer through quantum channels
Energy Technology Data Exchange (ETDEWEB)
Kretschmann, D.
2007-03-12
channel. We then explain how all known coding theorems can be generalized from memoryless channels to forgetful memory channels. We also present examples for non-forgetful channels, and derive generic entropic upper bounds on their capacities for (private) classical and quantum information transfer. Ch. 7 provides a brief introduction to quantum information spectrum methods as a promising approach to coding theorems for completely general quantum sources and channels. We present a data compression theorem for general quantum sources and apply these results to ergodic as well as mixed sources. Finally we investigate the continuity of distillable entanglement - another key notion of the field, which characterizes the optimal asymptotic rate at which maximally entangled states can be generated from many copies of a less entangled state. We derive uniform norm bounds for all states with full support, and we extend some of these results to quantum channel capacities. (orig.)
A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok
2001-01-01
Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from
International Nuclear Information System (INIS)
Mizokami, Shinya; Hotta, Akitoshi; Kudo, Yoshiro; Yonehara, Tadashi; Watada, Masayuki; Sakaba, Hiroshi
2009-01-01
Current licensing practice in Japan consists of using conservative boundary and initial conditions (BIC), assumptions and analytical codes. The safety analyses for licensing purposes are inherently deterministic. Therefore, conservative BIC and assumptions, such as single failure, must be employed for the analyses. However, using conservative analytical codes is not considered essential. The standard committee of the Atomic Energy Society of Japan (AESJ) drew up the standard for using best-estimate codes for safety analyses in 2008 after three years of discussions reflecting recent domestic and international findings. (author)
Visual communication with retinex coding.
Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R
2000-04-10
Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
Visual Communication with Retinex Coding
Huck, Friedrich O.; Fales, Carl L.; Davis, Richard E.; Alter-Gartenberg, Rachel
2000-04-01
Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
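The small-signal model described above, a difference-of-Gaussian bandpass filter, can be illustrated with a one-dimensional toy scene; the kernel widths and scene values below are arbitrary assumptions for illustration, not parameters from the paper:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def dog_filter(signal, sigma_center=1.0, sigma_surround=4.0, radius=15):
    """Difference-of-Gaussian bandpass: narrow 'center' minus wide 'surround'."""
    center = np.convolve(signal, gaussian_kernel(sigma_center, radius), mode="same")
    surround = np.convolve(signal, gaussian_kernel(sigma_surround, radius), mode="same")
    return center - surround

# Toy scene: a reflectance step edge under a smooth irradiance gradient (shading).
x = np.linspace(0.0, 1.0, 512)
reflectance = np.where(x < 0.5, 0.3, 0.8)
irradiance = 0.5 + 0.5 * x
image = reflectance * irradiance

response = dog_filter(image)
# Away from the borders the smooth shading is suppressed (a symmetric kernel
# preserves a linear ramp, so center minus surround is ~0 there), while the
# reflectance edge near index 256 produces a strong bandpass response.
peak = int(np.abs(response[16:-16]).argmax()) + 16
print(peak)  # near 256, the reflectance edge
```

This reproduces, in miniature, the separation the paper describes: the low-frequency irradiance variation is rejected while edge and contrast information survives for later restoration.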
Directory of Open Access Journals (Sweden)
Puett Robin C
2009-10-01
Full Text Available Abstract. Background: There is increasing interest in the study of place effects on health, facilitated in part by geographic information systems. Incomplete or missing address information reduces geocoding success. Several geographic imputation methods have been suggested to overcome this limitation. Accuracy evaluation of these methods can be focused at the level of individuals and at higher group levels (e.g., spatial distribution). Methods: We evaluated the accuracy of eight geo-imputation methods for address allocation from ZIP codes to census tracts at the individual and group level. The spatial apportioning approaches underlying the imputation methods included four fixed (deterministic) and four random (stochastic) allocation methods using land area, total population, population under age 20, and race/ethnicity as weighting factors. Data included more than 2,000 geocoded cases of diabetes mellitus among youth aged 0-19 in four U.S. regions. The imputed distribution of cases across tracts was compared to the true distribution using a chi-squared statistic. Results: At the individual level, population-weighted (total or under age 20) fixed allocation showed the greatest level of accuracy, with correct census tract assignments averaging 30.01% across all regions, followed by the race/ethnicity-weighted random method (23.83%). The true distribution of cases across census tracts was that 58.2% of tracts exhibited no cases, 26.2% had one case, 9.5% had two cases, and less than 3% had three or more. This distribution was best captured by random allocation methods, with no significant differences (p-value > 0.90). However, significant differences in distributions based on fixed allocation methods were found (p-value Conclusion: Fixed imputation methods seemed to yield greatest accuracy at the individual level, suggesting use for studies on area-level environmental exposures. Fixed methods result in artificial clusters in single census tracts. For studies
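The contrast between the fixed (deterministic) and random (stochastic) apportioning approaches evaluated above can be sketched as follows; the tract weights are invented for illustration, and population weighting is the only factor shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ZIP code overlapping four census tracts, with the fraction of
# the ZIP's population living in each tract used as allocation weights.
tracts = ["A", "B", "C", "D"]
weights = np.array([0.50, 0.25, 0.15, 0.10])

def fixed_allocation(weights):
    """Deterministic: every case goes to the highest-weight tract."""
    return int(np.argmax(weights))

def random_allocation(weights, n_cases):
    """Stochastic: each case is assigned to a tract with probability = its weight."""
    return rng.choice(len(weights), size=n_cases, p=weights)

n_cases = 10_000
fixed = [fixed_allocation(weights)] * n_cases
random_assign = random_allocation(weights, n_cases)

counts = np.bincount(random_assign, minlength=4) / n_cases
print("fixed:", tracts[fixed[0]], "| random proportions:", counts.round(2))
# Fixed allocation clusters all cases in a single tract (the artificial
# clustering noted in the conclusion); random allocation reproduces the
# weight distribution across tracts.
```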
The major cost in aquaculture production systems is feed, and the use of biotechnology approaches to identify fishes with superior feed efficiency (FE) may have a positive influence on profitability. There has been little use of genetically based technologies to assess FE in cultured fishes. Mitochondria...
DEFF Research Database (Denmark)
Soon, Winnie
2014-01-01
This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork, Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...
Topics in quantum cryptography, quantum error correction, and channel simulation
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. The formula provides a new family of protocols, the private father protocol, under the resource inequality framework, which includes private classical communication without assisting secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel
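The hashing bound mentioned for Pauli channels has a simple closed form in the depolarizing case, R = 1 - H({1-p, p/3, p/3, p/3}); the sketch below evaluates this standard formula (it is not code from the thesis):

```python
from math import log2

def hashing_bound(p):
    """Hashing-bound rate for the depolarizing channel with error probability p:
    R = 1 - H({1 - p, p/3, p/3, p/3}), where H is the Shannon entropy in bits."""
    if p == 0:
        return 1.0
    probs = [1 - p, p / 3, p / 3, p / 3]
    entropy = -sum(q * log2(q) for q in probs if q > 0)
    return 1 - entropy

# The achievable rate decreases with p and crosses zero near p ~ 0.1893.
for p in (0.01, 0.05, 0.1, 0.1893):
    print(f"p = {p:.4f}: R >= {hashing_bound(p):.4f}")
```

CSS-style constructions such as the one described above are said to approach this rate on tensor powers of the channel as the block length grows.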
Lyngstad, Pål
2016-01-01
Deregulation is on the political agenda in the European countries. The Norwegian building code related to universal design and accessibility is challenged. To meet this, the Norwegian Building Authority has chosen to examine established truths and is basing its revised code on scientific research and field tests. But will this knowledge-based deregulation comply with the framework of the anti-discrimination act, and if not: who suffers, and to what extent?
Performance of Turbo Interference Cancellation Receivers in Space-Time Block Coded DS-CDMA Systems
Directory of Open Access Journals (Sweden)
Emmanuel Oluremi Bejide
2008-07-01
Full Text Available We investigate the performance of turbo interference cancellation receivers in the space-time block coded (STBC) direct-sequence code division multiple access (DS-CDMA) system. Depending on the concatenation scheme used, we divide these receivers into the partitioned approach (PA) and the iterative approach (IA) receivers. The performance of both the PA and IA receivers is evaluated in Rayleigh fading channels for the uplink scenario. Numerical results show that the MMSE front-end turbo space-time iterative approach (IA) receiver effectively combats the mixture of MAI and intersymbol interference (ISI). To further investigate the possible achievable data rates in the turbo interference cancellation receivers, we introduce the puncturing of the turbo code through the use of rate-compatible punctured turbo codes (RCPTCs). Simulation results suggest that combining interference cancellation, turbo decoding, STBC, and RCPTC can significantly improve the achievable data rates for a synchronous DS-CDMA system for the uplink in Rayleigh flat fading channels.
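As background for the STBC component, here is a minimal sketch of Alamouti's two-antenna block code and its linear combining at a single receive antenna, the textbook scheme in a noiseless flat-fading setting; the turbo cancellation stages of the paper are not modeled:

```python
import numpy as np

rng = np.random.default_rng(7)

def alamouti_encode(s1, s2):
    """One Alamouti block: rows = the two time slots, columns = the two TX antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear ML combining for a single receive antenna over flat fading."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat  # each scaled by |h1|^2 + |h2|^2

# QPSK symbols and a random Rayleigh flat-fading channel
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s1, s2 = qpsk[0], qpsk[3]
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)

X = alamouti_encode(s1, s2)
r = X @ np.array([h1, h2])              # noiseless receive over the two slots
s1_hat, s2_hat = alamouti_combine(r[0], r[1], h1, h2)

gain = abs(h1) ** 2 + abs(h2) ** 2
print(np.allclose(s1_hat / gain, s1), np.allclose(s2_hat / gain, s2))  # True True
```

The combining decouples the two symbols exactly (each estimate equals the transmitted symbol times the channel gain), which is what makes the per-symbol detection in such receivers tractable.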
Whalen, Michael; Schumann, Johann; Fischer, Bernd
2002-01-01
Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...
Energy Technology Data Exchange (ETDEWEB)
Irie, I [Kyushu Univ., Fukuoka (Japan). Faculty of Engineering; Murakami, K; Tsuruya, H [Ministry of Transportation, Tokyo (Japan). Port and Harbour Research Inst.
1991-11-20
The phenomena in which clay or mud is carried away by waves or currents and deposited in approach channels and harbors are called siltation, and they often seriously hinder the navigation of vessels and their arrival at and departure from wharves. In this paper, the hydraulic mechanism of siltation in harbors and approach channels in the sea area is treated in particular, and waves and currents as the external forces governing the travel of bottom mud, the properties of sunken mud, the supply source of sunken mud in approach channels, and the mud sinking mechanism as well as countermeasures against mud sinking are discussed, centering mainly on the results obtained from in situ observations at Kumamoto Port and Banjarmasin Port and their mathematical calculations. The bottom mud traveling mechanism has been studied from such wide viewpoints as river engineering, agriculture, environmental engineering, sanitary engineering, chemical engineering and mechanical engineering, and in addition, it has been under study in coastal engineering. Siltation under wave action is still at the research stage even in the advanced countries of America and Europe. Siltation research in Japan has a short history, but this is a field which must be addressed positively. 19 refs., 17 figs.
International Nuclear Information System (INIS)
Kawasaki, Nobuchika; Asayama, Tai
2001-09-01
Both reliability and safety have to be further improved for the successful commercialization of FBRs. At the same time, construction and operation costs need to be reduced to the same level as those of future LWRs. To realize compatibility among reliability, safety and cost, the Structural Mechanics Research Group in JNC started the development of a System Based Code for Integrity of FBR. This code extends the present structural design standard to include the areas of fabrication, installation, plant system design, safety design, operation and maintenance, and so on. A quantitative index is necessary to connect the different partial standards in this code, and failure probability is considered as a candidate index. We therefore decided to make a model calculation using failure probability and judge its applicability. We first investigated other probabilistic standards such as ASME Code Case N-578. A probabilistic approach to structural integrity evaluation was created based on these results, and an evaluation flow was also proposed. According to this flow, a model calculation of creep-fatigue damage was performed for a vessel in a sodium-cooled FBR. As the result of this model calculation, a crack initiation probability and a crack penetration probability were found to be effective indices. Finally, we discuss the merits of this System Based Code, which are presented in this report together with future development tasks. (author)
Al-Badarneh, Yazan Hussein
2018-01-25
We consider a general selection-diversity (SD) scheme in which the k-th best link is selected from a number of links. We use extreme value theory (EVT) to derive simple closed-form asymptotic expressions for the average throughput, effective throughput and average bit error probability (BEP) for the k-th best link over various channel models that are widely used to characterize fading in wireless communication systems. As an application example, we consider the Weibull fading channel model and verify the accuracy of the derived asymptotic expressions through Monte Carlo simulations.
Al-Badarneh, Yazan Hussein; Georghiades, Costas; Alouini, Mohamed-Slim
2018-01-01
We consider a general selection-diversity (SD) scheme in which the k-th best link is selected from a number of links. We use extreme value theory (EVT) to derive simple closed-form asymptotic expressions for the average throughput, effective throughput and average bit error probability (BEP) for the k-th best link over various channel models that are widely used to characterize fading in wireless communication systems. As an application example, we consider the Weibull fading channel model and verify the accuracy of the derived asymptotic expressions through Monte Carlo simulations.
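The Monte Carlo side of the verification described above can be sketched for Weibull-distributed SNR gains; the shape parameter, mean SNR and link count below are arbitrary example values, and the EVT closed forms themselves are not reproduced here:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)

def kth_best_throughput(L, k, a=2.0, mean_snr=10.0, trials=100_000):
    """Average throughput E[log2(1 + SNR_(k))] of the k-th best of L i.i.d.
    Weibull-distributed SNR gains (shape a), estimated by Monte Carlo."""
    scale = mean_snr / gamma(1 + 1 / a)        # fix the per-link mean SNR
    snr = scale * rng.weibull(a, size=(trials, L))
    snr.sort(axis=1)                           # ascending order per trial
    kth = snr[:, L - k]                        # k-th largest link gain
    return float(np.log2(1 + kth).mean())

# Selecting a worse-ranked link (larger k) lowers the average throughput.
for k in (1, 2, 3):
    print(f"k = {k}: {kth_best_throughput(L=5, k=k):.3f} bit/s/Hz")
```

Such simulations are the natural check on the asymptotic EVT expressions: as the number of links grows, the order-statistic averages should converge to the closed forms.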
Ivanov, A. V.; Reva, I. L.; Babin, A. A.
2018-04-01
The article deals with the influence of various ways of placing vibration transmitters on the efficiency of securing rooms for negotiations. The electro-optical channel, used for remote vibration listening through window glass and the most typical technical channel of information leakage, was investigated. The modern system "Sonata-AB" of the 4B model is used as an active protection tool. Factors influencing the efficiency of the security tool configuration have been determined. The results allow the user to reduce the masking interference level as well as parasitic noise while preserving the security properties of the room.
DEFF Research Database (Denmark)
Pries-Heje, Lene; Pries-Heje, Jan; Dalgaard, Bente
2013-01-01
is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...
DEFF Research Database (Denmark)
Fukui, Hironori; Popovski, Petar; Yomo, Hiroyuki
2014-01-01
Physical layer network coding (PLNC) has been proposed to improve throughput of the two-way relay channel, where two nodes communicate with each other, being assisted by a relay node. Most of the works related to PLNC are focused on a simple three-node model and they do not take into account...
Coupling of channel thermalhydraulics and fuel behaviour in ACR-1000 safety analyses
International Nuclear Information System (INIS)
Huang, F.L.; Lei, Q.M.; Zhu, W.; Bilanovic, Z.
2008-01-01
Channel thermalhydraulics and fuel thermal-mechanical behaviour are interlinked. This paper describes a channel thermalhydraulics and fuel behaviour coupling methodology that has been used in ACR-1000 safety analyses. The coupling is done for all 12 fuel bundles in a fuel channel using the channel thermalhydraulics code CATHENA MOD-3.5d/Rev 2 and the transient fuel behaviour code ELOCA 2.2. The coupling approach can be used for every fuel element or every group of fuel elements in the channel. Test cases are presented where a total of 108 fuel element models are set up to allow a full coupling between channel thermalhydraulics and detailed fuel analysis for a channel containing a string of 12 fuel bundles. An additional advantage of this coupling approach is that there is no need for a separate detailed fuel analysis because the coupling analysis, once done, provides detailed calculations for the fuel channel (fuel bundles, pressure tube, and calandria tube) as well as all the fuel elements (or element groups) in the channel. (author)
Rate-adaptive BCH codes for distributed source coding
DEFF Research Database (Denmark)
Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren
2013-01-01
This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...
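The syndrome-based mechanism underlying such codes can be illustrated with the simplest case: a (7,4) Hamming code standing in for the BCH codes of the paper, fixed-rate and without the feedback adaptation:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i is the binary
# representation of i+1, so a single-bit error yields its position as syndrome.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(v):
    return H @ v % 2

def dsc_encode(x):
    """Slepian-Wolf style encoder: send only the 3-bit syndrome of the
    7-bit source block (rate 3/7 instead of 7/7)."""
    return syndrome(x)

def dsc_decode(s, y):
    """Recover x from its syndrome and side information y, assuming x and y
    differ in at most one position (the high-correlation scenario)."""
    diff = (syndrome(y) + s) % 2                  # = syndrome of the error x XOR y
    pos = diff[0] * 4 + diff[1] * 2 + diff[2]     # syndrome value == error position
    e = np.zeros(7, dtype=int)
    if pos:
        e[pos - 1] = 1
    return (y + e) % 2

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1                           # side info: one bit flipped
print(np.array_equal(dsc_decode(dsc_encode(x), y), x))  # True
```

Rate adaptation via feedback, as in the paper, would start from fewer syndrome bits and request more only when decoding fails; the fixed-rate sketch above shows only the core syndrome decoding step.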