LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources
Directory of Open Access Journals (Sweden)
Javier Garcia-Frias
2005-05-01
Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
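The systematic encoding step described above can be sketched as follows: the message is multiplied by a sparse generator matrix G = [I | P] over GF(2). The matrix below is a toy random construction for illustration, not the design from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

k, m = 8, 4      # message bits, parity bits
row_weight = 2   # "low density": only a few ones per row of P

# Sparse part P of the systematic generator G = [I | P] over GF(2).
P = np.zeros((k, m), dtype=np.uint8)
for i in range(k):
    cols = rng.choice(m, size=row_weight, replace=False)
    P[i, cols] = 1

def ldgm_encode(u):
    """Systematic LDGM encoding: codeword = [u | u @ P mod 2]."""
    parity = u @ P % 2
    return np.concatenate([u, parity])

u = rng.integers(0, 2, size=k, dtype=np.uint8)
c = ldgm_encode(u)
```

Because the code is systematic, the message appears verbatim in the first k positions of the codeword; only the m parity bits depend on the sparse matrix.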
Joint source-channel coding using variable length codes
Balakirsky, V.B.
2001-01-01
We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant
Iterative List Decoding of Concatenated Source-Channel Codes
Directory of Open Access Journals (Sweden)
Hedayat Ahmadreza
2005-01-01
Full Text Available Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite the use of channel codes, some residual errors always remain, whose effect is magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable-length codes for joint source-channel decoding. In this paper, we improve the performance of residual-redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.
Joint source/channel coding of scalable video over noisy channels
Energy Technology Data Exchange (ETDEWEB)
Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California 94720 (United States)]
1997-01-01
We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality, and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is in the form of a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.
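The bit-partitioning idea above can be sketched with a toy model: split a fixed bit budget between source and channel coding so that expected distortion is minimized. The distortion and loss-probability curves below are illustrative assumptions, not the models of the paper.

```python
# Toy models (assumptions, not from the paper):
def source_distortion(s_bits):
    return 2.0 ** (-0.2 * s_bits)        # distortion falls with source rate

def loss_prob(c_bits):
    return 0.5 * 2.0 ** (-0.3 * c_bits)  # stronger code -> fewer losses

D_MAX = 1.0     # distortion assumed when the packet is lost
BUDGET = 64     # total bits per packet (illustrative)

def best_split(budget):
    """Exhaustively search source/channel splits minimizing expected distortion."""
    best = None
    for s in range(budget + 1):
        c = budget - s
        p = loss_prob(c)
        exp_d = (1 - p) * source_distortion(s) + p * D_MAX
        if best is None or exp_d < best[0]:
            best = (exp_d, s, c)
    return best

exp_d, s, c = best_split(BUDGET)
```

The same search generalizes directly to a channel state given as a probability distribution: replace `loss_prob` by its expectation over channel states.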
Towards Holography via Quantum Source-Channel Codes
Pastawski, Fernando; Eisert, Jens; Wilming, Henrik
2017-07-01
While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.
Multi-rate control over AWGN channels via analog joint source-channel coding
Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak
2017-01-01
We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
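The analog 1:2 bandwidth-expansion mapping can be sketched with a single-arm Archimedean spiral and grid-search ML decoding. This is a simplified stand-in (nonnegative sources only, assumed arm spacing), not the bi-spiral construction of the paper.

```python
import numpy as np

DELTA = 0.25  # spiral arm spacing; a design parameter (assumption)

def sk_encode(x):
    """Map a nonnegative scalar to a point on an Archimedean spiral
    (1 source sample -> 2 channel uses)."""
    return DELTA / np.pi * x * np.array([np.cos(x), np.sin(x)])

def sk_decode(y, grid=np.linspace(0, 20, 20001)):
    """ML decoding under AWGN: nearest point on the spiral (grid search)."""
    pts = DELTA / np.pi * grid[:, None] * np.stack([np.cos(grid), np.sin(grid)], 1)
    return grid[np.argmin(((pts - y) ** 2).sum(1))]

x = 7.3
y = sk_encode(x) + 0.001 * np.random.default_rng(1).normal(size=2)
x_hat = sk_decode(y)
```

At low noise the estimate tracks the source value continuously; the characteristic threshold effect of such maps appears when noise pushes the received point closer to a neighboring spiral arm.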
Optimization of Coding of AR Sources for Transmission Across Channels with Loss
DEFF Research Database (Denmark)
Arildsen, Thomas
Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion or, equivalently, at the lowest possible distortion given a specified bit rate. [...] Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel, given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources, which are sources that can [...] compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider because the proposed algorithm interacts with the source coding and exploits channel-related information typically...
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
Directory of Open Access Journals (Sweden)
Marinkovic Slavica
2006-01-01
Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
Directory of Open Access Journals (Sweden)
Ser Javier Del
2005-01-01
Full Text Available We consider the case of two correlated sources whose correlation has memory, modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by one of the sources over an additive white Gaussian noise (AWGN) channel when the output of the other source is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the transmitted source. The joint decoder uses an iterative scheme where the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.
Combined Source-Channel Coding of Images under Power and Bandwidth Constraints
Directory of Open Access Journals (Sweden)
Fossorier Marc
2007-01-01
Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.
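M-PSK signaling over an AWGN channel, the modulation underlying this framework, can be sketched as follows (plain symbol mapping with minimum-distance detection; no Gray labeling or source coding, purely illustrative):

```python
import numpy as np

def psk_mod(syms, M):
    """Map integer symbols 0..M-1 to unit-energy M-PSK constellation points."""
    return np.exp(2j * np.pi * syms / M)

def psk_demod(rx, M):
    """Minimum-distance (ML for AWGN) detection against the M-PSK constellation."""
    cands = np.exp(2j * np.pi * np.arange(M) / M)
    return np.argmin(np.abs(rx[:, None] - cands[None, :]), axis=1)

rng = np.random.default_rng(2)
M = 8
syms = rng.integers(0, M, 1000)
noise = 0.05 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
decided = psk_demod(psk_mod(syms, M) + noise, M)
```

The constant-envelope property is visible in `psk_mod`: every transmitted point has unit magnitude, which is what makes M-PSK attractive under a peak-power constraint.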
Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey
Directory of Open Access Journals (Sweden)
Pierre Siohan
2005-05-01
Full Text Available Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.
Low complexity source and channel coding for mm-wave hybrid fiber-wireless links
DEFF Research Database (Denmark)
Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan
2014-01-01
We report on the performance of channel and source coding applied for an experimentally realized hybrid fiber-wireless W-band link. Error control coding performance is presented for a wireless propagation distance of 3 m and 20 km fiber transmission. We report on peak signal-to-noise ratio perfor...
Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2011-01-01
In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e., the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered: one minimizes the average video distortion of the nodes, and the other minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate, and channel coding rate for the nodes of the visual sensor network.
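A generic particle swarm optimizer on a toy mixed-integer cost can sketch the kind of search used here. The cost function below (one continuous "power" variable, one discrete "rate index" handled by rounding) and all constants are illustrative assumptions, not the paper's actual distortion model.

```python
import numpy as np

rng = np.random.default_rng(3)

def cost(x):
    """Toy stand-in for per-node distortion: continuous power x[0],
    discrete rate index round(x[1])."""
    p, r = x[0], round(x[1])
    return (p - 1.5) ** 2 + (r - 3) ** 2 + 0.1 * p

def pso(n=20, iters=100, lo=(0, 0), hi=(5, 7)):
    """Plain PSO with inertia and cognitive/social pulls (standard constants)."""
    pos = rng.uniform(lo, hi, size=(n, 2))
    vel = np.zeros((n, 2))
    pbest = pos.copy()
    pbest_f = np.array([cost(x) for x in pos])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        vel = (0.7 * vel
               + 1.5 * rng.random((n, 2)) * (pbest - pos)
               + 1.5 * rng.random((n, 2)) * (g - pos))
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([cost(x) for x in pos])
        upd = f < pbest_f
        pbest[upd], pbest_f[upd] = pos[upd], f[upd]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

g = pso()
```

Rounding the discrete coordinate inside the cost is one simple way to let a continuous optimizer handle the mixed-integer structure described in the abstract.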
Djordjevic, Ivan; Vasic, Bane
2010-01-01
This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.
Rate-adaptive BCH codes for distributed source coding
DEFF Research Database (Denmark)
Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren
2013-01-01
This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...
Adaptive distributed source coding.
Varodayan, David; Lin, Yao-Chung; Girod, Bernd
2012-05-01
We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
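The syndrome-based idea behind this kind of distributed source coding can be sketched at toy scale: the encoder sends only the syndrome of the source word, and the decoder picks the word in that coset closest to its side information. The parity-check matrix below is an assumed small example, and exhaustive search stands in for the sum-product algorithm.

```python
from itertools import product

import numpy as np

# Tiny full-rank parity-check matrix (assumed for illustration).
H = np.array([[1, 0, 0, 1, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1]], dtype=np.uint8)

def syndrome(x):
    return H @ x % 2

def sw_decode(s, y):
    """Among all words with syndrome s, return the one closest (Hamming) to
    the side information y. Exhaustive search; real systems use sum-product."""
    best = None
    for cand in product([0, 1], repeat=H.shape[1]):
        cand = np.array(cand, dtype=np.uint8)
        if (syndrome(cand) == s).all():
            d = int((cand ^ y).sum())
            if best is None or d < best[0]:
                best = (d, cand)
    return best[1]

x = np.array([1, 0, 1, 1, 0], dtype=np.uint8)  # source word
y = np.array([1, 0, 1, 0, 0], dtype=np.uint8)  # side info: one bit flipped
x_hat = sw_decode(syndrome(x), y)
```

Only 3 syndrome bits are transmitted for the 5 source bits; the correlation with y supplies the rest, which is the Slepian-Wolf rate saving in miniature.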
Optimal super dense coding over memory channels
Shadman, Zahra; Kampermann, Hermann; Macchiavello, Chiara; Bruß, Dagmar
2011-01-01
We study the super dense coding capacity in the presence of quantum channels with correlated noise. We investigate both the cases of unitary and non-unitary encoding. Pauli channels for arbitrary dimensions are treated explicitly. The super dense coding capacity for some special channels and resource states is derived for unitary encoding. We also provide an example of a memory channel where non-unitary encoding leads to an improvement in the super dense coding capacity.
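The unitary-encoding scenario can be illustrated with the standard noiseless qubit dense-coding protocol (a generic textbook sketch, not the correlated-noise setting of the paper): Alice applies one of four Paulis to her half of a Bell pair, sending two classical bits per transmitted qubit.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # |00> + |11>

# The four encodings I, X, Z, XZ map |Phi+> to the four orthogonal Bell states.
paulis = [I2, X, Z, X @ Z]

# Bell basis as columns: (U tensor I)|Phi+> for each encoding U.
bell_basis = np.stack([np.kron(U, I2) @ bell for U in paulis], axis=1)

def dense_decode(state):
    """Bob's Bell measurement: pick the Bell state with highest probability."""
    probs = np.abs(bell_basis.conj().T @ state) ** 2
    return int(np.argmax(probs))

decoded = [dense_decode(np.kron(paulis[m], I2) @ bell) for m in range(4)]
```

Because the four encoded states are orthogonal, decoding is error-free here; the paper's question is how much of this capacity survives when the two channel uses suffer correlated noise.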
Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.
2001-01-01
Quality-of-service (QoS) guarantees in real-time communication for multimedia applications are critically important. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. However, the existing frame-by-frame approach, which includes the Moving Picture Experts Group (MPEG) standards, cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG coded stream into variable-rate packet sequences to be used for our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams, each with its own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as a bound on end-to-end jitter. We have shown that our JSCC scheme is better than two other popular techniques through simulation and real video experiments in a TCP/IP environment.
Telemetry advances in data compression and channel coding
Miller, Warner H.; Morakis, James C.; Yeh, Pen-Shu
1990-01-01
This paper addresses the dependence of telecommunication channel coding (forward error correction) and source data compression coding on integrated-circuit technology. Emphasis is placed on real-time, high-speed Reed-Solomon (RS) decoding using full-custom VLSI technology. Performance curves of NASA's standard channel coder and a proposed standard lossless data compression coder are presented.
Adaptive Combined Source and Channel Decoding with Modulation ...
African Journals Online (AJOL)
In this paper, an adaptive system employing combined source and channel decoding with modulation is proposed for slow Rayleigh fading channels. A Huffman code is used as the source code, and a convolutional code is used for error control. The adaptive scheme employs a family of convolutional codes of different rates ...
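A Huffman source code like the one used here can be built with a simple heap; the symbol frequencies below are illustrative, since the record does not give the actual source statistics.

```python
import heapq

def huffman(freqs):
    """Build a Huffman code: returns a dict mapping symbol -> bitstring."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)  # tie-breaker so dicts are never compared
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        # Merge the two lightest subtrees, prefixing 0/1 to their codewords.
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, [w1 + w2, tie, merged])
        tie += 1
    return heap[0][2]

code = huffman({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10})
```

The resulting code is prefix-free and satisfies the Kraft equality; the residual redundancy of such codes is exactly what joint source-channel decoders exploit.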
Whether and Where to Code in the Wireless Relay Channel
DEFF Research Database (Denmark)
Shi, Xiaomeng; Médard, Muriel; Roetter, Daniel Enrique Lucani
2013-01-01
The throughput benefits of random linear network codes have been studied extensively for wirelined and wireless erasure networks. It is often assumed that all nodes within a network perform coding operations. In energy-constrained systems, however, coding subgraphs should be chosen to control the number of coding nodes while maintaining throughput. In this paper, we explore the strategic use of network coding in the wireless packet erasure relay channel according to both throughput and energy metrics. In the relay channel, a single source communicates to a single sink through the aid of a half-duplex relay. The fluid flow model is used to describe the case where both the source and the relay are coding, and Markov chain models are proposed to describe packet evolution if only the source or only the relay is coding. In addition to transmission energy, we take into account coding and reception...
Adaptive RAC codes employing statistical channel evaluation ...
African Journals Online (AJOL)
An adaptive encoding technique using row and column array (RAC) codes employing a different number of parity columns that depends on the channel state is proposed in this paper. The trellises of the proposed adaptive codes and a statistical channel evaluation technique employing these trellises are designed and ...
Protograph LDPC Codes Over Burst Erasure Channels
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks, the second class outperforms the first class.
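Iterative decoding of such codes on an erasure channel reduces to the standard "peeling" decoder: any parity check with exactly one erased bit determines that bit. The small parity-check matrix below is illustrative, not one of the paper's protograph designs (decoding stalls exactly when the remaining erasures contain a stopping set, which is why the paper maximizes the minimum stopping set size).

```python
import numpy as np

# Small illustrative parity-check matrix (not a protograph design).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def peel(rx):
    """Erasure decoding by peeling: repeatedly solve any check equation
    that involves exactly one erasure. rx: bits 0/1, None for erasures."""
    rx = list(rx)
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = [i for i in np.flatnonzero(row) if rx[i] is None]
            if len(unknown) == 1:
                known = sum(rx[i] for i in np.flatnonzero(row) if rx[i] is not None)
                rx[unknown[0]] = known % 2  # parity must sum to zero
                progress = True
    return rx

# [1,1,0,0,1,1] is a codeword of H; erase two of its positions.
decoded = peel([None, 1, 0, 0, None, 1])
```

usage note: each iteration either resolves at least one erasure or the decoder stops.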
Image content authentication based on channel coding
Zhang, Fan; Xu, Lei
2008-03-01
Content authentication determines whether an image has been tampered with and, if necessary, locates the malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to identify the source of the document reliably. Several techniques and concepts based on data hiding or steganography have been designed as means for image authentication. This paper presents a color image authentication algorithm based on convolutional coding. The high bits of the color digital image are coded by convolutional codes for tamper detection and localization, while the authentication messages are hidden in the low bits of the image to preserve the invisibility of the authentication. All communication channels are subject to errors introduced by additive Gaussian noise in their environment. Data perturbations cannot be eliminated, but their effect can be minimized by the use of forward error correction (FEC) techniques in the transmitted data stream and decoders in the receiving system that detect and correct bits in error. The message of each pixel is convolutionally encoded; after parity checking and block interleaving, the redundant bits are embedded in the image offset. Tampering can thus be detected and restored without accessing the original image.
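The convolutional encoding step can be sketched with the common rate-1/2, constraint-length-3 (7,5) encoder. This is a standard textbook choice; the record does not specify which generators the paper actually uses.

```python
def conv_encode(bits, g1=0o7, g2=0o5, K=3):
    """Rate-1/2 convolutional encoder with octal generators (7,5).
    Emits two output bits per input bit; flushes with K-1 zero tail bits."""
    state = 0
    out = []
    for b in bits + [0] * (K - 1):
        state = ((state << 1) | b) & ((1 << K) - 1)
        out += [bin(state & g1).count("1") % 2,   # parity against generator 1
                bin(state & g2).count("1") % 2]   # parity against generator 2
    return out

encoded = conv_encode([1])  # impulse response of the (7,5) code
```

The impulse response 11 10 11 is the classic signature of the (7,5) code; a Viterbi decoder over the same trellis then detects and corrects bit errors at the receiver.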
Decoding LDPC Convolutional Codes on Markov Channels
Directory of Open Access Journals (Sweden)
Kashyap Manohar
2008-01-01
Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.
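The Gilbert-Elliott model referenced above is a two-state (Good/Bad) Markov channel with a different bit-error rate in each state. A simulation sketch, with transition probabilities and per-state error rates chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

P_GB, P_BG = 0.05, 0.2           # Good->Bad and Bad->Good transition probs (assumed)
BER = {"G": 0.001, "B": 0.2}     # per-state bit-error rates (assumed)

def gilbert_elliott(bits):
    """Flip each bit with the current state's BER, then update the state."""
    state = "G"
    out = []
    for b in bits:
        out.append(b ^ (rng.random() < BER[state]))
        if state == "G" and rng.random() < P_GB:
            state = "B"
        elif state == "B" and rng.random() < P_BG:
            state = "G"
    return out

rx = gilbert_elliott([0] * 100_000)
err_rate = sum(rx) / len(rx)
```

The stationary bad-state probability is P_GB / (P_GB + P_BG) = 0.2 here, giving an average error rate near 0.8·0.001 + 0.2·0.2 ≈ 0.041; the bursty error pattern is what a joint decoder/state estimator exploits.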
Bidirectional Fano Algorithm for Lattice Coded MIMO Channels
Al-Quwaiee, Hessa
2013-01-01
channel model. Channel codes based on lattices are preferred due to three facts: lattice codes have a simple structure, they can achieve the limits of the channel, and they can be decoded efficiently using lattice decoders, which can be considered
Channel coding techniques for wireless communications
Deergha Rao, K
2015-01-01
The book discusses modern channel coding techniques for wireless communications such as turbo codes, low-density parity check (LDPC) codes, space–time (ST) coding, RS (or Reed–Solomon) codes and convolutional codes. Many illustrative examples are included in each chapter for easy understanding of the coding techniques. The text is integrated with MATLAB-based programs to enhance the understanding of the subject’s underlying theories. It includes current topics of increasing importance such as turbo codes, LDPC codes, Luby transform (LT) codes, Raptor codes, and ST coding in detail, in addition to the traditional codes such as cyclic codes, BCH (or Bose–Chaudhuri–Hocquenghem) and RS codes and convolutional codes. Multiple-input and multiple-output (MIMO) communications is a multiple antenna technology, which is an effective method for high-speed or high-reliability wireless communications. PC-based MATLAB m-files for the illustrative examples are provided on the book page on Springer.com for free dow...
Optimal Codes for the Burst Erasure Channel
Hamkins, Jon
2010-01-01
Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctable burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
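The block-interleaved SPC construction can be sketched at toy scale: one parity symbol per column, transmitted row by row, so a burst no longer than the interleaving depth erases at most one symbol per column and every column is recoverable. Dimensions below are illustrative.

```python
import numpy as np

def encode(data, depth):
    """SPC per column, block-interleaved: append a parity row, transmit
    row-wise so a burst of <= depth erasures hits each column at most once."""
    mat = np.array(data).reshape(-1, depth)
    parity = mat.sum(axis=0) % 2           # column sums become even
    return np.vstack([mat, parity]).ravel()

def decode(rx, depth):
    """rx: list of bits with None for erasures; fix <=1 erasure per column."""
    mat = np.array([-1 if v is None else v for v in rx]).reshape(-1, depth)
    for j in range(depth):
        col = mat[:, j]
        missing = np.flatnonzero(col < 0)
        if len(missing) == 1:
            col[missing[0]] = col[col >= 0].sum() % 2  # restore even parity
    return mat[:-1].ravel().tolist()       # drop the parity row

data = [1, 0, 1, 1, 0, 1, 0, 0]            # 2 rows x 4 columns
tx = encode(data, depth=4).tolist()
tx[4:8] = [None] * 4                        # burst of 4 consecutive erasures
recovered = decode(tx, depth=4)
```

An RS code in place of the SPC handles multiple erasures per column, which is the general MDS case discussed in the abstract.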
Radio frequency channel coding made easy
Faruque, Saleh
2016-01-01
This book introduces Radio Frequency Channel Coding to a broad audience. The author blends theory and practice to bring readers up-to-date in key concepts, underlying principles and practical applications of wireless communications. The presentation is designed to be easily accessible, minimizing mathematics and maximizing visuals.
Distributed source coding of video
DEFF Research Database (Denmark)
Forchhammer, Søren; Van Luong, Huynh
2015-01-01
A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, shifting processing steps conventionally performed at the video encoder side to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide…
Protograph LDPC Codes for the Erasure Channel
Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes. Simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed.
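The "copy-and-permute" operation can be sketched concretely: make Z copies of each protograph node and permute the edge copies, which for circulant permutations amounts to replacing each base-matrix entry by a cyclically shifted Z x Z identity block. The following Python/NumPy snippet is an illustrative lifting; the base matrix and shift values are made-up examples, not codes from the presentation.

```python
import numpy as np

def lift_protograph(base: np.ndarray, Z: int, shifts: np.ndarray) -> np.ndarray:
    """Expand a protograph base matrix into a Z-fold lifted parity-check
    matrix: each 1 in `base` becomes a Z x Z cyclically shifted identity
    (shift taken from `shifts`); each 0 becomes a Z x Z zero block."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i, j], axis=1)
    return H

base = np.array([[1, 1, 1],
                 [1, 1, 0]])
shifts = np.array([[0, 1, 2],
                   [2, 0, 0]])
H = lift_protograph(base, Z=4, shifts=shifts)
# H is 8 x 12; each lifted column inherits its protograph column's degree
```

Real protograph designs choose the lift size and shift values carefully (e.g., to avoid short cycles); here they are arbitrary.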
New Channel Coding Methods for Satellite Communication
Directory of Open Access Journals (Sweden)
J. Sebesta
2010-04-01
Full Text Available This paper deals with new progressive channel coding methods for short message transmission via a satellite transponder using a predetermined frame length. The key benefits of this contribution are the modification and implementation of a new turbo code and the utilization of its unique features, with applications of methods for bit error rate estimation and an algorithm for output message reconstruction. The mentioned methods allow error-free communication at a very low Eb/N0 ratio and have been adopted for satellite communication; however, they can also be applied to other systems working at very low Eb/N0 ratios.
LDPC Code Design for Nonuniform Power-Line Channels
Directory of Open Access Journals (Sweden)
Sanaei Ali
2007-01-01
Full Text Available We investigate low-density parity-check code design for discrete multitone channels over power lines. Discrete multitone channels are well modeled as nonuniform channels, that is, different bits experience different channel parameters. We propose a coding system for discrete multitone channels that allows for using a single code over a nonuniform channel. The number of code parameters for the proposed system is much greater than the number of code parameters for a conventional channel; therefore, search-based optimization methods are impractical. We first formulate the problem of optimizing the rate of an irregular low-density parity-check code, with guaranteed convergence over a general nonuniform channel, as an iterative linear program, which is significantly more efficient than search-based methods. Then we use this technique for a typical power-line channel. The methodology of this paper is directly applicable to all decoding algorithms for which a density evolution analysis is possible.
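For context on the convergence constraint used in such rate optimization, the binary erasure channel special case of density evolution has a simple closed form: the edge erasure probability evolves as x_{l+1} = eps * lambda(1 - rho(1 - x_l)), where lambda and rho are the edge-perspective degree distributions. The Python sketch below checks this recursion for a regular (3,6) ensemble; it is an illustrative uniform-channel simplification, not the paper's nonuniform-channel formulation.

```python
def bec_converges(eps, lam, rho, iters=2000, tol=1e-9):
    """Density-evolution convergence check for an irregular LDPC
    ensemble on a BEC with erasure probability `eps`.

    `lam[i]` (resp. `rho[i]`) is the fraction of edges attached to
    degree-(i+1) variable (resp. check) nodes, so the polynomials are
    lambda(x) = sum_i lam[i] * x**i.  Recursion:
        x_{l+1} = eps * lambda(1 - rho(1 - x_l))
    Returns True if the edge erasure probability is driven to ~0."""
    def poly(coeffs, x):
        return sum(c * x**i for i, c in enumerate(coeffs))
    x = eps
    for _ in range(iters):
        x = eps * poly(lam, 1.0 - poly(rho, 1.0 - x))
        if x < tol:
            return True
    return False

# Regular (3,6) ensemble: lambda(x) = x^2, rho(x) = x^5, threshold ~0.4294
lam = [0.0, 0.0, 1.0]
rho = [0.0] * 5 + [1.0]
assert bec_converges(0.40, lam, rho)       # below threshold: converges
assert not bec_converges(0.45, lam, rho)   # above threshold: stuck
```

A rate optimizer of the kind described above would treat such a convergence condition, linearized in the degree-distribution coefficients, as the constraint set of its linear program.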
Investigation Of Information Sources And Communication Channels ...
African Journals Online (AJOL)
Investigation Of Information Sources And Communication Channels In Ipm Rice ... the information accessibility of farmer groups seems as empowerment strategy. ... information sources and communication channels, in order of importance, ...
Channel coding in the space station data system network
Healy, T.
1982-01-01
A detailed discussion of the use of channel coding for error correction, privacy/secrecy, channel separation, and synchronization is presented. Channel coding, in one form or another, is an established and common element in data systems; no analysis and design of a major new system would fail to consider ways in which channel coding could make the system more effective. The presence of channel coding on TDRS, Shuttle, the Advanced Communication Technology Satellite Program system, the JSC-proposed Space Operations Center, and the proposed 30/20 GHz Satellite Communication System strongly supports the case for utilizing coding on the communications channel. The designers of the space station data system have to consider the use of channel coding.
Subchannel analysis code development for CANDU fuel channel
International Nuclear Information System (INIS)
Park, J. H.; Suk, H. C.; Jun, J. S.; Oh, D. J.; Hwang, D. H.; Yoo, Y. J.
1998-07-01
Since there are several subchannel codes in our country, such as COBRA and TORC, for PWR fuel channels but none for a CANDU fuel channel, a subchannel analysis code for a CANDU fuel channel was developed to predict flow conditions in the subchannels and to accurately assess the thermal margin, the effect of appendages, and the effect of the radial/axial power profiles of fuel bundles on flow conditions and CHF. In order to develop the subchannel analysis code for a CANDU fuel channel, subchannel analysis methodology and its applicability/pertinence were reviewed from the CANDU fuel channel point of view. Several thermalhydraulic and numerical models for subchannel analysis of a CANDU fuel channel were developed. Experimental data for the CANDU fuel channel were collected, analyzed and used to validate the subchannel analysis code developed in this work. (author). 11 refs., 3 tabs., 50 figs
Multiple Description Coding for Closed Loop Systems over Erasure Channels
DEFF Research Database (Denmark)
Østergaard, Jan; Quevedo, Daniel
2013-01-01
In this paper, we consider robust source coding in closed-loop systems. In particular, we consider a (possibly) unstable LTI system, which is to be stabilized via a network. The network has random delays and erasures on the data-rate limited (digital) forward channel between the encoder (controller) and the decoder (plant). The feedback channel from the decoder to the encoder is assumed noiseless. Since the forward channel is digital, we need to employ quantization. We combine two techniques to enhance the reliability of the system. First, in order to guarantee that the system remains stable during packet … by showing that the system can be cast as a Markov jump linear system…
Impact of intra-flow network coding on the relay channel performance: an analytical study
Apavatjrut, Anya; Goursaud, Claire; Jaffrès-Runser, Katia; Gorce, Jean-Marie
2012-01-01
International audience; One of the most powerful ways to achieve transmission reliability over wireless links is to employ efficient coding techniques. This paper investigates the performance of a transmission over a relay channel where information is protected by two layers of coding. In the first layer, transmission reliability is ensured by fountain coding at the source. The second layer incorporates network coding at the relay node. Thus, fountain coded packets are re-encoded at the relay…
Multiple LDPC decoding for distributed source coding and video coding
DEFF Research Database (Denmark)
Forchhammer, Søren; Luong, Huynh Van; Huang, Xin
2011-01-01
Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low-Density Parity-Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental…
CONIFERS: a neutronics code for reactors with channels
International Nuclear Information System (INIS)
Davis, R.S.
1977-04-01
CONIFERS is a neutronics code for nuclear reactors whose fuel is in channels that are separated from each other by several neutron mean-free-path lengths of moderator. It can treat accurately situations in which the usual homogenized-cell diffusion equation becomes inaccurate, but is more economical than other advanced methods such as response-matrix and source-sink formalisms. CONIFERS uses exact solutions of the neutron diffusion equation within each cell. It allows for the breakdown of this equation near a channel by means of data that almost any cell code can supply. It uses the results of these cell analyses in a reactor equations set that is as readily solvable as the familiar finite-difference equations set. CONIFERS can model almost any configuration of channels and other structures in two or three dimensions. It can use any number of energy groups and any reactivity scales, including scales based on control operations. It is also flexible from a programming point of view, and has convenient input and output provisions. (author)
Energy-Efficient Channel Coding Strategy for Underwater Acoustic Networks
Directory of Open Access Journals (Sweden)
Grasielli Barreto
2017-03-01
Full Text Available Underwater acoustic networks (UAN allow for efficiently exploiting and monitoring the sub-aquatic environment. These networks are characterized by long propagation delays, error-prone channels and half-duplex communication. In this paper, we address the problem of energy-efficient communication through the use of optimized channel coding parameters. We consider a two-layer encoding scheme employing forward error correction (FEC codes and fountain codes (FC for UAN scenarios without feedback channels. We model and evaluate the energy consumption of different channel coding schemes for a K-distributed multipath channel. The parameters of the FEC encoding layer are optimized by selecting the optimal error correction capability and the code block size. The results show the best parameter choice as a function of the link distance and received signal-to-noise ratio.
On Predictive Coding for Erasure Channels Using a Kalman Framework
DEFF Research Database (Denmark)
Arildsen, Thomas; Murthi, Manohar; Andersen, Søren Vang
2009-01-01
We present a new design method for robust low-delay coding of autoregressive sources for transmission across erasure channels. It is a fundamental rethinking of existing concepts: the encoder is viewed as a mechanism that produces signal measurements from which the decoder estimates the original signal. The method is based on linear predictive coding and Kalman estimation at the decoder. We employ a novel encoder state-space representation with a linear quantization noise model, so that the encoder is represented by the Kalman measurement at the decoder. The presented method designs the encoder and decoder offline through an iterative algorithm based on closed-form minimization of the trace of the decoder state error covariance. The design method is shown to provide considerable performance gains, in terms of signal-to-noise ratio, when the transmitted quantized prediction errors are subject to loss…
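A toy version of the decoder-side estimation idea above: treat the received quantized value as a noisy measurement of an AR(1) source and run a scalar Kalman filter that falls back to pure prediction when a packet is erased. This Python sketch is a simplified stand-in for the paper's joint encoder-decoder design; the AR coefficient and noise variances are arbitrary illustration values.

```python
def decode_kalman(y, a=0.9, q=1.0, r=0.1):
    """Scalar Kalman estimator for an AR(1) source x_k = a*x_{k-1} + w_k,
    Var(w_k) = q, observed through y_k = x_k + v_k, where v_k models
    quantization noise as additive noise with variance r.

    y_k is None on an erasure, in which case only the time update
    (prediction) is performed. Returns the estimates xhat_k."""
    xhat, P = 0.0, q / (1.0 - a * a)   # stationary prior
    out = []
    for yk in y:
        xhat, P = a * xhat, a * a * P + q      # time update
        if yk is not None:                     # measurement update
            K = P / (P + r)
            xhat += K * (yk - xhat)
            P *= (1.0 - K)
        out.append(xhat)
    return out
```

On an erasure the estimate decays toward the source mean at rate a, which is exactly the prediction-only behavior a decoder-side estimator exhibits when a quantized prediction error is lost.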
Ripple design of LT codes for AWGN channel
DEFF Research Database (Denmark)
Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip
2012-01-01
In this paper, we present an analytical framework for designing LT codes for additive white Gaussian noise (AWGN) channels. We show that some of the analytical results from binary erasure channels (BEC) also hold in AWGN channels with slight modifications. This enables us to apply a ripple-based design…
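Ripple-based LT design revolves around the degree distribution of the encoded symbols; the classical starting point from the BEC literature is the robust soliton distribution, sketched below in Python. The parameters c and delta are the conventional tuning knobs of that distribution, not values from this paper.

```python
import math

def robust_soliton(K, c=0.1, delta=0.5):
    """Robust soliton degree distribution for an LT code with K input
    symbols. Returns p where p[d-1] = probability of an encoded symbol
    having degree d. The spike term keeps the expected ripple (released
    degree-1 symbols) near S = c * ln(K/delta) * sqrt(K) during decoding."""
    S = c * math.log(K / delta) * math.sqrt(K)
    spike = max(1, min(K, round(K / S)))
    # ideal soliton: rho(1) = 1/K, rho(d) = 1/(d(d-1)) for d >= 2
    rho = [1.0 / K] + [1.0 / (d * (d - 1)) for d in range(2, K + 1)]
    tau = [0.0] * K
    for d in range(1, spike):
        tau[d - 1] = S / (K * d)
    tau[spike - 1] = S * math.log(S / delta) / K
    Z = sum(rho) + sum(tau)
    return [(r + t) / Z for r, t in zip(rho, tau)]

p = robust_soliton(1000)
```

A ripple-based AWGN design of the kind the paper describes would re-derive such a distribution from the modified (non-erasure) release statistics rather than reuse these BEC formulas directly.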
Turbo coding, turbo equalisation and space-time coding for transmission over fading channels
Hanzo, L; Yeap, B
2002-01-01
Against the backdrop of the emerging 3G wireless personal communications standards and broadband access network standard proposals, this volume covers a range of coding and transmission aspects for transmission over fading wireless channels. It presents the most important classic channel coding issues and also the exciting advances of the last decade, such as turbo coding, turbo equalisation and space-time coding. It endeavours to be the first book with explicit emphasis on channel coding for transmission over wireless channels. Divided into 4 parts: Part 1 explains the necessary background for novices; it aims to be both an easy-reading textbook and a deep research monograph. Part 2 provides detailed coverage of turbo conventional and turbo block coding, considering the known decoding algorithms and their performance over Gaussian as well as narrowband and wideband fading channels. Part 3 comprehensively discusses both space-time block and space-time trellis coding for the first time in the literature. Par...
Use of color-coded sleeve shutters accelerates oscillograph channel selection
Bouchlas, T.; Bowden, F. W.
1967-01-01
Sleeve-type shutters mechanically adjust individual galvanometer light beams onto or away from selected channels on oscillograph papers. In complex test setups, the sleeve-type shutters are color coded to separately identify each oscillograph channel. This technique could be used on any equipment using tubular galvanometer light sources.
Network Coded Cooperation Over Time-Varying Channels
DEFF Research Database (Denmark)
Khamfroush, Hana; Roetter, Daniel Enrique Lucani; Barros, João
2014-01-01
In this paper, we investigate the optimal design of cooperative network-coded strategies for a three-node wireless network with time-varying, half-duplex erasure channels. To this end, we formulate the problem of minimizing the total cost of transmitting M packets from the source to two receivers as a Markov Decision Process (MDP). The actions of the MDP model include the source and the type of transmission to be used in a given time slot, given perfect knowledge of the system state. The cost of packet transmission is defined such that it can incorporate the difference between broadcast and unicast transmissions, e.g., in terms of the rate of packet transmission or the energy consumption. A comprehensive analysis of the MDP solution is carried out under different network conditions to extract optimal rules of packet transmission. Inspired by the extracted rules, we propose two near-optimal heuristics…
Bilayer Protograph Codes for Half-Duplex Relay Channels
Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria
2013-01-01
Direct-to-Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth); using this additional link together with a suitable coding scheme for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them lack a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality and are not easily adapted to various channel conditions without extensive re-optimization. The code presented here combines structured design and easy encoding with rate compatibility, allowing adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel, whose high-rate members enjoy thresholds within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization.
Ripple Design of LT Codes for BIAWGN Channels
DEFF Research Database (Denmark)
Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip
2014-01-01
This paper presents a novel framework, which enables a design of rateless codes for binary input additive white Gaussian noise (BIAWGN) channels, using the ripple-based approach known from the works for the binary erasure channel (BEC). We reveal that several aspects of the analytical results from...
Performance analysis of LDPC codes on OOK terahertz wireless channels
International Nuclear Information System (INIS)
Liu Chun; Wang Chang; Cao Jun-Cheng
2016-01-01
Atmospheric absorption, scattering, and scintillation are the major causes of degraded transmission quality in terahertz (THz) wireless communications. An error control coding scheme based on low-density parity-check (LDPC) codes with a soft-decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal through an atmospheric channel. The THz wave propagation characteristics and channel model in the atmosphere are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their considerable potential for future ultra-high-speed (beyond Gbps) THz communications. (paper)
Transmission imaging with a coded source
International Nuclear Information System (INIS)
Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.
1976-01-01
The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large-area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure into the transmission images cast by large-area sources, and this permits the recovery of high-resolution images in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a ''code.'' A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image can be displayed with the aid of coherent illumination.
Directory of Open Access Journals (Sweden)
Du Bing
2010-01-01
Full Text Available A recently developed theory suggests that network coding is a generalization of source coding and channel coding and thus yields a significant performance improvement in terms of throughput and spatial diversity. This paper proposes a cooperative design of a parity-check network coding scheme in the context of a two-source multiple access relay channel (MARC) model, a common compact model in hierarchical wireless sensor networks (WSNs). The scheme uses Low-Density Parity-Check (LDPC) codes as the surrogate to build up a layered structure which encapsulates the multiple constituent LDPC codes in the source and relay nodes. Specifically, the relay node decodes the messages from the two sources, which are used to generate extra parity-check bits by a random network coding procedure to fill the rate gap between the Source-Relay and Source-Destination transmissions. Then we derive the key algebraic relationships among the multidimensional LDPC constituent codes as one of the constraints for code profile optimization. These extra check bits are sent to the destination to realize cooperative diversity as well as to approach the MARC decode-and-forward (DF) capacity.
Image transmission system using adaptive joint source and channel decoding
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder, and the log-likelihood ratios (LLR) of these bits are modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
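The feedback step described above can be sketched as follows: positions the JPEG2000 decoder verifies as correct get their LLRs reinforced, while flagged-error positions are flipped and damped, with a weighting factor w chosen as a function of channel SNR. The specific update rule in this Python sketch is an assumed illustrative form, not the authors' exact design.

```python
def reweight_llrs(llrs, hard_bits, known_correct, known_error, w):
    """Adjust channel-decoder LLRs using source-decoder feedback.

    llrs:          per-bit LLRs after an iteration (positive favors bit 0)
    hard_bits:     tentative hard decisions from the same iteration
    known_correct: indices the source decoder verified as correct
    known_error:   indices the source decoder flagged as erroneous
    w:             weighting factor (> 1), chosen as a function of SNR
                   (larger at low SNR, per the paper's design rule)"""
    out = list(llrs)
    for i in known_correct:
        # reinforce the current decision toward certainty
        out[i] = w * (1 if hard_bits[i] == 0 else -1) * abs(llrs[i])
    for i in known_error:
        # current decision is wrong: flip the sign, damp the magnitude
        out[i] = -llrs[i] / w
    return out

new = reweight_llrs([2.0, -1.5, 0.3], [0, 1, 0],
                    known_correct=[0], known_error=[2], w=4.0)
```

The reinforced and flipped LLRs then seed the next sum-product iteration, which is what shortens the convergence.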
LDPC coded OFDM over the atmospheric turbulence channel.
Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A
2007-05-14
Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10^-5, the coding gain improvement of the LDPC coded single-side-band unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
Medical reliable network using concatenated channel codes through GSM network.
Ahmed, Emtithal; Kohno, Ryuji
2013-01-01
Although the 4th generation (4G) of the global mobile communication network, i.e. Long Term Evolution (LTE), coexisting with the 3rd generation (3G), has successfully started, the 2nd generation (2G), i.e. the Global System for Mobile communication (GSM), still plays an important role in many developing countries. Without any other reliable network infrastructure, GSM can be applied to tele-monitoring applications where high mobility and low cost are necessary. A core objective of this paper is to introduce the design of a more reliable and dependable Medical Network Channel Code (MNCC) system through the GSM network. The MNCC design is based on a simple concatenated channel code: a cascade of an inner code (GSM) and an additional outer code (convolutional code), which protects medical data more robustly against channel errors than other data using the existing GSM network. The MNCC system provides the bit error rate (BER) required for medical tele-monitoring of physiological signals, which is 10^-5 or less. The performance of the MNCC has been investigated and verified using computer simulations under different channel conditions, such as additive white Gaussian noise (AWGN), Rayleigh fading and burst noise. In general, the MNCC system provides better performance than GSM alone.
Present state of the SOURCES computer code
International Nuclear Information System (INIS)
Shores, Erik F.
2002-01-01
In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.
Image authentication using distributed source coding.
Lin, Yao-Chung; Varodayan, David; Girod, Bernd
2012-01-01
We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
Improved Iterative Decoding of Network-Channel Codes for Multiple-Access Relay Channel.
Majumder, Saikat; Verma, Shrish
2015-01-01
Cooperative communication using relay nodes is one of the most effective means of exploiting space diversity for low-cost nodes in a wireless network. In cooperative communication, users, besides communicating their own information, also relay the information of other users. In this paper we investigate a scheme where cooperation is achieved using a common relay node which performs network coding to provide space diversity for two information nodes transmitting to a base station. We propose a scheme which uses a Reed-Solomon error correcting code for encoding the information bits at the user nodes and a convolutional code as the network code, instead of XOR-based network coding. Based on this encoder, we propose iterative soft decoding of the joint network-channel code by treating it as a concatenated Reed-Solomon/convolutional code. Simulation results show significant improvement in performance compared to an existing scheme based on compound codes.
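For contrast with the proposed convolutional network code, the XOR baseline it replaces works as follows: the relay broadcasts the bitwise XOR of the two users' packets, and the base station recovers whichever packet it lost by XORing the relayed combination with the one it received directly. A minimal Python sketch (packet contents are placeholders):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

# Relay: receives p1, p2 from users 1 and 2, broadcasts their XOR.
p1 = b"user-one-data"
p2 = b"user-two-data"
relayed = xor_bytes(p1, p2)

# Base station: decoded p1 on the direct link but lost p2 (or vice
# versa); the XOR combination restores the missing packet.
recovered_p2 = xor_bytes(relayed, p1)
recovered_p1 = xor_bytes(relayed, p2)
```

The XOR combination carries one parity packet's worth of protection for both flows, which is exactly the diversity the stronger convolutional network code generalizes.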
Channel modeling, signal processing and coding for perpendicular magnetic recording
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by
Joint opportunistic scheduling and network coding for bidirectional relay channel
Shaqfeh, Mohammad
2013-07-01
In this paper, we consider a two-way communication system in which two users communicate with each other through an intermediate relay over block-fading channels. We investigate the optimal opportunistic scheduling scheme in order to maximize the long-term average transmission rate in the system assuming symmetric information flow between the two users. Based on the channel state information, the scheduler decides that either one of the users transmits to the relay, or the relay transmits to a single user or broadcasts to both users a combined version of the two users' transmitted information by using linear network coding. We obtain the optimal scheduling scheme by using the Lagrangian dual problem. Furthermore, in order to characterize the gains of network coding and opportunistic scheduling, we compare the achievable rate of the system versus suboptimal schemes in which the gains of network coding and opportunistic scheduling are partially exploited. © 2013 IEEE.
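A greedy stand-in for the scheduling decision described above: in each fading block, compare the achievable user-to-relay rates against a network-coded broadcast whose rate is limited by the weaker downlink, and pick the action with the largest immediate rate. The paper derives the optimal rule via the Lagrangian dual; this Python sketch (with made-up rate inputs) only illustrates the action space.

```python
def schedule_slot(r1_up, r2_up, r1_down, r2_down, q1, q2):
    """Pick an action for one fading block of the two-way relay system.

    r1_up, r2_up:     achievable user->relay rates this block
    r1_down, r2_down: achievable relay->user rates this block
    q1, q2:           packets buffered at the relay for users 1 and 2
    Returns the name of the highest-rate action (greedy heuristic)."""
    actions = {"user1_tx": r1_up, "user2_tx": r2_up}
    if q1 > 0 and q2 > 0:
        # network-coded broadcast serves both users at once, so it is
        # credited twice the rate of the weaker downlink
        actions["relay_broadcast"] = 2 * min(r1_down, r2_down)
    elif q1 > 0:
        actions["relay_to_user1"] = r1_down
    elif q2 > 0:
        actions["relay_to_user2"] = r2_down
    return max(actions, key=actions.get)
```

The broadcast term shows where the network-coding gain enters: one transmission delivers useful information to both users, but only at the rate the weaker channel supports.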
Progressive transmission of images over fading channels using rate-compatible LDPC codes.
Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul
2006-12-01
In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.
Measuring Modularity in Open Source Code Bases
Directory of Open Access Journals (Sweden)
Roberto Milev
2009-03-01
Full Text Available Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.
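As a toy illustration of the kind of quantity such a metric captures (this is not the Carleton metric or toolkit, just a common simple proxy), one can measure the fraction of dependency edges in a code base that stay inside a module; higher values suggest looser coupling between modules.

```python
def modularity_ratio(edges, module_of):
    """Fraction of dependency edges internal to a module.

    edges: iterable of (src_file, dst_file) dependency pairs.
    module_of: dict mapping each file to its module name.
    """
    total = internal = 0
    for src, dst in edges:
        total += 1
        if module_of[src] == module_of[dst]:
            internal += 1
    return internal / total if total else 1.0
```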
Channel estimation for physical layer network coding systems
Gao, Feifei; Wang, Gongpu
2014-01-01
This SpringerBrief presents channel estimation strategies for physical layer network coding (PLNC) systems. Along with a review of PLNC architectures, this brief examines new challenges brought by the special structure of bi-directional two-hop transmissions, which differ from traditional point-to-point systems and unidirectional relay systems. The authors discuss channel estimation strategies over typical fading scenarios, including frequency flat fading, frequency selective fading and time selective fading, as well as future research directions. Chapters explore the performa
Statistical mechanics analysis of LDPC coding in MIMO Gaussian channels
Energy Technology Data Exchange (ETDEWEB)
Alamino, Roberto C; Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)
2007-10-12
Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under low-density parity-check (LDPC) network coding and joint decoding. The saddle point equations for the replica symmetric solution are found in particular realizations of this channel, including a small and large number of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and symmetric and asymmetric interference. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement/deterioration achieved with respect to the single transmitter/receiver result, for the various cases.
Code Forking, Governance, and Sustainability in Open Source Software
Juho Lindman; Linus Nyman
2013-01-01
The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is, to start a new development effort using existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibilit...
Gallager error-correcting codes for binary asymmetric channels
International Nuclear Information System (INIS)
Neri, I; Skantzos, N S; Bollé, D
2008-01-01
We derive critical noise levels for Gallager codes on asymmetric channels as a function of the input bias and the temperature. Using a statistical mechanics approach we study the space of codewords and the entropy in the various decoding regimes. We further discuss the relation of the convergence of the message passing algorithm with the endogenous property and complexity, characterizing solutions of recursive equations of distributions for cavity fields
Improved virtual channel noise model for transform domain Wyner-Ziv video coding
DEFF Research Database (Denmark)
Huang, Xin; Forchhammer, Søren
2009-01-01
Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information, exploiting the statistics at the decoder to reduce the computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video coding is proposed, which utilizes cross-band correlation to estimate the Laplacian parameters more accurately. Experimental results show that the proposed noise model can improve the rate-distortion (RD) performance.
Bidirectional Fano Algorithm for Lattice Coded MIMO Channels
Al-Quwaiee, Hessa
2013-05-08
Recently, lattices, a mathematical representation of infinite discrete points in Euclidean space, have become an effective way to describe and analyze communication systems, especially those that can be modeled as a linear Gaussian vector channel. Channel codes based on lattices are preferred for three reasons: lattice codes have a simple structure, they can achieve the limits of the channel, and they can be decoded efficiently using lattice decoders, which can be considered as a Closest Lattice Point Search (CLPS). Since lattice codes were introduced to the Multiple Input Multiple Output (MIMO) channel, the Sphere Decoder (SD) has been an efficient way to implement lattice decoders. The sphere decoder offers optimal performance at the expense of high decoding complexity, especially at low signal-to-noise ratios (SNR) and for high-dimensional systems. On the other hand, linear and non-linear receivers, Minimum Mean Square Error (MMSE) and MMSE Decision-Feedback Equalization (DFE), provide the lowest decoding complexity, but unfortunately with poor performance. Several studies have been conducted in recent years to address the problem of designing low-complexity decoders for the MIMO channel that can achieve near-optimal performance. It was found that sequential decoders using backward tree search can bridge the gap between SD and MMSE. The sequential decoder provides an interesting performance-complexity trade-off using a bias term. Yet, the sequential decoder still suffers from high complexity for mid-to-high SNR values. In this work, we propose a new algorithm for a Bidirectional Fano sequential Decoder (BFD) in order to reduce the mid-to-high SNR complexity. Our algorithm consists of first constructing a unidirectional sequential decoder based on forward search using the QL decomposition. After that, BFD incorporates two searches, forward and backward, working simultaneously until they merge and find the closest lattice point to the
Performance analysis of LDPC codes on OOK terahertz wireless channels
Chun, Liu; Chang, Wang; Jun-Cheng, Cao
2016-02-01
Atmospheric absorption, scattering, and scintillation are the major causes of degraded transmission quality in terahertz (THz) wireless communications. An error control coding scheme based on low-density parity-check (LDPC) codes with a soft-decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal through an atmospheric channel. The THz wave propagation characteristics and the channel model in the atmosphere are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their potential for future ultra-high-speed (beyond Gbps) THz communications. Project supported by the National Key Basic Research Program of China (Grant No. 2014CB339803), the National High Technology Research and Development Program of China (Grant No. 2011AA010205), the National Natural Science Foundation of China (Grant Nos. 61131006, 61321492, and 61204135), the Major National Development Project of Scientific Instrument and Equipment (Grant No. 2011YQ150021), the National Science and Technology Major Project (Grant No. 2011ZX02707), the International Collaboration and Innovation Program on High Mobility Materials Engineering of the Chinese Academy of Sciences, and the Shanghai Municipal Commission of Science and Technology (Grant No. 14530711300).
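As a minimal illustration of parity-check decoding (the paper uses soft-decision decoding of real LDPC codes; the tiny check matrix and hard-decision bit-flipping rule below are toy substitutes and can fail on dense matrices or heavier error patterns), each bit is flipped when more of its parity checks are unsatisfied than satisfied:

```python
# Toy parity-check matrix (3 checks x 7 bits), illustrative only.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def bit_flip_decode(H, received, max_iter=20):
    """Hard-decision bit-flipping: flip bits with a majority of failed checks."""
    word = list(received)
    n = len(word)
    for _ in range(max_iter):
        syndrome = [sum(h[j] * word[j] for j in range(n)) % 2 for h in H]
        if not any(syndrome):
            return word  # all parity checks satisfied
        flips = []
        for j in range(n):
            unsat = sum(h[j] for h, s in zip(H, syndrome) if s)
            sat = sum(h[j] for h, s in zip(H, syndrome) if not s)
            if unsat > sat:
                flips.append(j)
        if not flips:
            break  # stuck: no bit has a majority of failed checks
        for j in flips:
            word[j] ^= 1  # flip all majority-voted bits simultaneously
    return word
```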
H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints
Directory of Open Access Journals (Sweden)
Ghandi MM
2006-01-01
Full Text Available This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC or hierarchical quadrature amplitude modulation (HQAM can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.
A finite range coupled channel Born approximation code
International Nuclear Information System (INIS)
Nagel, P.; Koshel, R.D.
1978-01-01
The computer code OUKID calculates differential cross sections for direct transfer nuclear reactions in which multistep processes, arising from strongly coupled inelastic states in both the target and residual nuclei, are possible. The code is designed for heavy ion reactions where full finite range and recoil effects are important. Distorted wave functions for the elastic and inelastic scattering are calculated by solving sets of coupled differential equations using a Matrix Numerov integration procedure. These wave functions are then expanded into bases of spherical Bessel functions by the plane-wave expansion method. This approach allows the six-dimensional integrals for the transition amplitude to be reduced to products of two one-dimensional integrals. Thus, the inelastic scattering is treated in a coupled channel formalism while the transfer process is treated in a finite range Born approximation formalism. (Auth.)
Rice, R. F.; Hilbert, E. E. (Inventor)
1976-01-01
A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
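The interleaver stage of such a concatenated chain can be sketched as a block interleaver (the parameters here are illustrative, not those of the patented system): symbols are written row by row and read column by column, so a burst of consecutive channel errors is spread across several Reed-Solomon codewords, each of which then sees only a few symbol errors.

```python
def interleave(symbols, depth, width):
    """Write row by row into a depth x width array, read column by column."""
    assert len(symbols) == depth * width
    rows = [symbols[i * width:(i + 1) * width] for i in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(symbols, depth, width):
    """Inverse: write column by column, read row by row."""
    assert len(symbols) == depth * width
    cols = [symbols[c * depth:(c + 1) * depth] for c in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]
```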
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
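A minimal sketch of the syndrome-source-coding idea, using the (7,4) Hamming parity-check matrix (illustrative only; the paper treats general codes and sources): a length-7 source block is compressed to its 3-bit syndrome, and the decoder reconstructs the minimum-weight pattern with that syndrome, which is exact whenever the block has weight at most 1.

```python
# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j + 1, so weight-1 patterns have distinct syndromes.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def compress(x):
    """Compress a 7-bit source block to its 3-bit syndrome s = H x (mod 2)."""
    return [sum(h[j] * x[j] for j in range(len(x))) % 2 for h in H]

def decompress(s, n=7):
    """Recover the minimum-weight pattern with syndrome s (weight <= 1 only)."""
    if not any(s):
        return [0] * n
    for j in range(n):
        x = [0] * n
        x[j] = 1
        if compress(x) == s:
            return x
    raise ValueError("source block weight exceeds the decoder's search radius")
```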
On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks
DEFF Research Database (Denmark)
Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk
2013-01-01
quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding and robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient…
Lower Bounds on the Capacity of the Relay Channel with States at the Source
Directory of Open Access Journals (Sweden)
Abdellatif Zaidi
2009-01-01
Full Text Available We consider a state-dependent three-terminal full-duplex relay channel with the channel states noncausally available at only the source, that is, neither at the relay nor at the destination. This model has application to cooperation over certain wireless channels with asymmetric cognition capabilities and to cognitive interference relay channels. We establish lower bounds on the channel capacity for both discrete memoryless (DM and Gaussian cases. For the DM case, the coding scheme for the lower bound uses techniques of rate-splitting at the source, decode-and-forward (DF relaying, and a Gel'fand-Pinsker-like binning scheme. In this coding scheme, the relay decodes only part of the information sent by the source. Due to the rate-splitting, this lower bound is better than the one obtained by assuming that the relay decodes all the information from the source, that is, full-DF. For the Gaussian case, we consider channel models in which the relay node and the destination node each experience an additive Gaussian outside interference on their links. We first focus on the case in which the links to the relay and to the destination are corrupted by the same interference, and then on the case of independent interferences. We also discuss a model with correlated interferences. For each of the first two models, we establish a lower bound on the channel capacity. The coding schemes for the lower bounds use techniques of dirty paper coding or carbon copying onto dirty paper, interference reduction at the source, and decode-and-forward relaying. The results reveal that, in contrast to carbon copying onto dirty paper and its root, Costa's initial dirty paper coding (DPC, it may be beneficial in our setup for the informed source to use part of its power to partially cancel the effect of the interference, so that the uninformed relay benefits from this cancellation, and the source benefits in turn.
An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem
Directory of Open Access Journals (Sweden)
Tu Zhenyu
2005-01-01
Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF in the source encoder and an inverse syndrome former (ISF in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallelly and serially concatenated codes, and particularly parallel and serial turbo codes, where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provably optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
Transmission over UWB channels with OFDM system using LDPC coding
Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech
2009-06-01
A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concerns Ultra-Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the transmission system with OFDM modulation is combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system are decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, is also used in the presented system. It is placed after the SPA (Sum-Product Algorithm) decoder and is conditionally turned on during the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, differing in the type of parity-check matrix: randomly generated and deterministically constructed, the latter optimized for a practical decoder architecture implemented in an FPGA device.
Research on Primary Shielding Calculation Source Generation Codes
Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun
2017-09-01
Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energies and weights from the CDFs. A source generation code is also developed to transform three-dimensional (3D) power distributions in xyz geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validation on the PSC models of the Qinshan No.1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors is performed. Numerical results show that the theoretical model and the codes are both correct.
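A source particle sample code of this kind typically draws samples from the tabulated CDFs by inverse-transform sampling. The sketch below is a generic illustration (the bin edges and CDF values are made up, not JMCT data): a uniform random number selects a bin from the CDF table, and the sampled value is linearly interpolated within that bin.

```python
import bisect
import random

def sample_from_cdf(edges, cdf, u=None):
    """Inverse-transform sampling from a tabulated CDF.

    edges: bin boundaries (length n + 1).
    cdf: cumulative probability at the upper edge of each bin
         (length n, last value 1.0).
    u: optional uniform variate in [0, 1) for deterministic testing.
    """
    if u is None:
        u = random.random()
    i = bisect.bisect_left(cdf, u)          # first bin whose CDF >= u
    lo = cdf[i - 1] if i > 0 else 0.0
    frac = (u - lo) / (cdf[i] - lo)         # position within the bin
    return edges[i] + frac * (edges[i + 1] - edges[i])
```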
Rotated Walsh-Hadamard Spreading with Robust Channel Estimation for a Coded MC-CDMA System
Directory of Open Access Journals (Sweden)
Raulefs Ronald
2004-01-01
Full Text Available We investigate rotated Walsh-Hadamard spreading matrices for a broadband MC-CDMA system with robust channel estimation in the synchronous downlink. The similarities between rotated spreading and signal space diversity are outlined. In a multiuser MC-CDMA system, possible performance improvements are based on the chosen detector, the channel code, and its Hamming distance. By applying rotated spreading in comparison to a standard Walsh-Hadamard spreading code, a higher throughput can be achieved. As combining the channel code and the spreading code forms a concatenated code, the overall minimum Hamming distance of the concatenated code increases. This asymptotically results in an improvement of the bit error rate for high signal-to-noise ratio. Higher convolutional channel code rates are mostly generated by puncturing good low-rate channel codes. The overall Hamming distance decreases significantly for the punctured channel codes. Higher channel code rates are favorable for MC-CDMA, as MC-CDMA utilizes diversity more efficiently compared to pure OFDMA. The application of rotated spreading in an MC-CDMA system allows exploiting diversity even further. We demonstrate that the rotated spreading gain is still present for a robust pilot-aided channel estimator. In a well-designed system, rotated spreading extends the performance by using a maximum likelihood detector with robust channel estimation at the receiver by about 1 dB.
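The Walsh-Hadamard spreading matrix underlying this scheme can be built by the Sylvester recursion. The sketch below constructs only this base matrix (the rotation studied in the paper, which multiplies it by a rotation matrix to obtain signal-space diversity, is omitted); its rows are mutually orthogonal, which is what makes it usable as a spreading code.

```python
def hadamard(n):
    """Walsh-Hadamard matrix of size n via Sylvester construction.

    n must be a power of two; entries are +1 / -1 and rows are orthogonal.
    """
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    h = [[1]]
    while len(h) < n:
        # H_{2m} = [[H_m, H_m], [H_m, -H_m]]
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h
```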
The Visual Code Navigator : An Interactive Toolset for Source Code Investigation
Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru
2005-01-01
We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a
HYTRAN: hydraulic transient code for investigating channel flow stability
International Nuclear Information System (INIS)
Kao, H.S.; Cardwell, W.R.; Morgan, C.D.
1976-01-01
HYTRAN is an analytical program used to investigate the possibility of hydraulic oscillations occurring in a reactor flow channel. The single channel studied is ordinarily the hot channel in the reactor core, which is parallel to other channels and is assumed to share a constant pressure drop with other channels. Since the channel of highest thermal state is studied, provision is made for two-phase flow that can cause a flow instability in the channel. HYTRAN uses the CHATA(1) program to establish a steady-state condition. A heat flux perturbation is then imposed on the channel, and the flow transient is calculated as a function of time
Source Code Stylometry Improvements in Python
2017-12-14
Just as a person can be identified via their handwriting, or an author identified by their style of prose, programmers can be identified by their code. Provided a labelled training set of code samples (example in Fig. 1), the techniques used in stylometry can identify the author of a piece of code or even
Bit rates in audio source coding
Veldhuis, Raymond N.J.
1992-01-01
The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a
Yahampath, Pradeepa
2017-12-01
Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission however is optimal at all CSNRs, if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. Analog part uses linear encoding to transmit the quantization error which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources
DEFF Research Database (Denmark)
Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.
2012-01-01
This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH coding achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between the source symbols and the side information.
Quantum-capacity-approaching codes for the detected-jump channel
International Nuclear Information System (INIS)
Grassl, Markus; Wei Zhaohui; Ji Zhengfeng; Zeng Bei
2010-01-01
The quantum-channel capacity gives the ultimate limit for the rate at which quantum data can be reliably transmitted through a noisy quantum channel. Degradable quantum channels are among the few channels whose quantum capacities are known. Given the quantum capacity of a degradable channel, it remains challenging to find a practical coding scheme which approaches capacity. Here we discuss code designs for the detected-jump channel, a degradable channel with practical relevance describing the physics of spontaneous decay of atoms with detected photon emission. We show that this channel can be used to simulate a binary classical channel with both erasures and bit flips. The capacity of the simulated classical channel gives a lower bound on the quantum capacity of the detected-jump channel. When the jump probability is small, it almost equals the quantum capacity. Hence using a classical capacity-approaching code for the simulated classical channel yields a quantum code which approaches the quantum capacity of the detected-jump channel.
Data processing with microcode designed with source coding
McCoy, James A; Morrison, Steven E
2013-05-07
Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.
Repairing business process models as retrieved from source code
Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.
2013-01-01
The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models
Typical performance of regular low-density parity-check codes over general symmetric channels
International Nuclear Information System (INIS)
Tanaka, Toshiyuki; Saad, David
2003-01-01
Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. Relationship between the free energy in statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models
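The binary-input AWGN channel named above as a specific model has a mutual information that is easy to estimate numerically. The sketch below is a generic Monte Carlo estimate (it is not part of the paper's statistical-mechanics analysis) of the standard formula C = 1 − E[log2(1 + e^(−2y/σ²))], where y = 1 + σz for transmitted symbol +1 and z ~ N(0, 1):

```python
import math
import random

def bi_awgn_capacity(sigma, n_samples=100_000, seed=0):
    """Monte Carlo estimate of BPSK-input AWGN mutual information (bits/use).

    C = 1 - E[log2(1 + exp(-2 y / sigma^2))], with y = 1 + sigma * z.
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        y = 1.0 + sigma * rng.gauss(0.0, 1.0)
        acc += math.log2(1.0 + math.exp(-2.0 * y / sigma ** 2))
    return 1.0 - acc / n_samples
```

As expected, the estimate approaches 1 bit/use at high SNR (small sigma) and falls toward 0 as the noise grows.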
Nuclear fusion ion beam source composed of optimum channel wall
International Nuclear Information System (INIS)
Furukaw, T.
2007-01-01
Full text of publication follows: Numerical and experimental research on the hall-type beam accelerator was conducted, highlighting both the neutral species and the material of the acceleration channel wall. The hall-type beam accelerator is expected to serve as an ion beam source for nuclear fusion, since it can produce an ion beam density over 10^3 times as high as that of the electrostatic accelerator regularly used as a beam heating device; it is proven that the beam heating method can accelerate ions to a high-energy beam by an electric field and heat plasma to ultra-high temperatures of 100 million degrees or more. At the high-voltage mode of the DC regime, which is the normal operational condition, however, various plasma MHD (magneto-hydrodynamic) instabilities are generated. In particular, the large-amplitude, low-frequency plasma MHD instability in the tens of kHz has been a serious problem that must be solved to improve operational stability and system durability. We therefore propose a hall-type beam accelerator with new design concepts: a simultaneous solution for reducing both the plasma MHD instability and the accelerator core overheating, together with an optimum combination of acceleration channel wall materials. The technologies for this concept are as follows: 1) Increasing the inlet velocity of the neutral species in the acceleration channel, by preheating the propellant through a circular propellant conduit line inside the accelerator system, lowers the amplitude of the instability. 2) Through this method, the accelerator system is cooled, and higher thrust and specific impulse are produced while hardly changing the thrust efficiency. 3) Selecting BN (Boron Nitride) and Al2O3, which have different secondary-electron emission coefficients, as the wall materials of the ionization and acceleration zones of the acceleration channel, respectively, achieves higher efficiency and durability. The hall-type beam accelerator designed using these technologies
Directory of Open Access Journals (Sweden)
Crespo, Pedro M.
2011-01-01
Full Text Available This paper focuses on the data fusion scenario where nodes sense and transmit the data generated by a source to a common destination, which estimates the original information more accurately than in the case of a single sensor. This work joins the upsurge of research interest in this topic by addressing the setup where the sensed information is transmitted over a Gaussian Multiple-Access Channel (MAC). We use Low-Density Generator Matrix (LDGM) codes in order to preserve the correlation between the transmitted codewords, which leads to an improved received Signal-to-Noise Ratio (SNR) thanks to the constructive signal addition at the receiver front-end. At reception, we propose a joint decoder and estimator that exchanges soft information between the LDGM decoders and a data fusion stage. An error-correcting Bose, Ray-Chaudhuri, Hocquenghem (BCH) code is further applied to suppress the error floor derived from the ambiguity of the MAC channel when dealing with correlated sources. Simulation results are presented for several parameter values and diverse LDGM and BCH codes, based on which we conclude that the proposed scheme significantly outperforms (by up to 6.3 dB) the suboptimal limit that assumes separation between Slepian-Wolf source coding and capacity-achieving channel coding.
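The SNR gain from constructive addition of correlated codewords on a Gaussian MAC can be illustrated with a short simulation. This is a toy sketch, not the paper's LDGM scheme; the correlation parameter `p` and all numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000          # codeword length
p = 0.9              # P(x1 == x2): correlation between the two sensors' codewords
sigma = 1.0          # channel noise standard deviation

# Correlated BPSK codewords: x2 equals x1 with probability p
x1 = rng.choice([-1.0, 1.0], size=n)
flip = rng.random(n) > p
x2 = np.where(flip, -x1, x1)

# Gaussian MAC: the two signals add in the air
y = x1 + x2 + sigma * rng.normal(size=n)

# Received signal power grows with correlation: E[(x1+x2)^2] = 2 + 2*E[x1*x2]
signal_power = np.mean((x1 + x2) ** 2)
print(signal_power)  # ~ 2 + 2*(2p - 1) = 3.6 for p = 0.9
```

With uncorrelated codewords (p = 0.5) the received signal power would stay near 2; correlation pushes it toward 4, which is the received-SNR benefit the abstract describes.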
BCM-2.0 - The new version of computer code "Basic Channeling with Mathematica©"
Abdrashitov, S. V.; Bogdanov, O. V.; Korotchenko, K. B.; Pivovarov, Yu. L.; Rozhkova, E. I.; Tukhfatullin, T. A.; Eikhorn, Yu. L.
2017-07-01
A new symbolic-numerical code devoted to the investigation of channeling phenomena in the periodic potential of a crystal has been developed. The code is written in the Wolfram Language, taking advantage of its analytical programming methods. The newly developed packages were successfully applied to simulate scattering, radiation, electron-positron pair production, and other effects connected with the channeling of relativistic particles in aligned crystals. The simulation results have been validated against data from channeling experiments carried out at SAGA LS.
Cooperative Orthogonal Space-Time-Frequency Block Codes over a MIMO-OFDM Frequency Selective Channel
Directory of Open Access Journals (Sweden)
M. Rezaei
2016-03-01
Full Text Available In this paper, a cooperative algorithm to improve orthogonal space-time-frequency block codes (OSTFBC) in frequency-selective channels for 2×1, 2×2, 4×1, and 4×2 MIMO-OFDM systems is presented. The algorithm involves three nodes, a source node, a relay node, and a destination node, and is implemented in two stages. During the first stage, the destination and relay antennas receive the symbols sent by the source antennas. The destination node and the relay node obtain decision variables from the received signals by employing the space-time-frequency decoding process. During the second stage, the relay node transmits its decision variables to the destination node. Owing to the increased diversity in the proposed algorithm, the decision variables at the destination node are enhanced, improving system performance. The bit error rate of the proposed algorithm at high SNR is estimated assuming BPSK modulation. The simulation results show that cooperative orthogonal space-time-frequency block coding improves system performance and reduces the BER in a frequency-selective channel.
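The flat-channel core of such space-time block decoding can be sketched per OFDM subcarrier with the classic 2×1 Alamouti combiner. This is a minimal illustration of the orthogonal combining step, not the paper's full cooperative relay algorithm; the channel gains and symbols below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def alamouti_2x1(s, h, noise_std=0.0):
    """Transmit the symbol pair s = [s0, s1] over two antennas in two slots;
    h = [h0, h1] are flat channel gains; return the combined estimates."""
    s0, s1 = s
    h0, h1 = h
    n = noise_std * (rng.normal(size=2) + 1j * rng.normal(size=2))
    r0 = h0 * s0 + h1 * s1 + n[0]                      # slot 1
    r1 = -h0 * np.conj(s1) + h1 * np.conj(s0) + n[1]   # slot 2
    # Linear combining exploits the code's orthogonality
    y0 = np.conj(h0) * r0 + h1 * np.conj(r1)
    y1 = np.conj(h1) * r0 - h0 * np.conj(r1)
    g = abs(h0) ** 2 + abs(h1) ** 2                    # diversity gain
    return y0 / g, y1 / g

s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)   # two QPSK symbols
h = np.array([0.8 - 0.3j, 0.5 + 0.9j])         # illustrative channel gains
est = alamouti_2x1(s, h)
print(np.round(est, 6))  # noiseless case: recovers s exactly
```

The `g = |h0|² + |h1|²` factor is the diversity combining gain that the cooperative scheme increases further by adding the relay's decision variables.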
A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE
Al-Dweri, Feras M. O.; Lallena, Antonio M.; Vilches, Manuel
2004-01-01
Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife®. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3° with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the out...
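The 3° polar-angle cutoff implies that only a tiny fraction of an isotropic source's emissions matter, which is easy to check from the solid-angle formula. This is a quick sketch of the geometry, not part of the PENELOPE model itself.

```python
import math

def cone_fraction(theta_deg):
    """Solid-angle fraction of an isotropic point source emitted within a
    polar-angle cone of half-angle theta: (1 - cos(theta)) / 2."""
    return (1.0 - math.cos(math.radians(theta_deg))) / 2.0

# Only ~0.07% of isotropic emissions fall inside the 3-degree cone found to be
# dosimetrically relevant, which is why restricting the simplified source model
# to this cone speeds up the Monte Carlo simulation dramatically.
print(f"{cone_fraction(3.0):.6f}")  # 0.000685
```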
Analysis of Coded FHSS Systems with Multiple Access Interference over Generalized Fading Channels
Directory of Open Access Journals (Sweden)
Salam A. Zummo
2009-02-01
Full Text Available We study the effect of interference on the performance of coded FHSS systems. This is achieved by modeling the physical channel in these systems as a block fading channel. In the derivation of the bit error probability over Nakagami fading channels, we use the exact statistics of the multiple access interference (MAI) in FHSS systems. Because the expression involving the Rician distribution is mathematically intractable, we use a Gaussian approximation to derive the error probability of coded FHSS over Rician fading channels. The effect of pilot-aided channel estimation is studied for Rician fading channels using the Gaussian approximation. From this, the optimal hopping rate in coded FHSS is approximated. Results show that the performance loss due to interference increases as the hopping rate decreases.
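The block-fading view of an FHSS hop can be sketched by drawing one Nakagami-m power gain per hop dwell. The following toy Monte Carlo (uncoded BPSK, no MAI, all parameters illustrative) only shows how the fading parameter m shapes the BER, the way the paper's block fading model does; it is not the paper's coded analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def bpsk_ber_nakagami(m, snr_db, n_blocks=20_000, block_len=50):
    """Monte Carlo BER of uncoded BPSK over Nakagami-m block fading: the
    power gain is constant within a block (one hop dwell) and is
    Gamma(m, 1/m)-distributed across blocks (unit average power)."""
    snr = 10 ** (snr_db / 10)
    gain = rng.gamma(shape=m, scale=1.0 / m, size=n_blocks)   # power gains
    bits = rng.integers(0, 2, size=(n_blocks, block_len))
    x = 1.0 - 2.0 * bits                                      # BPSK mapping
    noise = rng.normal(size=(n_blocks, block_len)) / np.sqrt(2 * snr)
    y = np.sqrt(gain)[:, None] * x + noise
    return float(np.mean((y < 0) != (x < 0)))                 # sign detector

# Larger m means milder fading and a lower BER (m = 1 is Rayleigh)
print(bpsk_ber_nakagami(1, 10), bpsk_ber_nakagami(4, 10))
```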
The development and application of a sub-channel code in ocean environment
International Nuclear Information System (INIS)
Wu, Pan; Shan, Jianqiang; Xiang, Xiong; Zhang, Bo; Gou, Junli; Zhang, Bin
2016-01-01
Highlights: • A sub-channel code named ATHAS/OE is developed for nuclear reactors in an ocean environment. • ATHAS/OE is verified against another modified sub-channel code based on COBRA-IV. • ATHAS/OE is used to analyze the thermal hydraulics of a typical SMR under heaving and rolling motion. • Calculation results show that ocean conditions affect the thermal hydraulics of a reactor significantly. - Abstract: An upgraded version of the ATHAS sub-channel code, ATHAS/OE, is developed for the investigation of the thermal-hydraulic behavior of a nuclear reactor core in an ocean environment, with consideration of heaving and rolling motion effects. The code is verified against another modified sub-channel code based on COBRA-IV and used to analyze the thermal-hydraulic characteristics of a typical SMR under heaving and rolling motion conditions. The calculation results show that heaving and rolling motion affect the thermal-hydraulic behavior of a reactor significantly.
Jointly Decoded Raptor Codes: Analysis and Design for the BIAWGN Channel
Directory of Open Access Journals (Sweden)
Venkiah Auguste
2009-01-01
Full Text Available Abstract We are interested in the analysis and optimization of Raptor codes under a joint decoding framework, that is, when the precode and the fountain code exchange soft information iteratively. We develop an analytical asymptotic convergence analysis of the joint decoder, derive an optimization method for the design of efficient output degree distributions, and show that the new optimized distributions outperform the existing ones, both at long and moderate lengths. We also show that jointly decoded Raptor codes are robust to channel variation: they perform reasonably well over a wide range of channel capacities. This robustness property was already known for the erasure channel but not for the Gaussian channel. Finally, we discuss some finite-length code design issues. Contrary to what is commonly believed, we show by simulations that by using a relatively low rate for the precode, we can greatly improve the error floor performance of the Raptor code.
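The fountain component of a Raptor code can be sketched with a toy LT encoder and a peeling (erasure) decoder. The degree distribution below is an illustrative assumption, not one of the optimized distributions from the paper, and the precode is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def lt_encode(src_bits, n_out, degree_dist):
    """LT (fountain) encoding: each output symbol XORs a random subset of
    source symbols whose size is drawn from the degree distribution."""
    k = len(src_bits)
    degrees = rng.choice(np.arange(1, len(degree_dist) + 1),
                         size=n_out, p=degree_dist)
    outputs = []
    for d in degrees:
        idx = rng.choice(k, size=min(int(d), k), replace=False)
        outputs.append((idx, np.bitwise_xor.reduce(src_bits[idx])))
    return outputs

def lt_peel(outputs, k):
    """Erasure (peeling) decoder: repeatedly resolve degree-1 symbols and
    substitute them back into the remaining equations."""
    known = {}
    pending = [(set(int(i) for i in idx), int(val)) for idx, val in outputs]
    progress = True
    while progress and len(known) < k:
        progress = False
        for nbrs, val in pending:
            live = nbrs - known.keys()
            if len(live) == 1:
                (i,) = live
                v = val
                for j in nbrs - live:
                    v ^= known[j]
                known[i] = v
                progress = True
    return known

k = 32
src = rng.integers(0, 2, size=k)
dist = np.array([0.1, 0.5, 0.2, 0.1, 0.1])   # toy output degree distribution
enc = lt_encode(src, 3 * k, dist)
dec = lt_peel(enc, k)
ok = all(dec[i] == int(src[i]) for i in dec)
print(f"{len(dec)}/{k} symbols recovered, all correct: {ok}")
```

In a Raptor code the precode then repairs whatever small fraction the peeling decoder leaves unresolved; joint decoding, as studied in the paper, iterates soft information between the two instead of peeling.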
Single channel blind source separation based on ICA feature extraction
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
A new technique is proposed to solve the blind source separation (BSS) problem given only a single channel observation. The basis functions and the density of the coefficients of the source signals learned by ICA are used as prior knowledge. Based on the learned prior information, the learning rules of single-channel BSS are derived by maximizing the joint log likelihood of the mixed sources, in which the posterior density of the given measurements is maximized, to obtain the source signals from the single observation. The experimental results exhibit successful separation performance for mixtures of speech and music signals.
The Astrophysics Source Code Library by the numbers
Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein
2018-01-01
The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.
Space-Time Trellis Coded 8PSK Schemes for Rapid Rayleigh Fading Channels
Directory of Open Access Journals (Sweden)
Salam A. Zummo
2002-05-01
Full Text Available This paper presents the design of 8PSK space-time (ST) trellis codes suitable for rapid fading channels. The proposed codes utilize the design criteria of ST codes over rapid fading channels. Two different approaches have been used. The first approach maximizes the symbol-wise Hamming distance (HD) between signals leaving from or entering the same encoder's state. In the second approach, set partitioning based on maximizing the sum of squared Euclidean distances (SSED) between the ST signals is performed; then, the branch-wise HD is maximized. The proposed codes were simulated over independent and correlated Rayleigh fading channels. Coding gains of up to 4 dB have been observed over other ST trellis codes of the same complexity.
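The two design metrics named above, the symbol-wise Hamming distance and the sum of squared Euclidean distances, can be computed for candidate codeword pairs as follows. This is a minimal sketch with made-up codewords, not the codes designed in the paper.

```python
import numpy as np

def symbolwise_hd(c1, c2):
    """Symbol-wise Hamming distance between two space-time codewords: the
    number of time slots in which the transmitted symbol vectors differ.
    Over rapid (fully interleaved) fading this distance governs the
    achievable diversity order, so the design maximizes it first."""
    c1, c2 = np.asarray(c1), np.asarray(c2)
    return int(np.sum(np.any(c1 != c2, axis=1)))

def ssed(c1, c2, M=8):
    """Sum of squared Euclidean distances between the corresponding
    8PSK signal points of two codewords."""
    to_sig = lambda c: np.exp(2j * np.pi * np.asarray(c) / M)
    return float(np.sum(np.abs(to_sig(c1) - to_sig(c2)) ** 2))

# Two length-4 codewords for 2 transmit antennas (entries are 8PSK indices)
a = [[0, 4], [1, 5], [2, 6], [3, 7]]
b = [[0, 4], [3, 5], [2, 6], [7, 3]]
print(symbolwise_hd(a, b))  # 2: the codewords differ in slots 1 and 3
print(round(ssed(a, b), 3))
```

A code search would evaluate these metrics over all pairs of diverging/remerging trellis paths, maximizing the HD first (diversity) and the SSED second (coding gain), as the abstract describes.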
Code Forking, Governance, and Sustainability in Open Source Software
Directory of Open Access Journals (Sweden)
Juho Lindman
2013-01-01
Full Text Available The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.
Development and application of sub-channel analysis code based on SCWR core
International Nuclear Information System (INIS)
Fu Shengwei; Xu Zhihong; Yang Yanhua
2011-01-01
The sub-channel analysis code SABER was developed for the thermal-hydraulic analysis of supercritical water-cooled reactor (SCWR) fuel assemblies. An extended computational cell structure, new boundary conditions, a 3-dimensional heat conduction model, and a water properties package were implemented in the SABER code, which can be used to simulate the thermal fuel assembly of an SCWR. To evaluate the applicability of the code, a steady-state calculation of the fuel assembly was performed. The results indicate the good applicability of the SABER code for simulating the counter-current flow and the heat exchange between coolant and moderator channels. (authors)
Directory of Open Access Journals (Sweden)
David G. Daut
2007-03-01
Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, making it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding method can greatly reduce the number of iterations and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 3 dB in terms of PSNR.
Directory of Open Access Journals (Sweden)
Liu Weiliang
2007-01-01
Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, making it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding method can greatly reduce the number of iterations and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 3 dB in terms of PSNR.
Low Complexity Bayesian Single Channel Source Separation
DEFF Research Database (Denmark)
Beierholm, Thomas; Pedersen, Brian Dam; Winther, Ole
2004-01-01
can be estimated quite precisely using ML-II, but the estimation is quite sensitive to the accuracy of the priors as opposed to the source separation quality for known mixing coefficients, which is quite insensitive to the accuracy of the priors. Finally, we discuss how to improve our approach while...
Investigation Of Information Sources And Communication Channels ...
African Journals Online (AJOL)
Extension of integrated pest management (IPM) as a component of sustainable agricultural development, involves empowering farmers. Facilitating the information accessibility of farmer groups seems as empowerment strategy. This strategy is based on identification of related patterns, including information sources and ...
Compression and channel-coding algorithms for high-definition television signals
Alparone, Luciano; Benelli, Giuliano; Fabbri, A. F.
1990-09-01
In this paper, results of investigations into the effects of channel errors on the transmission of images compressed by means of techniques based on the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) are presented. Since compressed images are heavily degraded by noise in the transmission channel, more seriously in the case of VQ-coded images, theoretical studies and simulations are presented in order to define and evaluate this degradation. Some channel coding schemes are proposed in order to protect the information during transmission. Hamming codes of lengths 7, 15, and 31 have been used for DCT-compressed images, and more powerful codes, such as the length-23 Golay code, for VQ-compressed images. The performance attainable with soft-decoding techniques is also evaluated; better-quality images have been obtained than with classical hard-decoding techniques. All tests have been carried out simulating the transmission of a digital image from an HDTV signal over an AWGN channel with PSK modulation.
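As a concrete illustration of this kind of protection, here is a systematic Hamming(7,4) encoder/decoder that corrects any single bit error per codeword. This is the generic textbook construction, not necessarily the authors' exact configuration.

```python
import numpy as np

# Systematic Hamming(7,4): G = [I | P], H = [P^T | I]. A single flipped bit
# in a 7-bit word (e.g. protecting quantized DCT coefficients) is corrected.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):
    return (msg @ G) % 2

def decode(word):
    syndrome = (H @ word) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position
        err = np.where((H.T == syndrome).all(axis=1))[0][0]
        word = word.copy()
        word[err] ^= 1
    return word[:4]          # systematic code: data bits come first

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
cw[2] ^= 1                   # a single channel error
print(decode(cw))  # [1 0 1 1]
```

Hard decoding, as above, works on sliced bits; the soft decoding the abstract favors instead keeps the channel reliabilities and picks the most likely codeword, gaining roughly 2 dB on AWGN channels.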
Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems
Directory of Open Access Journals (Sweden)
Hakan A. Çırpan
2002-05-01
Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding is proposed to provide significant capacity gains over the traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints
DEFF Research Database (Denmark)
Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt
2014-01-01
In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...
Channel coding for underwater acoustic single-carrier CDMA communication system
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo, and LDPC coding. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that UWA/SCCDMA based on RA, Turbo, and LDPC coding performs well, with a communication BER below 10^-6 in an underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, which is about 2 orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.
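The convolutional branch of such a scheme can be sketched with a small rate-1/2, constraint-length-3 encoder and a hard-decision Viterbi decoder. The generators (7, 5) in octal are a standard textbook choice, an illustrative assumption rather than the paper's actual code parameters.

```python
import numpy as np

def conv_encode(bits):
    """Rate-1/2 convolutional encoder, generators g0 = 111, g1 = 101."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out += [(b + s1 + s2) % 2, (b + s2) % 2]
        s1, s2 = b, s1
    return out

def viterbi_decode(rx, n_bits):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    n_states, INF = 4, 10 ** 9
    metric = [0] + [INF] * (n_states - 1)       # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for t in range(n_bits):
        r = rx[2 * t: 2 * t + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            s1, s2 = s >> 1, s & 1
            for b in (0, 1):
                o = [(b + s1 + s2) % 2, (b + s2) % 2]   # branch outputs
                ns = (b << 1) | s1                       # next state
                m = metric[s] + (o[0] != r[0]) + (o[1] != r[1])
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

rng = np.random.default_rng(4)
bits = [int(b) for b in rng.integers(0, 2, size=40)]
coded = conv_encode(bits + [0, 0])     # two tail bits flush the encoder
coded[7] ^= 1                          # two isolated channel errors
coded[20] ^= 1
decoded = viterbi_decode(coded, len(bits) + 2)[:len(bits)]
print(decoded == bits)  # True: both errors corrected
```

The RA, Turbo, and LDPC decoders named in the abstract all replace this hard-decision metric with iterative soft message passing, which is where their roughly two-orders-of-magnitude BER advantage comes from.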
DEFF Research Database (Denmark)
Barforooshan, Mohsen; Østergaard, Jan; Stavrou, Fotios
2017-01-01
This paper presents an upper bound on the minimum data rate required to achieve a prescribed closed-loop performance level in networked control systems (NCSs). The considered feedback loop includes a linear time-invariant (LTI) plant with a single measurement output and a single control input. Moreover, in this NCS, a causal but otherwise unconstrained feedback system carries out zero-delay variable-rate coding and control. Between the encoder and decoder, data is exchanged over a rate-limited noiseless digital channel with a known constant time delay. Here we propose a linear source-coding scheme...
Directory of Open Access Journals (Sweden)
Savitha H. M.
2010-09-01
Full Text Available A comparison of the performance of hard- and soft-decision turbo-coded Orthogonal Frequency Division Multiplexing systems with Quadrature Phase Shift Keying (QPSK) and 16-Quadrature Amplitude Modulation (16-QAM) is considered in the first section of this paper. The results show that the soft-decision method greatly outperforms the hard-decision method. The complexity of the demapper is reduced through the use of a simplified algorithm for 16-QAM demapping. In the later part of the paper, we consider the transmission of data over an additive white class A noise (AWAN) channel using turbo-coded QPSK and 16-QAM systems. We propose a novel turbo decoding scheme for the AWAN channel. We also compare the performance of turbo-coded systems with QPSK and 16-QAM on the AWAN channel with two different channel values, one computed as per additive white Gaussian noise (AWGN) channel conditions and the other as per AWAN channel conditions. The results show that the use of the appropriate channel value in turbo decoding helps to combat the impulsive noise more effectively. The proposed model for the AWAN channel exhibits bit error rate (BER) performance comparable to that of the AWGN channel.
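The role of the "channel value" can be made concrete by comparing BPSK channel LLRs under AWGN with those under a truncated Class-A-style Gaussian-mixture (impulsive) model. The mixture parameters `A` and `gamma` below are illustrative assumptions, not the paper's, and the three-term truncation is a simplification.

```python
import math

def gauss(z, var):
    return math.exp(-z * z / (2 * var)) / math.sqrt(2 * math.pi * var)

def llr_awgn(y, sigma2):
    """Channel LLR for BPSK (+1/-1) in AWGN."""
    return 2.0 * y / sigma2

def llr_class_a(y, sigma2, A=0.1, gamma=0.01, terms=3):
    """Channel LLR for BPSK under a truncated Middleton Class A mixture:
    with large probability the noise is nearly Gaussian, but occasional
    impulsive components have a much larger variance."""
    def pdf(z):
        s = w = 0.0
        fact = 1.0
        for m in range(terms):
            wm = math.exp(-A) * A ** m / fact
            var_m = sigma2 * (m / A + gamma) / (1 + gamma)
            s += wm * gauss(z, var_m)
            w += wm
            fact *= (m + 1)
        return s / w              # renormalize the truncated mixture
    return math.log(pdf(y - 1.0) / pdf(y + 1.0))

# A strongly negative observation: the AWGN LLR is very confident, while the
# mixture LLR correctly suspects an impulse and stays far less confident.
y, sigma2 = -3.0, 0.5
print(round(llr_awgn(y, sigma2), 2), round(llr_class_a(y, sigma2), 2))
```

Feeding the decoder the matched (mixture) LLR instead of the AWGN one is exactly the "appropriate channel value" effect the abstract reports: large impulsive hits no longer inject overconfident wrong beliefs into the turbo iterations.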
Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels
Wang, Han
2010-01-01
Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…
Blahut-Arimoto algorithm and code design for action-dependent source coding problems
DEFF Research Database (Denmark)
Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar
2013-01-01
The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function.
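A generic Blahut-Arimoto iteration for a plain rate-distortion function (without the action-dependent extension studied in the paper) can be sketched as follows. For a Bernoulli(1/2) source with Hamming distortion it reproduces points on the known curve R(D) = 1 - h(D).

```python
import numpy as np

def blahut_arimoto_rd(p_x, dist, beta, n_iter=500):
    """Blahut-Arimoto for the rate-distortion function at a fixed slope
    parameter beta: alternate updates of the output marginal q(x_hat)
    and the test channel q(x_hat | x). Returns (rate_bits, distortion)."""
    nx, ny = dist.shape
    q = np.full(ny, 1.0 / ny)
    A = np.exp(-beta * dist)
    for _ in range(n_iter):
        cond = q[None, :] * A
        cond /= cond.sum(axis=1, keepdims=True)   # q(x_hat | x)
        q = p_x @ cond                            # q(x_hat)
    D = float(np.sum(p_x[:, None] * cond * dist))
    R = float(np.sum(p_x[:, None] * cond * np.log2(cond / q[None, :])))
    return R, D

# Bernoulli(1/2) source, Hamming distortion: R(D) = 1 - h(D)
p_x = np.array([0.5, 0.5])
dist = np.array([[0.0, 1.0], [1.0, 0.0]])
R, D = blahut_arimoto_rd(p_x, dist, beta=2.0)
h = lambda d: -d * np.log2(d) - (1 - d) * np.log2(1 - d)
print(round(R, 4), round(1 - h(D), 4))  # the two values agree
```

The paper's algorithm extends this alternating-minimization pattern with an action variable and a cost constraint, but the fixed-point structure is the same.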
Distributed coding of multiview sparse sources with joint recovery
DEFF Research Database (Denmark)
Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren
2016-01-01
In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati...... transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources....
Study of an hybrid positron source using channeling for CLIC
Dadoun, O; Chehab, R; Poirier, F; Rinolfi, L; Strakhovenko, V; Variola, A; Vivoli, A
2009-01-01
The CLIC study considers a hybrid source using channeling as the baseline for positron production. The hybrid source uses a few-GeV electron beam impinging on a tungsten crystal radiator. With the tungsten crystal oriented along its axis, an intense, relatively low-energy photon beam results, due mainly to channeling radiation. These photons then impinge on an amorphous tungsten target, producing positrons by e+e− pair creation. In this note, the optimization of the positron yield and the peak energy deposition density in the amorphous target are studied as functions of the distance between the crystal and amorphous targets, the primary electron energy, and the amorphous target thickness.
Sub-channel/system coupled code development and its application to SCWR-FQT loop
International Nuclear Information System (INIS)
Liu, X.J.; Cheng, X.
2015-01-01
Highlights: • A coupled code is developed for SCWR accident simulation. • The feasibility of the code is shown by application to the SCWR-FQT loop. • Some measures are selected by sensitivity analysis. • The peak cladding temperature can be reduced effectively by the proposed measures. - Abstract: In the frame of the Super-Critical Reactor In Pipe Test Preparation (SCRIPT) project in China, one of the challenging tasks is to predict the transient performance of the SuperCritical Water Reactor-Fuel Qualification Test (SCWR-FQT) loop under accident conditions. Several thermal-hydraulic codes (a system code and a sub-channel code) were selected to perform the safety analysis. However, the system code cannot simulate the local behavior of the test bundle, and the sub-channel code is incapable of calculating the whole-system behavior of the test loop. Therefore, to combine the merits of both codes and minimize their shortcomings, a coupled sub-channel and system code is developed in this paper. Both the sub-channel code COBRA-SC and the system code ATHLET-SC are adapted to transient analysis of SCWRs. The two codes are coupled by data transfer and data adaptation at the interface. In the newly developed coupled code, the whole-system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal-hydraulic parameters are predicted by the sub-channel code COBRA-SC. The codes are used to obtain the local thermal-hydraulic parameters in the SCWR-FQT fuel bundle under accident cases (e.g. a flow blockage during a LOCA). Some measures to mitigate the accident consequences are proposed through sensitivity studies and trialed to demonstrate their effectiveness in the coupled simulation. The results indicate that the newly developed code is well suited to transient analysis of supercritical water-cooled tests, and the peak cladding temperature caused by a blockage in the fuel bundle can be reduced effectively by the safety measures.
Sub-channel/system coupled code development and its application to SCWR-FQT loop
Energy Technology Data Exchange (ETDEWEB)
Liu, X.J., E-mail: xiaojingliu@sjtu.edu.cn [School of Nuclear Science and Engineering, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240 (China); Cheng, X. [Institute of Fusion and Reactor Technology, Karlsruhe Institute of Technology, Vincenz-Prießnitz-Str. 3, 76131 Karlsruhe (Germany)
2015-04-15
Highlights: • A coupled code is developed for SCWR accident simulation. • The feasibility of the code is shown by application to the SCWR-FQT loop. • Some measures are selected by sensitivity analysis. • The peak cladding temperature can be reduced effectively by the proposed measures. - Abstract: In the frame of the Super-Critical Reactor In Pipe Test Preparation (SCRIPT) project in China, one of the challenging tasks is to predict the transient performance of the SuperCritical Water Reactor-Fuel Qualification Test (SCWR-FQT) loop under accident conditions. Several thermal-hydraulic codes (a system code and a sub-channel code) were selected to perform the safety analysis. However, the system code cannot simulate the local behavior of the test bundle, and the sub-channel code is incapable of calculating the whole-system behavior of the test loop. Therefore, to combine the merits of both codes and minimize their shortcomings, a coupled sub-channel and system code is developed in this paper. Both the sub-channel code COBRA-SC and the system code ATHLET-SC are adapted to transient analysis of SCWRs. The two codes are coupled by data transfer and data adaptation at the interface. In the newly developed coupled code, the whole-system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal-hydraulic parameters are predicted by the sub-channel code COBRA-SC. The codes are used to obtain the local thermal-hydraulic parameters in the SCWR-FQT fuel bundle under accident cases (e.g. a flow blockage during a LOCA). Some measures to mitigate the accident consequences are proposed through sensitivity studies and trialed to demonstrate their effectiveness in the coupled simulation. The results indicate that the newly developed code is well suited to transient analysis of supercritical water-cooled tests, and the peak cladding temperature caused by a blockage in the fuel bundle can be reduced effectively by the safety measures.
A Novel Criterion for Optimum MultilevelCoding Systems in Mobile Fading Channels
Institute of Scientific and Technical Information of China (English)
YUAN Dongfeng; WANG Chengxiang; YAO Qi; CAO Zhigang
2001-01-01
A novel criterion, comprising a "capacity rule" and a "mapping rule", for the design of optimum MLC schemes over mobile fading channels is proposed. According to this theory, the performance of multilevel coding with multistage decoding (MLC/MSD) schemes in mobile fading channels is investigated, in which BCH codes are chosen as component codes and three mapping strategies with 8ASK modulation are used. Numerical results indicate that when the code rates of the component codes in the MLC scheme are designed based on the "capacity rule", the performance of the system with block partitioning (BP) is optimum for Rayleigh fading channels, while the performance of the system with Ungerboeck partitioning (UP) is best for AWGN channels.
DELOCA, a code for simulation of CANDU fuel channel in thermal transients
International Nuclear Information System (INIS)
Mihalache, M.; Florea, Silviu; Ionescu, V.; Pavelescu, M.
2005-01-01
Full text: In certain LOCA scenarios in the CANDU fuel channel, ballooning of the pressure tube and contact with the calandria tube can occur. After the moment of contact, radial heat transfer from the cooling fluid to the moderator arises through the contact area. If the temperature of the channel walls increases, the contact area dries out, the heat transfer becomes inefficient, and the fuel channel can lose its integrity. The DELOCA code was developed to simulate the mechanical behaviour of the pressure tube during the pre-contact transient, and the mechanical and thermal behaviour of the pressure tube and calandria tube after contact between the two tubes. The code contains several models: the creep of the Zr-2.5%Nb alloy, heat transfer by conduction through the cylindrical walls, channel failure criteria, and the calculation of heat transfer at the calandria tube - moderator interface. The code evaluates the contact and channel failure moments. It was systematically verified against the Contact1 and Cathena codes. This paper presents the results obtained at different temperature increase rates. In addition, the contact moment for a postulated RIH 5% accident was calculated. The Cathena thermal-hydraulic code provided the input data. (authors)
Development of in-vessel source term analysis code, tracer
International Nuclear Information System (INIS)
Miyagi, K.; Miyahara, S.
1996-01-01
Analyses of radionuclide transport in fuel failure accidents (generally referred to as source terms) are considered to be important, especially in severe accident evaluation. The TRACER code has been developed to realistically predict the time-dependent behavior of FPs and aerosols within the primary cooling system for a wide range of fuel failure events. This paper presents the model description, results of a validation study, the recent model advancement status of the code, and results of check-out calculations under reactor conditions. (author)
Energy efficient rateless codes for high speed data transfer over free space optical channels
Prakash, Geetha; Kulkarni, Muralidhar; Acharya, U. S.
2015-03-01
Terrestrial Free Space Optical (FSO) links transmit information by using the atmosphere (free space) as a medium. In this paper, we have investigated the use of Luby Transform (LT) codes as a means to mitigate the effects of data corruption induced by an imperfect channel, which usually takes the form of lost or corrupted packets. LT codes, which are a class of Fountain codes, can be used independently of the channel rate, and as many codewords as required can be generated to recover all the message bits irrespective of the channel performance. Achieving error-free high data rates with limited energy resources is possible with FSO systems if error correction codes with minimal power overheads can be used. We also employ a combination of Binary Phase Shift Keying (BPSK) with provision for threshold modification, and optimized LT codes with belief propagation for decoding. These techniques provide additional protection even under strong turbulence regimes. Automatic Repeat Request (ARQ) is another method of improving link reliability, but its performance is limited by the number of retransmissions and the corresponding time delay. We prove through theoretical computations and simulations that LT codes consume less energy per bit, and we validate the feasibility of using energy-efficient LT codes instead of ARQ for FSO links in optical wireless sensor networks within eye safety limits.
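The rateless principle behind LT codes can be sketched in a few lines: each coded symbol is the XOR of a randomly chosen subset of source blocks, and a peeling decoder resolves degree-one symbols until all blocks are recovered. This is a minimal illustration, not the optimized encoder of the paper; for brevity the degrees are drawn uniformly rather than from the robust soliton distribution, and belief propagation over a noisy channel is replaced by peeling over an erasure channel.

```python
import random

def lt_encode(blocks, num_symbols, seed=0):
    """Each LT-coded symbol is the XOR of a random subset of source
    blocks, tagged with the indices it covers. Degrees are drawn
    uniformly here for brevity; a real encoder would use the robust
    soliton distribution."""
    rng = random.Random(seed)
    k, symbols = len(blocks), []
    for _ in range(num_symbols):
        idx = rng.sample(range(k), rng.randint(1, k))
        value = 0
        for i in idx:
            value ^= blocks[i]
        symbols.append((frozenset(idx), value))
    return symbols

def lt_decode(symbols, k):
    """Peeling decoder: repeatedly subtract recovered blocks from the
    remaining symbols and resolve those reduced to degree one."""
    pending = [[set(s), v] for s, v in symbols]
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for sym in pending:
            idx = sym[0]
            for i in [j for j in idx if j in recovered]:
                idx.discard(i)
                sym[1] ^= recovered[i]   # cancel a known block
            if len(idx) == 1:
                i = idx.pop()
                recovered[i] = sym[1]    # degree-1 symbol reveals a block
                progress = True
    return [recovered.get(i) for i in range(k)]
```

The key rateless property is visible here: symbols can be generated indefinitely, and decoding succeeds as soon as enough of them arrive, regardless of which ones were lost.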
Construction and Iterative Decoding of LDPC Codes Over Rings for Phase-Noisy Channels
Directory of Open Access Journals (Sweden)
William G. Cowley
2008-04-01
Full Text Available This paper presents the construction and iterative decoding of low-density parity-check (LDPC) codes for channels affected by phase noise. The LDPC code is based on integer rings and designed to converge under phase-noisy channels. We assume that phase variations are small over short blocks of adjacent symbols. A part of the constructed code is inherently built with this knowledge and hence able to withstand phase rotations of 2π/M radians, where "M" is the number of phase symmetries in the signal set, that occur at different observation intervals. Another part of the code estimates the phase ambiguity present in every observation interval. The code makes use of simple blind or turbo phase estimators to provide phase estimates over every observation interval. We propose an iterative decoding schedule to apply the sum-product algorithm (SPA) on the factor graph of the code for its convergence. To illustrate the new method, we present the performance results of an LDPC code constructed over ℤ4 with quadrature phase shift keying (QPSK) modulated signals transmitted over a static channel, but affected by phase noise, which is modeled by the Wiener (random-walk) process. The results show that the code can withstand phase noise of 2° standard deviation per symbol with small loss.
Java Source Code Analysis for API Migration to Embedded Systems
Energy Technology Data Exchange (ETDEWEB)
Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-02-01
Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java's APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.
Source Coding for Wireless Distributed Microphones in Reverberant Environments
DEFF Research Database (Denmark)
Zahedi, Adel
2016-01-01
Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays, allowing for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission, in order to reduce the power consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding.
Plagiarism Detection Algorithm for Source Code in Computer Science Education
Liu, Xin; Xu, Chan; Ouyang, Boyu
2015-01-01
Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists among some students' home works. It's not easy for teachers to judge if there's plagiarizing in source code or not. Traditional detection algorithms cannot fit this…
Automating RPM Creation from a Source Code Repository
2012-02-01
apps/usr --with-libpq=/apps/postgres
make
rm -rf $RPM_BUILD_ROOT
umask 0077
mkdir -p $RPM_BUILD_ROOT/usr/local/bin
mkdir -p $RPM_BUILD_ROOT...
...from a source code repository.
%pre %prep %setup %build
./autogen.sh ; ./configure --with-db=/apps/db --with-libpq=/apps/postgres
make
Source Coding in Networks with Covariance Distortion Constraints
DEFF Research Database (Denmark)
Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt
2016-01-01
We apply our results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design
Institute of Scientific and Technical Information of China (English)
YUAN Dongfeng; WANG Chengxiang; YAO Qi; CAO Zhigang
2001-01-01
Based on the "capacity rule", the performance of multilevel coding (MLC) schemes with different set partitioning strategies and decoding methods in AWGN and Rayleigh fading channels is investigated, in which BCH codes are chosen as component codes and 8ASK modulation is used. Numerical results indicate that the MLC scheme with the UP strategy obtains optimal performance in AWGN channels, while BP is the best mapping strategy for Rayleigh fading channels. The BP strategy is robust in both kinds of channels for realizing an optimum MLC system. Multistage decoding (MSD) is a sub-optimal decoding method of MLC for both channels. For the Ungerboeck partitioning (UP) and mixed partitioning (MP) strategies, MSD is strongly recommended for the MLC system, while for the BP strategy, PDL is suggested as a simple decoding method compared with MSD.
Signorella, J. D.; de Wet, A. P.; Bleacher, J. E.; Collins, A.; Schierl, Z. P.; Schwans, B.
2012-03-01
This study focuses on the source area of sinuous channels on the southeast rift apron on Ascraeus Mons, Mars and attempts to understand whether the channels were formed through volcanic or fluvial processes.
Coded aperture imaging of alpha source spatial distribution
International Nuclear Information System (INIS)
Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul
2012-01-01
The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised: a 226Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the decoded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.
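The correlation decoding step can be illustrated in one dimension. The sketch below is a hypothetical miniature with a (7, 3, 1) cyclic difference set standing in for the 400-pixel Singer mask of the experiment: a point source projects a shifted copy of the mask onto the detector, and correlating with a balanced decoding array (+1 at open holes, -ρ/(1-ρ) at closed ones) reconstructs a sharp peak over a flat background.

```python
def cai_encode(source, mask):
    """Each source point projects a cyclically shifted copy of the mask
    onto the detector (1D miniature of the coded-aperture geometry)."""
    n = len(mask)
    return [sum(source[s] * mask[(d - s) % n] for s in range(n))
            for d in range(n)]

def cai_decode(recorded, mask):
    """Balanced-correlation decoding: correlate with +1 at open holes and
    -rho/(1-rho) at closed ones, so a point source reconstructs to a
    peak on a constant background."""
    n = len(mask)
    rho = sum(mask) / n
    g = [1.0 if m else -rho / (1 - rho) for m in mask]
    return [sum(recorded[d] * g[(d - i) % n] for d in range(n))
            for i in range(n)]
```

With the (7, 3, 1) set {1, 2, 4} mod 7, every nonzero shift of the mask overlaps it in exactly one open hole, which is what makes the off-peak response constant.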
Variable-Length Coding with Stop-Feedback for the Common-Message Broadcast Channel
DEFF Research Database (Denmark)
Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe
2016-01-01
This paper investigates the maximum coding rate over a K-user discrete memoryless broadcast channel for the scenario where a common message is transmitted using variable-length stop-feedback codes. Specifically, upon decoding the common message, each decoder sends a stop signal to the encoder, which transmits continuously until it receives all K stop signals. We present nonasymptotic achievability and converse bounds for the maximum coding rate, which strengthen and generalize the bounds previously reported in Trillingsgaard et al. (2015) for the two-user case. An asymptotic analysis of these bounds reveals that, contrary to the point-to-point case, the second-order term in the asymptotic expansion of the maximum coding rate decays inversely proportional to the square root of the average blocklength. This holds for certain nontrivial common-message broadcast channels, such as the binary...
Chelli, Ali
2013-08-01
In this paper, we consider a relay network consisting of a source, a relay, and a destination. The source transmits a message to the destination using hybrid automatic repeat request (HARQ). The relay overhears the transmitted messages over the different HARQ rounds and tries to decode the data packet. In case of successful decoding at the relay, both the relay and the source cooperate to transmit the message to the destination. The channel realizations are independent for different HARQ rounds. We assume that the transmitter has no channel state information (CSI). Under such conditions, power and rate adaptation are not possible. To overcome this problem, HARQ allows the implicit adaptation of the transmission rate to the channel conditions by the use of feedback. There are two major HARQ techniques, namely HARQ with incremental redundancy (IR) and HARQ with code combining (CC). We investigate the performance of HARQ-IR and HARQ-CC over a relay channel from an information theoretic perspective. Analytical expressions are derived for the information outage probability, the average number of transmissions, and the average transmission rate. We illustrate through our investigation the benefit of relaying. We also compare the performance of HARQ-IR and HARQ-CC and show that HARQ-IR outperforms HARQ-CC. © 2013 IEEE.
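The distinction between HARQ-IR and HARQ-CC can be made concrete with a small Monte Carlo sketch (our own illustration, not the paper's analytical derivation): after a fixed number of rounds, IR is in outage when the accumulated mutual information Σ log2(1+γ_k) falls below the rate R, while CC is in outage when log2(1+Σ γ_k) does. Rayleigh fading is modeled by exponentially distributed per-round SNRs.

```python
import math
import random

def outage_probs(avg_snr, rate, rounds, trials=2000, seed=1):
    """Monte Carlo outage probabilities after a fixed number of HARQ
    rounds over i.i.d. Rayleigh fading (per-round SNR ~ exponential).
    IR accumulates mutual information across rounds; CC accumulates SNR."""
    rng = random.Random(seed)
    out_ir = out_cc = 0
    for _ in range(trials):
        snrs = [rng.expovariate(1.0 / avg_snr) for _ in range(rounds)]
        if sum(math.log2(1 + g) for g in snrs) < rate:
            out_ir += 1  # IR outage: accumulated mutual information < R
        if math.log2(1 + sum(snrs)) < rate:
            out_cc += 1  # CC outage: mutual information of combined SNR < R
    return out_ir / trials, out_cc / trials
```

Since the product of (1+γ_k) is never smaller than 1+Σγ_k, every IR outage event is also a CC outage event, which is one way to see why HARQ-IR outperforms HARQ-CC.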
Nonlinear demodulation and channel coding in EBPSK scheme.
Chen, Xianqing; Wu, Lenan
2012-01-01
The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF brings more difficulty in obtaining the posterior probability for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach for nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, was used for getting PPEs. The simulation results show that an accurate posterior probability can be obtained with this method and the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyze the effect of obtaining the posterior probability with different methods and different sampling rates, and show that the SVM method has greater advantages under poor channel conditions and is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and for getting the posterior probability for LDPC decoding.
Low Complexity Encoder of High Rate Irregular QC-LDPC Codes for Partial Response Channels
Directory of Open Access Journals (Sweden)
IMTAWIL, V.
2011-11-01
Full Text Available High rate irregular QC-LDPC codes based on circulant permutation matrices, for efficient encoder implementation, are proposed in this article. The structure of the code is an approximate lower triangular matrix. In addition, we present two novel efficient encoding techniques for generating redundant bits. The complexity of the encoder implementation depends on the number of parity bits of the code for the one-stage encoding and on the length of the code for the two-stage encoding. The advantage of both encoding techniques is that few XOR gates are used in the encoder implementation. Simulation results on partial response channels also show that the proposed code achieves a BER gain over other QC-LDPC codes.
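The circulant-permutation-matrix structure underlying QC-LDPC codes is easy to sketch: a small base matrix of shift values is expanded into the full binary parity-check matrix, with each entry s ≥ 0 becoming a z × z identity cyclically shifted by s and each -1 becoming a zero block. The base matrix below is a toy example, not the code of the article.

```python
def qc_ldpc_parity_matrix(shifts, z):
    """Expand a base matrix of circulant shift values into a binary
    parity-check matrix: an entry s >= 0 becomes the z x z identity
    cyclically shifted right by s columns; an entry of -1 becomes the
    z x z all-zero block."""
    rows, cols = len(shifts) * z, len(shifts[0]) * z
    H = [[0] * cols for _ in range(rows)]
    for bi, brow in enumerate(shifts):
        for bj, s in enumerate(brow):
            if s < 0:
                continue  # -1 marks a zero block
            for r in range(z):
                H[bi * z + r][bj * z + (r + s) % z] = 1
    return H
```

A real design would additionally choose the shifts so the expanded matrix has the approximate lower triangular structure the article exploits for low-complexity encoding.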
Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images
Directory of Open Access Journals (Sweden)
Barni Mauro
2007-01-01
Full Text Available This paper deals with the application of distributed source coding (DSC theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
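The coset-code idea behind such schemes can be illustrated with the classic syndrome approach to Slepian-Wolf coding (a textbook miniature, not the authors' hyperspectral coder): the encoder transmits only the syndrome of a block under a (7,4) Hamming parity-check matrix, and the decoder combines it with correlated side information, here assumed to differ from the source block in at most one bit.

```python
# Parity-check matrix of the (7,4) Hamming code: column j is the binary
# expansion of j+1, so a nonzero syndrome directly names the flipped bit.
H = [[(col + 1) >> bit & 1 for col in range(7)] for bit in range(3)]

def syndrome(x):
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def sw_encode(x):
    """Slepian-Wolf compression: send the 3-bit syndrome of a 7-bit block
    instead of the block itself."""
    return syndrome(x)

def sw_decode(s_x, y):
    """Recover x from its syndrome and side information y, assuming x and
    y differ in at most one position (the correlation model)."""
    diff = tuple(a ^ b for a, b in zip(s_x, syndrome(y)))
    x_hat = list(y)
    if any(diff):
        pos = sum(d << i for i, d in enumerate(diff)) - 1  # column index
        x_hat[pos] ^= 1
    return x_hat
```

The compression here is 7 bits down to 3, achieved without the encoder ever seeing the side information, which is exactly the asymmetry the paper exploits to move complexity from the onboard encoder to the ground decoder.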
LDPC code decoding adapted to the precoded partial response magnetic recording channels
International Nuclear Information System (INIS)
Lee, Jun; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo
2004-01-01
We propose a signal processing technique using LDPC (low-density parity-check) code instead of PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed by the precoder admitting level detection at the receiver-end and modifying the likelihood function for LDPC code decoding. The scheme can be collaborated with other decoder for turbo-like systems. The proposed algorithm can contribute to improve the performance of the conventional turbo-like systems
Directory of Open Access Journals (Sweden)
Valérian Mannoni
2004-09-01
Full Text Available This paper deals with optimized channel coding for OFDM transmissions (COFDM) over frequency-selective channels using irregular low-density parity-check (LDPC) codes. Firstly, we introduce a new characterization of the LDPC code irregularity called the "irregularity profile." Then, using this parameterization, we derive a new criterion based on the minimization of the transmission bit error probability to design an irregular LDPC code suited to the frequency selectivity of the channel. The optimization of this criterion is done using the Gaussian approximation technique. Simulations illustrate the good performance of our approach for different transmission channels.
Blind cooperative diversity using distributed space-time coding in block fading channels
Tourki, Kamel; Alouini, Mohamed-Slim; Deneire, Luc
2010-01-01
Mobile users with single antennas can still take advantage of spatial diversity through cooperative space-time encoded transmission. In this paper, we consider a scheme in which a relay chooses to cooperate only if its source-relay channel
Error-Rate Bounds for Coded PPM on a Poisson Channel
Moision, Bruce; Hamkins, Jon
2009-01-01
Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
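The accumulate-PPM inner code described in (a) can be sketched directly (our simplified reading of the structure, not the exact implementation behind the derivations): the bit stream passes through a mod-2 accumulator, and each group of log2(M) accumulated bits selects one of M pulse slots. The sketch assumes the bit stream length is a multiple of log2(M).

```python
import math

def appm_encode(bits, M):
    """Accumulate-PPM inner code sketch: run the bit stream through a
    mod-2 accumulator, then map each group of log2(M) accumulated bits
    to a PPM symbol (a single pulse in one of M slots)."""
    acc, accumulated = 0, []
    for b in bits:
        acc ^= b           # mod-2 running sum (the accumulator)
        accumulated.append(acc)
    m = int(math.log2(M))
    symbols = []
    for i in range(0, len(accumulated), m):
        group = accumulated[i:i + m]
        idx = int("".join(map(str, group)), 2)  # bits -> slot index
        slots = [0] * M
        slots[idx] = 1     # exactly one pulse per PPM symbol
        symbols.append(slots)
    return symbols
```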
STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM
Directory of Open Access Journals (Sweden)
H. Prashantha Kumar
2012-03-01
Full Text Available The boundaries between block and convolutional codes have become diffused after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for decoding convolutional codes have been applied to decoding certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known. Several of them have been used in applications ranging from deep space communication to error control in storage systems. But the primary difficulty with applying the Viterbi or BCJR algorithms to the decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity. This is because the decoding effort is fixed and grows with block length, and thus only short block length codes can be used. Therefore, an important practical question is whether a suboptimal realizable soft decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near optimum decoding will be used as motivation for the investigation of different soft decision decoding methods for linear block codes which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10^-5 with respect to conventional hard decision decoding are demonstrated for these codes.
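A stack decoder over the code tree can be sketched as a priority queue of partial information-bit paths. The sketch below is a simplified illustration with a plain correlation metric rather than the Fano metric, shown on a systematic (7,4) Hamming generator matrix instead of the extended codes of the paper.

```python
import heapq

def stack_decode(G, r):
    """Stack decoding over the code tree of a systematic linear block
    code with k x n generator matrix G = [I | P]. Partial paths of
    information bits are kept in a priority queue ordered by the
    correlation of their BPSK symbols (bit b -> 1-2b) with the received
    soft values r; completing a path scores the full codeword."""
    k, n = len(G), len(G[0])
    heap = [(0.0, ())]  # (negated path metric, info-bit prefix)
    while heap:
        neg_m, path = heapq.heappop(heap)
        if len(path) == k:
            # first complete path to reach the top of the stack wins
            cw = [sum(G[i][j] * path[i] for i in range(k)) % 2 for j in range(n)]
            return list(path), cw
        d = len(path)
        for bit in (0, 1):
            new = path + (bit,)
            if d == k - 1:
                # leaf: parity bits are now determined, score everything
                cw = [sum(G[i][j] * new[i] for i in range(k)) % 2 for j in range(n)]
                m = sum((1 - 2 * b) * x for b, x in zip(cw, r))
            else:
                m = -neg_m + (1 - 2 * bit) * r[d]
            heapq.heappush(heap, (-m, new))
```

Unlike Viterbi decoding, the effort adapts to the channel: on a clean received vector the correct path races down the tree and almost nothing else is extended.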
The Astrophysics Source Code Library: Supporting software publication and citation
Allen, Alice; Teuben, Peter
2018-01-01
The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.
Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.
Rice, R. F.
1972-01-01
The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.
Channel coding study for ultra-low power wireless design of autonomous sensor works
Zhang, P.; Huang, Li; Willems, F.M.J.
2011-01-01
Ultra-low power wireless design is highly demanded for building up autonomous wireless sensor networks (WSNs) for many application areas. To keep certain quality of service with limited power budget, channel coding techniques can be applied to maintain the robustness and reliability of WSNs. In this
Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network
Directory of Open Access Journals (Sweden)
Kai Lin
2016-07-01
Full Text Available With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC. The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods.
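The network-coding ingredient of CMNC, stripped of the fusion and channel-assignment machinery, reduces to the familiar XOR relay operation, sketched below as a minimal illustration (not the CMNC algorithm itself): a relay broadcasts the XOR of two packets in a single slot, and each receiver cancels the packet it already knows.

```python
def relay_encode(pkt_a, pkt_b):
    """The relay broadcasts the XOR of two equal-length packets."""
    return bytes(a ^ b for a, b in zip(pkt_a, pkt_b))

def receiver_decode(coded, known):
    """A receiver that already holds one packet XORs it out to obtain
    the other."""
    return bytes(c ^ k for c, k in zip(coded, known))
```

This is where the throughput gain comes from: one coded transmission replaces two plain forwards.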
Source Code Vulnerabilities in IoT Software Systems
Directory of Open Access Journals (Sweden)
Saleh Mohamed Alnaeli
2017-08-01
Full Text Available An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprised of millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen that are well-known among research communities to cause potential risks and security concerns, thereby decreasing a system’s robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft. The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands of 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
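The kind of static scan described above can be approximated with a crude lexical pass (a simplified sketch; the study's tooling operates on parsed source, and the function list here is an illustrative subset):

```python
import re

# Illustrative subset of the risky C library calls discussed in the study.
UNSAFE = ("strcpy", "strcat", "sprintf", "strcmp", "strlen", "memcpy")

def count_unsafe_calls(c_source):
    """Crude lexical scan for known-risky C library calls: match each
    function name followed by an opening parenthesis. Not a parser, so
    occurrences inside comments and strings are also counted."""
    return {name: len(re.findall(r"\b%s\s*\(" % name, c_source))
            for name in UNSAFE}
```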
Verification test calculations for the Source Term Code Package
International Nuclear Information System (INIS)
Denning, R.S.; Wooton, R.O.; Alexander, C.A.; Curtis, L.A.; Cybulskis, P.; Gieseke, J.A.; Jordan, H.; Lee, K.W.; Nicolosi, S.L.
1986-07-01
The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs
Tangent: Automatic Differentiation Using Source Code Transformation in Python
van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan
2017-01-01
Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...
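The core idea of source code transformation can be illustrated with Python's standard `ast` module. The toy differentiator below handles only `+`, `*` and the variable `x`; it is a sketch of what Tangent automates, not Tangent's actual API:

```python
import ast

# Toy source-code-transformation AD: walk a Python expression's AST and
# emit the AST of its derivative with respect to x.
def d(node: ast.expr) -> ast.expr:
    if isinstance(node, ast.Name):
        return ast.Constant(1 if node.id == "x" else 0)
    if isinstance(node, ast.Constant):
        return ast.Constant(0)
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        return ast.BinOp(d(node.left), ast.Add(), d(node.right))
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mult):
        # product rule: (uv)' = u'v + uv'
        return ast.BinOp(ast.BinOp(d(node.left), ast.Mult(), node.right),
                         ast.Add(),
                         ast.BinOp(node.left, ast.Mult(), d(node.right)))
    raise NotImplementedError(ast.dump(node))

expr = ast.parse("x*x + 3*x", mode="eval").body   # f(x) = x^2 + 3x
deriv = ast.Expression(body=d(expr))
ast.fix_missing_locations(deriv)
df = compile(deriv, "<derivative>", "eval")
value = eval(df, {"x": 2.0})                      # f'(x) = 2x + 3
print(ast.unparse(deriv.body), "=", value)
```

The generated derivative is itself ordinary code that can be compiled and called, which is the advantage SCT has over tracing-based approaches.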
Error Floor Analysis of Coded Slotted ALOHA over Packet Erasure Channels
DEFF Research Database (Denmark)
Ivanov, Mikhail; Graell i Amat, Alexandre; Brannstrom, F.
2014-01-01
We present a framework for the analysis of the error floor of coded slotted ALOHA (CSA) for finite frame lengths over the packet erasure channel. The error floor is caused by stopping sets in the corresponding bipartite graph, whose enumeration is, in general, not a trivial problem. We therefore identify the most dominant stopping sets for the distributions of practical interest. The derived analytical expressions allow us to accurately predict the error floor at low to moderate channel loads and characterize the unequal error protection inherent in CSA.
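The SIC decoding whose failures cause the error floor can be sketched directly: repeatedly decode any slot holding a single replica and cancel that user's other replicas. The slot assignments below are illustrative; the two-user collision at the end is a minimal stopping set:

```python
# Sketch of CSA-style successive interference cancellation on one frame.
def sic_decode(slots_of_user):
    """slots_of_user: {user: set of slot indices}. Returns decoded users."""
    pending = {u: set(s) for u, s in slots_of_user.items()}
    decoded = set()
    progress = True
    while progress:
        progress = False
        # slot -> users with an undecoded replica there
        occupancy = {}
        for u, slots in pending.items():
            for s in slots:
                occupancy.setdefault(s, []).append(u)
        for s, users in occupancy.items():
            if len(users) == 1:              # singleton slot: decode it
                u = users[0]
                decoded.add(u)
                del pending[u]               # cancel all of u's replicas
                progress = True
                break
    return decoded

# Resolvable frame: SIC peels users one by one.
ok = sic_decode({"A": {0, 1}, "B": {1, 2}, "C": {2, 3}})
# Stopping set: D and E collide in the same two slots -- never decoded.
stuck = sic_decode({"D": {0, 1}, "E": {0, 1}})
print(ok, stuck)
```

Enumerating such unresolvable configurations is exactly the stopping-set problem the paper analyzes.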
The fuel and channel thermal/mechanical behaviour code FACTAR 2.0 (LOCA)
International Nuclear Information System (INIS)
Westbye, C.J.; Mackinnon, J.C.; Gu, B.W.
1996-01-01
The computer code FACTAR 2.0 (LOCA) models the thermal and mechanical response of components within a single CANDU fuel channel under loss-of-coolant accident conditions. This code version is the successor to the FACTAR 1.x code series, and features many modelling enhancements over its predecessor. In particular, the thermal hydraulic treatment has been extended to model reverse and bi-directional coolant flow, and the axial variation in coolant flow rate. Thermal radiation is calculated by a detailed surface-to-surface model, and the ability to represent a greater range of geometries (including experimental configurations employed in code validation) has been implemented. Details of these new code treatments are described in this paper. (author)
Development of a computer code for thermohydraulic analysis of a heated channel in transients
International Nuclear Information System (INIS)
Jafari, J.; Kazeminejad, H.; Davilu, H.
2004-01-01
This paper discusses the thermohydraulic analysis of a heated channel of a nuclear reactor in transients by a computer code developed by the authors. The considered geometry is a channel of a nuclear reactor with cylindrical or planar fuel rods. The coolant is water and flows over the outer surface of the fuel rod. To model the heat transfer in the fuel rod, the two-dimensional time-dependent conduction equation has been solved by a combination of numerical methods: the orthogonal collocation method in the radial direction and the finite difference method in the axial direction. For coolant modelling, the single-phase time-dependent energy equation has been used and solved by the finite difference method. The combination of the first module, which solves the conduction in the fuel rod, and a second one, which solves the energy balance in the coolant region, constitutes the computer code (Thyc-1) for the thermohydraulic analysis of a heated channel in transients. The orthogonal collocation method maintains the accuracy and computing time of conventional finite difference methods, while the computer storage is reduced by a factor of two. The same problem has been modelled by the RELAP5/M3 system code to assess the validity of the Thyc-1 code. The good agreement of the results qualifies the developed code.
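As background for the conduction module described above, a bare 1D explicit finite-difference (FTCS) step is sketched below; the grid, diffusivity and boundary temperatures are illustrative assumptions, not Thyc-1's actual inputs:

```python
# 1D transient conduction on a rod with fixed end temperatures, marched
# to near steady state with the explicit FTCS scheme.
n, alpha, dx, dt = 21, 1.0e-5, 0.005, 0.5      # nodes, diffusivity, grid, step
r = alpha * dt / dx**2                          # must be <= 0.5 for stability
T = [0.0] * n
T[0], T[-1] = 0.0, 100.0                        # boundary temperatures

for _ in range(20000):
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + r * (T[i-1] - 2*T[i] + T[i+1])
    T = Tn

mid = T[n // 2]     # steady profile between fixed ends is linear, so ~50
print(round(mid, 2))
```

Orthogonal collocation replaces the pointwise difference stencil with a low-order polynomial expansion, which is where the storage saving cited in the abstract comes from.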
Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih
2016-04-21
Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy, negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining the techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters producing distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can decode signals from different microfluidic channels with >90% accuracy through computation even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to create integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
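The CDMA principle the device borrows can be shown with 4-chip Walsh codes: overlapping pulses from different channels add, and correlating against each channel's code separates them. The codes and amplitudes below are illustrative, not the paper's actual code set:

```python
# 4x4 Walsh-Hadamard rows: mutually orthogonal +/-1 codes, one per channel.
W = [
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
]

def correlate(signal, code):
    return sum(s * c for s, c in zip(signal, code)) / len(code)

# channels 1 and 3 each detect a particle at the same time: signals overlap
mixed = [w1 + w3 for w1, w3 in zip(W[1], W[3])]

# correlation against each code recovers which channels fired
amplitudes = [correlate(mixed, code) for code in W]
print(amplitudes)
```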
Delay reduction in persistent erasure channels for generalized instantly decodable network coding
Sorour, Sameh
2013-06-01
In this paper, we consider the problem of minimizing the decoding delay of generalized instantly decodable network coding (G-IDNC) in persistent erasure channels (PECs). By persistent erasure channels, we mean erasure channels with memory, which are modeled as a Gilbert-Elliott two-state Markov model with good and bad channel states. In this scenario, the channel erasure dependence, represented by the transition probabilities of this channel model, is an important factor that could be exploited to reduce the decoding delay. We first formulate the G-IDNC minimum decoding delay problem in PECs as a maximum weight clique problem over the G-IDNC graph. Since finding the optimal solution of this formulation is NP-hard, we propose two heuristic algorithms to solve it and compare them using extensive simulations. Simulation results show that each of these heuristics outperforms the other in certain ranges of channel memory levels. They also show that the proposed heuristics significantly outperform both the optimal strict IDNC in the literature and the channel-unaware G-IDNC algorithms. © 2013 IEEE.
Delay reduction in persistent erasure channels for generalized instantly decodable network coding
Sorour, Sameh; Aboutorab, Neda; Sadeghi, Parastoo; Karim, Mohammad Shahriar; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2013-01-01
In this paper, we consider the problem of minimizing the decoding delay of generalized instantly decodable network coding (G-IDNC) in persistent erasure channels (PECs). By persistent erasure channels, we mean erasure channels with memory, which are modeled as a Gilbert-Elliott two-state Markov model with good and bad channel states. In this scenario, the channel erasure dependence, represented by the transition probabilities of this channel model, is an important factor that could be exploited to reduce the decoding delay. We first formulate the G-IDNC minimum decoding delay problem in PECs as a maximum weight clique problem over the G-IDNC graph. Since finding the optimal solution of this formulation is NP-hard, we propose two heuristic algorithms to solve it and compare them using extensive simulations. Simulation results show that each of these heuristics outperforms the other in certain ranges of channel memory levels. They also show that the proposed heuristics significantly outperform both the optimal strict IDNC in the literature and the channel-unaware G-IDNC algorithms. © 2013 IEEE.
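The Gilbert-Elliott erasure model used above is easy to simulate; the transition probabilities below are illustrative assumptions:

```python
import random

# Two-state Markov chain (Good/Bad); every packet sent in the Bad state
# is erased, giving bursty (persistent) erasures.
def gilbert_elliott(n, p_gb=0.1, p_bg=0.3, seed=1):
    random.seed(seed)
    state, erased = "G", 0
    for _ in range(n):
        if state == "B":
            erased += 1                      # Bad state erases the packet
        flip = random.random()
        if state == "G" and flip < p_gb:
            state = "B"
        elif state == "B" and flip < p_bg:
            state = "G"
    return erased / n

rate = gilbert_elliott(100_000)
# stationary Bad-state probability: p_gb / (p_gb + p_bg) = 0.25
print(round(rate, 3))
```

The memory (burstiness) controlled by `p_gb` and `p_bg` is the channel property the proposed scheduling heuristics exploit.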
Joint Network Coding and Opportunistic Scheduling for the Bidirectional Relay Channel
Shaqfeh, Mohammad
2013-05-27
In this paper, we consider a two-way communication system in which two users communicate with each other through an intermediate relay over block-fading channels. We investigate the optimal opportunistic scheduling scheme in order to maximize the long-term average transmission rate in the system assuming symmetric information flow between the two users. Based on the channel state information, the scheduler decides that either one of the users transmits to the relay, or the relay transmits to a single user or broadcasts to both users a combined version of the two users’ transmitted information by using linear network coding. We obtain the optimal scheduling scheme by using the Lagrangian dual problem. Furthermore, in order to characterize the gains of network coding and opportunistic scheduling, we compare the achievable rate of the system versus suboptimal schemes in which the gains of network coding and opportunistic scheduling are partially exploited.
Joint Network Coding and Opportunistic Scheduling for the Bidirectional Relay Channel
Shaqfeh, Mohammad; Alnuweiri, Hussein; Alouini, Mohamed-Slim; Zafar, Ammar
2013-01-01
In this paper, we consider a two-way communication system in which two users communicate with each other through an intermediate relay over block-fading channels. We investigate the optimal opportunistic scheduling scheme in order to maximize the long-term average transmission rate in the system assuming symmetric information flow between the two users. Based on the channel state information, the scheduler decides that either one of the users transmits to the relay, or the relay transmits to a single user or broadcasts to both users a combined version of the two users’ transmitted information by using linear network coding. We obtain the optimal scheduling scheme by using the Lagrangian dual problem. Furthermore, in order to characterize the gains of network coding and opportunistic scheduling, we compare the achievable rate of the system versus suboptimal schemes in which the gains of network coding and opportunistic scheduling are partially exploited.
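The network-coded broadcast step at the relay reduces to an XOR in the simplest linear case; the payloads below are illustrative:

```python
# Instead of two separate downlink transmissions, the relay broadcasts the
# XOR of the two users' packets; each user cancels its own packet.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

msg_a = b"hello"        # user A's uplink packet
msg_b = b"world"        # user B's uplink packet

broadcast = xor_bytes(msg_a, msg_b)          # single relay transmission

recovered_at_a = xor_bytes(broadcast, msg_a) # A knows msg_a -> gets msg_b
recovered_at_b = xor_bytes(broadcast, msg_b) # B knows msg_b -> gets msg_a
print(recovered_at_a, recovered_at_b)
```

Saving one of the two downlink slots is the network-coding gain the paper combines with opportunistic scheduling.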
Improvement of Source Number Estimation Method for Single Channel Signal.
Directory of Open Access Journals (Sweden)
Zhi Dong
Full Text Available Source number estimation methods for single channel signals are investigated and improvements to each method are suggested in this work. Firstly, the single channel data is converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), obtains superior performance to GDE at low SNR. However, it cannot handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. To resolve these problems, this work makes notable improvements to both methods. A diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is used to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of the original methods is improved considerably.
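The MDL criterion referenced above selects the model order that minimizes a fit term plus a complexity penalty over the covariance eigenvalues. The sketch below uses synthetic eigenvalues (two dominant sources in white noise) and the standard ITC form of MDL; it is an illustration, not the paper's improved diagonal-loading variant:

```python
import math

# Standard MDL order selection from sorted covariance eigenvalues.
def mdl_order(eigs, n_snapshots):
    eigs = sorted(eigs, reverse=True)
    p = len(eigs)
    best_k, best_val = 0, float("inf")
    for k in range(p):                        # k = p excluded: noise
        tail = eigs[k:]                       # subspace must be non-empty
        am = sum(tail) / len(tail)            # arithmetic mean
        gm = math.exp(sum(math.log(e) for e in tail) / len(tail))
        fit = -n_snapshots * len(tail) * math.log(gm / am)
        penalty = 0.5 * k * (2 * p - k) * math.log(n_snapshots)
        if fit + penalty < best_val:
            best_k, best_val = k, fit + penalty
    return best_k

eigs = [10.0, 5.0, 1.01, 1.0, 0.99, 1.0]      # two dominant sources
print(mdl_order(eigs, n_snapshots=1000))
```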
The coding theorem for a class of quantum channels with long-term memory
International Nuclear Information System (INIS)
Datta, Nilanjana; Dorlas, Tony C
2007-01-01
In this paper, we consider the transmission of classical information through a class of quantum channels with long-term memory, which are convex combinations of memoryless channels. Hence, the memory of such channels can be considered to be given by a Markov chain which is aperiodic but not irreducible. We prove the coding theorem and weak converse for this class of channels. The main techniques that we employ are a quantum version of Feinstein's fundamental lemma (Feinstein A 1954 IRE Trans. PGIT 4 2-22, Khinchin A I 1957 Mathematical Foundations of Information Theory: II. On the Fundamental Theorems of Information Theory (New York: Dover) chapter IV) and a generalization of Helstrom's theorem (Helstrom C W 1976 Quantum detection and estimation theory Mathematics in Science and Engineering vol 123 (London: Academic))
Space-Time Coded MC-CDMA: Blind Channel Estimation, Identifiability, and Receiver Design
Directory of Open Access Journals (Sweden)
Li Hongbin
2002-01-01
Full Text Available Integrating the strengths of multicarrier (MC) modulation and code division multiple access (CDMA), MC-CDMA systems are of great interest for future broadband transmissions. This paper considers the problem of channel identification and signal combining/detection schemes for MC-CDMA systems equipped with multiple transmit antennas and space-time (ST) coding. In particular, a subspace-based blind channel identification algorithm is presented. Identifiability conditions are examined and specified which guarantee unique and perfect (up to a scalar) channel estimation when knowledge of the noise subspace is available. Several popular single-user based signal combining schemes, namely the maximum ratio combining (MRC) and the equal gain combining (EGC), which are often utilized in conventional single-transmit-antenna based MC-CDMA systems, are extended to the current ST-coded MC-CDMA (STC-MC-CDMA) system to perform joint combining and decoding. In addition, a linear multiuser minimum mean-squared error (MMSE) detection scheme is also presented, which is shown to outperform the MRC and EGC at some increased computational complexity. Numerical examples are presented to evaluate and compare the proposed channel identification and signal detection/combining techniques.
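The difference between the two single-user combiners is only in the weights. A minimal noise-free sketch with assumed branch gains:

```python
# One BPSK symbol received on two branches with known complex gains.
h = [0.8 + 0.6j, 0.3 - 0.4j]        # illustrative branch gains
s = -1.0                            # transmitted BPSK symbol
r = [hi * s for hi in h]            # received samples, one per branch

# MRC: weight each branch by the conjugate gain (SNR-optimal)
mrc = sum(hi.conjugate() * ri for hi, ri in zip(h, r))
# EGC: co-phase only -- unit-magnitude weights
egc = sum((hi.conjugate() / abs(hi)) * ri for hi, ri in zip(h, r))

decision_mrc = 1.0 if mrc.real > 0 else -1.0
decision_egc = 1.0 if egc.real > 0 else -1.0
print(decision_mrc, decision_egc)
```

With noise, MRC weights strong branches more heavily, which is why it outperforms EGC; the MMSE detector in the paper additionally suppresses multiuser interference.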
Validation of system codes RELAP5 and SPECTRA for natural convection boiling in narrow channels
Energy Technology Data Exchange (ETDEWEB)
Stempniewicz, M.M., E-mail: stempniewicz@nrg.eu; Slootman, M.L.F.; Wiersema, H.T.
2016-10-15
Highlights: • Computer codes RELAP5/Mod3.3 and SPECTRA 3.61 validated for boiling in narrow channels. • Validated codes can be used for LOCA analyses in research reactors. • Code validation based on natural convection boiling in narrow channels experiments. - Abstract: Safety analyses of LOCA scenarios in nuclear power plants are performed with so-called thermal–hydraulic system codes, such as RELAP5. Such codes are validated for typical fuel geometries applied in nuclear power plants. The question considered by this article is whether the codes can be applied to LOCA analyses in research reactors, in particular exceeding CHF in very narrow channels. In order to answer this question, validation calculations were performed with two thermal–hydraulic system codes: RELAP and SPECTRA. The validation was based on natural convection boiling in narrow channels experiments, performed by Prof. Monde et al. in the years 1990–2000. In total, 42 vertical tube and annulus experiments were simulated with both codes. A good agreement of the calculated values with the measured data was observed. The main conclusions are: • The computer codes RELAP5/Mod 3.3 (US NRC version) and SPECTRA 3.61 have been validated for natural convection boiling in narrow channels using the experiments of Monde. The dimensions applied in the experiments cover the range of values observed in typical research reactors. Therefore it is concluded that both codes are validated and can be used for LOCA analyses in research reactors, including natural convection boiling. The applicability range of the present validation is: hydraulic diameters of 1.1 ≤ D_hyd ≤ 9.0 mm, heated lengths of 0.1 ≤ L ≤ 1.0 m, pressures of 0.10 ≤ P ≤ 0.99 MPa. In most calculations the burnout was predicted to occur at lower power than that observed in the experiments. In several cases the burnout was observed at higher power. The overprediction was not larger than 16% in RELAP and 15% in
BER EVALUATION OF LDPC CODES WITH GMSK IN NAKAGAMI FADING CHANNEL
Directory of Open Access Journals (Sweden)
Surbhi Sharma
2010-06-01
Full Text Available LDPC codes (Low Density Parity Check Codes) have already proved their efficacy, showing performance near the Shannon limit. Channel coding schemes are spectrally inefficient when an unfiltered binary data stream is used to modulate an RF carrier, as this produces an RF spectrum of considerable bandwidth. Techniques have been developed to improve this spectral efficiency and ease detection. GMSK, or Gaussian-filtered Minimum Shift Keying, uses a Gaussian filter of an appropriate bandwidth to make the system spectrally efficient. A Nakagami model explains both less and more severe conditions than the Rayleigh and Rician models and provides a better fit to mobile communication channel data. In this paper we demonstrate the performance of Low Density Parity Check codes with the GMSK modulation technique (BT product = 0.25) in a Nakagami fading channel. The results show that the average bit error rate decreases as the 'm' parameter increases (less fading).
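Nakagami-m fading envelopes can be generated from a gamma distribution, since R² ~ Gamma(m, Ω/m); the parameters below are illustrative:

```python
import math, random

# Nakagami-m envelope generation: m = 1 reduces to Rayleigh fading, and
# larger m means less severe fading, matching the trend in the abstract.
def nakagami_samples(m, omega, n, seed=7):
    rng = random.Random(seed)
    return [math.sqrt(rng.gammavariate(m, omega / m)) for _ in range(n)]

samples = nakagami_samples(m=2.0, omega=1.0, n=50_000)
mean_power = sum(r * r for r in samples) / len(samples)
print(round(mean_power, 2))   # E[R^2] should be close to omega = 1.0
```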
Position-based coding and convex splitting for private communication over quantum channels
Wilde, Mark M.
2017-10-01
The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender X, a legitimate quantum receiver B, and a quantum eavesdropper E. The goal of a private communication protocol that uses such a channel is for the sender X to transmit a message in such a way that the legitimate receiver B can decode it reliably, while the eavesdropper E learns essentially nothing about which message was transmitted. The ε-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ε ∈ (0,1). The present paper provides a lower bound on the ε-one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between X and B and the "alternate" smooth max-information between X and E. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.
Directory of Open Access Journals (Sweden)
Sonia Aïssa
2008-05-01
Full Text Available This paper investigates the effects of channel estimation error at the receiver on the achievable rate of distributed space-time block coded transmission. We consider that multiple transmitters cooperate to send the signal to the receiver and derive lower and upper bounds on the mutual information of distributed space-time block codes (D-STBCs) when the channel gains and channel estimation error variances pertaining to different transmitter-receiver links are unequal. Then, assessing the gap between these two bounds, we provide a limiting value that upper bounds the latter at any input transmit powers, and also show that the gap is minimum if the receiver can estimate the channels of different transmitters with the same accuracy. We further investigate positioning the receiving node such that the mutual information bounds of D-STBCs and their robustness to the variations of the subchannel gains are maximum, as long as the summation of these gains is constant. Furthermore, we derive the optimum power transmission strategy to achieve the outage capacity lower bound of D-STBCs under arbitrary numbers of transmit and receive antennas, and provide closed-form expressions for this capacity metric. Numerical simulations are conducted to corroborate our analysis and quantify the effects of imperfect channel estimation.
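As background, the standard two-branch Alamouti STBC with perfect channel knowledge and no noise reduces to the linear combining below (the paper's subject is how imperfect estimates degrade this); the gains and symbols are illustrative:

```python
# Alamouti space-time block code: two symbols sent over two time slots.
h1, h2 = 0.9 + 0.2j, -0.4 + 0.7j     # illustrative channel gains
s1, s2 = 1 + 0j, -1 + 0j             # two BPSK symbols per Alamouti block

# slot 1: antennas send (s1, s2); slot 2: they send (-s2*, s1*)
r1 = h1 * s1 + h2 * s2
r2 = -h1 * s2.conjugate() + h2 * s1.conjugate()

# linear combining restores the symbols scaled by |h1|^2 + |h2|^2
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
print(s1_hat, s2_hat)
```

If the combiner uses estimated gains that differ from the true h1, h2, residual cross-terms remain after combining, which is the rate loss the bounds in the abstract quantify.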
Health physics source document for codes of practice
International Nuclear Information System (INIS)
Pearson, G.W.; Meggitt, G.C.
1989-05-01
Personnel preparing codes of practice often require basic Health Physics information or advice relating to radiological protection problems, and this document is written primarily to supply such information. Certain technical terms used in the text are explained in the extensive glossary. Due to the pace of change in the field of radiological protection it is difficult to produce an up-to-date document. This document was compiled during 1988, however, and therefore contains the principal changes brought about by the introduction of the Ionising Radiations Regulations (1985). The paper covers the nature of ionising radiation, its biological effects and the principles of control. It is hoped that the document will provide a useful source of information for both codes of practice and wider areas and stimulate readers to study radiological protection issues in greater depth. (author)
Running the source term code package in Elebra MX-850
International Nuclear Information System (INIS)
Guimaraes, A.C.F.; Goes, A.G.A.
1988-01-01
The Source Term Code Package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes to assist the calculation of the radioactive materials released from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of STCP runs on SDC computer systems, but as it has been written in FORTRAN 77, it is possible to run it on other systems such as IBM, Burroughs, Elebra, etc. The Elebra MX-850 version of STCP contains 5 codes: MARCH3, TRAPMELT, TCCA, VANESA and NAVA. The example presented in this report considers a small LOCA accident in a PWR-type reactor. (M.I.)
Microdosimetry computation code of internal sources - MICRODOSE 1
International Nuclear Information System (INIS)
Li Weibo; Zheng Wenzhong; Ye Changqing
1995-01-01
This paper describes a microdosimetry computation code, MICRODOSE 1, on the basis of the following methods: (1) the method of calculating f_1(z) for a charged particle in unit density tissues; (2) the method of calculating f(z) for a point source; (3) the method of applying the Fourier transform theory to the calculation of the compound Poisson process; (4) the method of using the fast Fourier transform technique to determine f(z). Some computed examples based on the code MICRODOSE 1 are given, including alpha particles emitted from 239Pu in the alveolar lung tissues and from the radon progeny RaA and RaC in the human respiratory tract. (author). 13 refs., 6 figs
SYN3D: a single-channel, spatial flux synthesis code for diffusion theory calculations
Energy Technology Data Exchange (ETDEWEB)
Adams, C. H.
1976-07-01
This report is a user's manual for SYN3D, a computer code which uses single-channel, spatial flux synthesis to calculate approximate solutions to two- and three-dimensional, finite-difference, multigroup neutron diffusion theory equations. SYN3D is designed to run in conjunction with any one of several one- and two-dimensional, finite-difference codes (required to generate the synthesis expansion functions) currently being used in the fast reactor community. The report describes the theory and equations, the use of the code, and the implementation on the IBM 370/195 and CDC 7600 of the version of SYN3D available through the Argonne Code Center.
Characterization and Optimization of LDPC Codes for the 2-User Gaussian Multiple Access Channel
Directory of Open Access Journals (Sweden)
Declercq David
2007-01-01
Full Text Available We address the problem of designing good LDPC codes for the Gaussian multiple access channel (MAC). The framework we choose is to design multiuser LDPC codes with joint belief propagation decoding on the joint graph of the 2-user case. Our main result compared to existing work is to express analytically EXIT functions of the multiuser decoder with two different approximations of the density evolution. This allows us to propose a very simple linear programming optimization for the complicated problem of LDPC code design with joint multiuser decoding. The stability condition for our case is derived and used in the optimization constraints. The codes that we obtain for the 2-user case are quite good for various rates, especially if we consider the very simple optimization procedure.
SYN3D: a single-channel, spatial flux synthesis code for diffusion theory calculations
International Nuclear Information System (INIS)
Adams, C.H.
1976-07-01
This report is a user's manual for SYN3D, a computer code which uses single-channel, spatial flux synthesis to calculate approximate solutions to two- and three-dimensional, finite-difference, multigroup neutron diffusion theory equations. SYN3D is designed to run in conjunction with any one of several one- and two-dimensional, finite-difference codes (required to generate the synthesis expansion functions) currently being used in the fast reactor community. The report describes the theory and equations, the use of the code, and the implementation on the IBM 370/195 and CDC 7600 of the version of SYN3D available through the Argonne Code Center
Large Eddy Simulation of turbulent flows in compound channels with a finite element code
International Nuclear Information System (INIS)
Xavier, C.M.; Petry, A.P.; Moeller, S.V.
2011-01-01
This paper presents the numerical investigation of the developing flow in a compound channel formed by a rectangular main channel and a gap in one of the sidewalls. A three-dimensional Large Eddy Simulation computational code with the classic Smagorinsky model is introduced, where the transient flow is modeled through the conservation equations of mass and momentum of a quasi-incompressible, isothermal continuous medium. The Finite Element Method, a Taylor-Galerkin scheme and linear hexahedral elements are applied. Numerical results of the velocity profile show the development of a shear layer, in agreement with experimental results obtained with a Pitot tube and hot wires. (author)
Joint beam design and user selection over non-binary coded MIMO interference channel
Li, Haitao; Yuan, Haiying
2013-03-01
In this paper, we discuss the problem of sum rate improvement for coded MIMO interference systems, and propose joint beam design and user selection over the interference channel. Firstly, we formulate a non-binary LDPC coded MIMO interference network model. Then, the least squares beam design for the MIMO interference system is derived, and a low complexity user selection is presented. Simulation results confirm that the sum rate can be improved by joint user selection and beam design compared with a single interference-aligning beamformer.
Performance Analysis of Iterative Decoding Algorithms for PEG LDPC Codes in Nakagami Fading Channels
Directory of Open Access Journals (Sweden)
O. Al Rasheed
2013-11-01
Full Text Available In this paper we give a comparative analysis of decoding algorithms of Low Density Parity Check (LDPC) codes in a channel with the Nakagami distribution of the fading envelope. We consider the Progressive Edge-Growth (PEG) method and the Improved PEG method for the parity check matrix construction, which can be used to avoid short girths, small trapping sets and a high level of error floor. A comparative analysis of several classes of LDPC codes in various propagation conditions and decoded using different decoding algorithms is also presented.
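The simplest member of the iterative decoder family compared above is hard-decision bit flipping; the small parity-check matrix below is illustrative, not a PEG construction:

```python
# Hard-decision bit-flipping decoding on a small parity-check matrix.
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]

def bit_flip_decode(word, max_iters=10):
    w = list(word)
    for _ in range(max_iters):
        syndrome = [sum(h * b for h, b in zip(row, w)) % 2 for row in H]
        if not any(syndrome):
            return w                           # all checks satisfied
        # count unsatisfied checks touching each bit, flip the worst bit
        votes = [sum(s for row, s in zip(H, syndrome) if row[i])
                 for i in range(len(w))]
        w[votes.index(max(votes))] ^= 1
    return w

received = [1, 0, 0, 0, 0, 0, 0]   # all-zero codeword with bit 0 flipped
print(bit_flip_decode(received))
```

Belief-propagation decoders replace the hard votes with soft log-likelihood messages on the same graph, which is where girth and trapping sets matter.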
System Performance of Concatenated STBC and Block Turbo Codes in Dispersive Fading Channels
Directory of Open Access Journals (Sweden)
Kam Tai Chan
2005-05-01
Full Text Available A new scheme concatenating the block turbo code (BTC) with the space-time block code (STBC) for an OFDM system in dispersive fading channels is investigated in this paper. The good error correcting capability of BTC and the large diversity gain characteristics of STBC can be achieved simultaneously. The resulting receiver outperforms the iterative convolutional Turbo receiver with the maximum a posteriori probability expectation maximization (MAP-EM) algorithm. Because of its ability to perform the encoding and decoding processes in parallel, the proposed system is easy to implement in real time.
On the calculation of the minimax-converse of the channel coding problem
Elkayam, Nir; Feder, Meir
2015-01-01
A minimax-converse has been suggested for the general channel coding problem by Polyanskiy et al. This converse comes in two flavors. The first flavor is generally used for the analysis of the coding problem with non-vanishing error probability and provides an upper bound on the rate given the error probability. The second flavor fixes the rate and provides a lower bound on the error probability. Both converses are given as a min-max optimization problem of an appropriate binary hypothesis tes...
COMPASS: A source term code for investigating capillary barrier performance
International Nuclear Information System (INIS)
Zhou, Wei; Apted, J.J.
1996-01-01
A computer code COMPASS, based on a compartment model approach, is developed to calculate the near-field source term of a High-Level-Waste repository under unsaturated conditions. COMPASS is applied to evaluate the expected performance of Richard's (capillary) barriers as backfills to divert infiltrating groundwater at Yucca Mountain. Comparing the release rates of four typical nuclides with and without the Richard's barrier, it is shown that the Richard's barrier significantly decreases the peak release rates from the Engineered-Barrier-System (EBS) into the host rock
Development, verification and validation of the fuel channel behaviour computer code FACTAR
Energy Technology Data Exchange (ETDEWEB)
Westbye, C J; Brito, A C; MacKinnon, J C; Sills, H E; Langman, V J [Ontario Hydro, Toronto, ON (Canada)
1996-12-31
FACTAR (Fuel And Channel Temperature And Response) is a computer code developed to simulate the transient thermal and mechanical behaviour of 37-element or 28-element fuel bundles within a single CANDU fuel channel for moderate loss of coolant accident conditions, including transition and large break LOCAs (loss of coolant accidents) with emergency coolant injection assumed available. FACTAR's predictions of fuel temperature and sheath failure times are used in subsequent assessments of fission product releases and fuel string expansion. This paper discusses the origin and development history of FACTAR, presents the mathematical models and solution technique and the detailed quality assurance procedures that are followed during development, and reports the future development of the code. (author). 27 refs., 3 figs.
On Multiple Users Scheduling Using Superposition Coding over Rayleigh Fading Channels
Zafar, Ammar
2013-02-20
In this letter, numerical results are provided to analyze the gains of multiple user scheduling via superposition coding with successive interference cancellation, in comparison with conventional single user scheduling in Rayleigh block-fading broadcast channels. The information-theoretic optimal power, rate and decoding order allocation for the superposition coding scheme are considered, and the corresponding histogram for the optimal number of scheduled users is evaluated. Results show that at optimality there is a high probability that only two or three users are scheduled per channel transmission block. Numerical results for the gains of multiple user scheduling in terms of the long-term throughput under hard and proportional fairness, as well as for fixed merit weights for the users, are also provided. These results show that the performance gain of multiple user scheduling over single user scheduling increases when the total number of users in the network increases, and it can exceed 10% for a high number of users.
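The superposition coding rate allocation with SIC can be illustrated numerically for two users on a degraded link; the powers, gains and power split are assumed values with unit noise power:

```python
import math

# Two-user superposition coding with successive interference cancellation:
# the strong user decodes and cancels the weak user's signal first; the
# weak user treats the strong user's signal as noise.
P, g_strong, g_weak, alpha = 10.0, 1.0, 0.25, 0.2   # alpha: strong user's share

# weak user: interference-limited rate (bits per channel use)
r_weak = math.log2(1 + g_weak * (1 - alpha) * P / (1 + g_weak * alpha * P))
# strong user: interference-free after SIC
r_strong = math.log2(1 + g_strong * alpha * P)

print(round(r_strong, 3), round(r_weak, 3))
```

Sweeping `alpha` traces out the rate pairs among which the scheduler in the letter optimizes per fading block.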
Rice, R. F.
1974-01-01
End-to-end system considerations involving channel coding and data compression are reported which could drastically improve the efficiency of communicating pictorial information from future planetary spacecraft. In addition to presenting new and potentially significant system considerations, this report attempts to fill a need for a comprehensive tutorial which makes much of this subject accessible to readers whose disciplines lie outside of communication theory.
An upper bound for codes for the noisy two-access binary adder channel
Tilborg, van H.C.A.
1986-01-01
Using earlier methods, a combinatorial upper bound is derived for $|C| \cdot |D|$, where $(C, D)$ is a $\delta$-decodable code pair for the noisy two-access binary adder channel. Asymptotically, this bound reduces to $R_{1} = R_{2} \leq \frac{3}{2} + e \log_{2} e - \left(\frac{1}{2} + e\right) \log_{2}(1 + 2e) = \frac{1}{2} - e + \cdots$
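The asymptotic bound quoted above can be evaluated numerically. The sketch below assumes the reading R1 = R2 ≤ 3/2 + e·log2(e) − (1/2 + e)·log2(1 + 2e), taking the e·log2(e) term as 0 at e = 0 and ignoring the truncated tail of the expression in the abstract.

```python
import numpy as np

def rate_bound(e):
    """Upper bound on R1 = R2 as a function of the noise parameter e
    (the e*log2(e) term is taken as its limit 0 at e = 0)."""
    term = e * np.log2(e) if e > 0 else 0.0
    return 1.5 + term - (0.5 + e) * np.log2(1 + 2 * e)

for e in (0.0, 0.01, 0.05, 0.1):
    print(f"e = {e:.2f}: R1 = R2 <= {rate_bound(e):.4f}")
```

At e = 0 the bound collapses to 3/2 and it decreases as the channel gets noisier, which is the qualitative behaviour one would expect.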
Improving 3D-Turbo Code's BER Performance with a BICM System over Rayleigh Fading Channel
Directory of Open Access Journals (Sweden)
R. Yao
2016-12-01
Full Text Available Classical Turbo codes suffer from a high error floor due to their small Minimum Hamming Distance (MHD). The newly proposed 3D-Turbo code can effectively increase the MHD and achieve a lower error floor by adding a rate-1 post encoder. In 3D-Turbo codes, part of the parity bits from the classical Turbo encoder is further encoded by the post encoder. In this paper, a novel Bit-Interleaved Coded Modulation (BICM) system is proposed by combining rotated-mapping Quadrature Amplitude Modulation (QAM) and 3D-Turbo codes to improve the Bit Error Rate (BER) performance of 3D-Turbo codes over the Rayleigh fading channel. A key-bit protection scheme and a Two-Dimension (2D) iterative soft demodulating-decoding algorithm are developed for the proposed BICM system. Simulation results show that the proposed system can obtain about 0.8-1.0 dB of gain at a BER of 10^{-6}, compared with the existing BICM system with Gray-mapping QAM.
An Efficient Code-Timing Estimator for DS-CDMA Systems over Resolvable Multipath Channels
Directory of Open Access Journals (Sweden)
Jian Li
2005-04-01
Full Text Available We consider the problem of training-based code-timing estimation for the asynchronous direct-sequence code-division multiple-access (DS-CDMA) system. We propose a modified large-sample maximum-likelihood (MLSML) estimator that can be used for code-timing estimation in DS-CDMA systems over resolvable multipath channels in closed form. Simulation results show that MLSML provides a high correct-acquisition probability and high estimation accuracy. Simulation results also show that MLSML has very good near-far resistance because it employs a data model similar to that used in adaptive array processing, where strong interferences can be suppressed.
RBMK fuel channel blockage analysis by MCNP5, DRAGON and RELAP5-3D codes
International Nuclear Information System (INIS)
Parisi, C.; D'Auria, F.
2007-01-01
The aim of this work was to perform precise criticality analyses with the Monte Carlo code MCNP5 for a Fuel Channel (FC) flow blockage accident, considering as the calculation domain a single FC and a 3x3 lattice of RBMK cells. Boundary conditions for the MCNP5 input were derived from a previous transient calculation with the state-of-the-art codes HELIOS/RELAP5-3D. In a preliminary phase, suitable MCNP5 models of a single cell and of a small lattice of RBMK cells were set up; criticality analyses were performed at reference conditions for 2.0% and 2.4% enriched fuel. These analyses were compared with results obtained by the University of Pisa (UNIPI) using the deterministic transport code DRAGON and with results obtained by the NIKIET Institute using MCNP4C. Then, the changes of the main physical parameters (e.g. fuel and water/steam temperature, water density, graphite temperature) at different time intervals of the FC blockage transient were evaluated by a RELAP5-3D calculation. This information was used to set up further MCNP5 inputs. Criticality analyses were performed for the different systems (single channel and lattice) at those transient states, yielding the global criticality as a function of transient time. Finally, the weight of each parameter's change (fuel overheating and channel voiding) on the global criticality was assessed. The results showed that the reactivity of a blocked FC is always negative; nevertheless, when the effect of neighbouring channels is considered, the global reactivity trend reverses, becoming slightly positive or not changing at all, in inverse relation to the fuel enrichment. (author)
Throughput and Delay Analysis of HARQ with Code Combining over Double Rayleigh Fading Channels
Chelli, Ali
2018-01-15
This paper proposes the use of hybrid automatic repeat request (HARQ) with code combining (HARQ-CC) to offer reliable communications over double Rayleigh channels. The double Rayleigh fading channel is of particular interest to vehicle-to-vehicle communication systems as well as amplify-and-forward relaying and keyhole channels. This work studies the performance of HARQ-CC over double Rayleigh channels from an information theoretic perspective. Analytical approximations are derived for the
A Comparison of Source Code Plagiarism Detection Engines
Lancaster, Thomas; Culwin, Fintan
2004-06-01
Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years, and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally, the most common and effective techniques are seen to involve tokenising student submissions and then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
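The tokenise-then-search pipeline described above can be sketched directly. This toy is not MOSS's or JPlag's actual algorithm: the token classes and the keyword set are assumptions made here. It maps identifiers and literals to placeholder tokens, so renamed variables still match, then measures the longest common token run with the classic dynamic program.

```python
import re

KEYWORDS = {"def", "return", "for", "in", "if", "else", "while"}

def tokenize(source):
    """Crude tokenizer: identifiers and numbers become placeholder tokens,
    so renaming variables does not hide copied structure."""
    tokens = []
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|\S", source):
        if tok in KEYWORDS:
            tokens.append(tok)
        elif tok[0].isalpha() or tok[0] == "_":
            tokens.append("ID")
        elif tok.isdigit():
            tokens.append("NUM")
        else:
            tokens.append(tok)
    return tokens

def longest_common_substring(a, b):
    """Length of the longest common token run, via the O(|a||b|) DP."""
    best, prev = 0, [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

s1 = "def total(xs):\n    acc = 0\n    for x in xs:\n        acc = acc + x\n    return acc"
s2 = "def summe(ys):\n    s = 0\n    for y in ys:\n        s = s + y\n    return s"
t1, t2 = tokenize(s1), tokenize(s2)
print(longest_common_substring(t1, t2), "of", len(t1), "tokens match")
```

Here the two snippets differ only in identifier names, so after tokenisation the longest common run covers every token.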
Source Code Verification for Embedded Systems using Prolog
Directory of Open Access Journals (Sweden)
Frank Flederer
2017-01-01
Full Text Available System-relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. A common technique to verify programs is the analysis of their abstract syntax tree (AST). Tree structures can be elegantly analyzed with the logic programming language Prolog. Moreover, Prolog offers further advantages for a thorough analysis: on the one hand, it natively provides versatile options to efficiently process tree or graph data structures; on the other hand, Prolog's non-determinism and backtracking make it easy to test different variations of the program flow. A rule-based approach with Prolog allows the verification goals to be characterized in a concise and declarative way. In this paper, we describe our approach to verifying the source code of a flash file system with the help of Prolog. The flash file system is written in C++ and has been developed particularly for use in satellites. We transform a given abstract syntax tree of C++ source code into Prolog facts and derive the call graph and the execution sequence (tree), which are then tested against the verification goals. The different program-flow branches due to control structures are derived by backtracking as subtrees of the full execution sequence. Finally, these subtrees are verified in Prolog. We illustrate our approach with a case study in which we search for incorrect applications of semaphores in embedded software using the real-time operating system RODOS. We rely on computation tree logic (CTL) and have designed an embedded domain-specific language (DSL) in Prolog to express the verification goals.
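The paper's pipeline is written in Prolog over C++ ASTs. As a loose analogy only, with Python's ast module standing in for the C++ front end and a simple call graph standing in for the full execution tree, the same derive-facts-from-the-AST idea looks like this:

```python
import ast
from collections import defaultdict

# Toy input program; the semaphore-style lock/unlock pairing is the kind of
# property the paper checks with CTL rules over the derived structures.
SOURCE = """
def lock(sem): pass
def unlock(sem): pass

def worker(sem, items):
    lock(sem)
    for it in items:
        process(it)
    unlock(sem)

def process(it):
    print(it)
"""

def call_graph(source):
    """Map each function definition to the names it calls, by walking the AST."""
    tree = ast.parse(source)
    graph = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for sub in ast.walk(node):
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    graph[node.name].add(sub.func.id)
    return dict(graph)

graph = call_graph(SOURCE)
print(graph)
```

A rule engine (Prolog in the paper) would then query such facts, e.g. "every path that calls lock later calls unlock", rather than re-walking the raw tree each time.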
International Nuclear Information System (INIS)
Bouzid, M.; Benkherouf, H.; Benzadi, K.
2011-01-01
In this paper, we propose a stochastic joint source-channel scheme developed for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is an LSF encoding scheme based on a reduced-complexity stochastic split vector quantizer optimized for noisy channels. For transmission over a noisy channel, we first show that our LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder designed with a split vector quantizer. We then apply the LSF-SSCOVQ-RC encoder (with weighted distance) to the robust encoding of the LSF parameters of the 2.4 kbit/s MELP speech coder operating over a noisy/noiseless channel. Simulation results show that the proposed LSF encoder, incorporated in MELP, ensures better performance than the original MELP MSVQ of 25 bits/frame, especially when the transmission channel is highly disturbed. Indeed, the LSF-SSCOVQ-RC yields a significant improvement in LSF encoding performance by ensuring reliable transmission over noisy channels.
A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE.
Al-Dweri, Feras M O; Lallena, Antonio M; Vilches, Manuel
2004-06-21
Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3 degrees with respect to the beam axis are relevant for the dosimetry of the GammaKnife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x² + y²)^{1/2} and their polar angle θ, on one side, and between tan⁻¹(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3% for the 18 and 14 mm helmets, and within 10% for the 8 and 4 mm ones. In addition, the simplified model permits a strong reduction (by more than a factor of 15) in computational time.
Abediseid, Walid
2012-01-01
complexity of sphere decoding for the quasi-static, lattice space-time (LAST) coded MIMO channel. Specifically, we derive an upper bound on the tail distribution of the decoder's computational complexity. We show that when the computational complexity exceeds
Djordjevic, Ivan B
2007-08-06
We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. In contrast to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. An excellent bit-error rate improvement over the uncoded case is found.
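The Gamma-Gamma turbulence model mentioned above is easy to sample from, since the irradiance is distributed as the product of two independent unit-mean Gamma variates (large- and small-scale eddies). The sketch below uses illustrative parameter values, not values from the paper, and checks the samples against the standard scintillation-index identity σ_I² = 1/α + 1/β + 1/(αβ).

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_gamma_samples(alpha, beta, n):
    """Gamma-Gamma irradiance: product of two independent unit-mean
    Gamma variates modelling large- and small-scale turbulence."""
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
    return x * y

alpha, beta, n = 4.0, 1.9, 200_000        # illustrative turbulence parameters
I = gamma_gamma_samples(alpha, beta, n)
si_empirical = I.var() / I.mean() ** 2    # scintillation index estimate
si_theory = 1 / alpha + 1 / beta + 1 / (alpha * beta)
print(f"scintillation index: empirical {si_empirical:.3f}, theory {si_theory:.3f}")
```

Such a sampler is the usual starting point for Monte Carlo BER evaluation of coded schemes over this channel.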
A thermal-hydraulic code for transient analysis in a channel with a rod bundle
International Nuclear Information System (INIS)
Khodjaev, I.D.
1995-01-01
The paper contains the model of transient vapor-liquid flow in a channel with a rod bundle in the core of a nuclear power plant. The computer code has been developed to predict dryout and post-dryout heat transfer in rod bundles of a nuclear reactor core under loss-of-coolant accidents. Economizer, bubble, dispersed-annular and dispersed regimes are taken into account. The computer code provides a three-field representation of two-phase flow in the dispersed-annular regime: the three fields are continuous vapor, continuous liquid film, and entrained liquid drops. For the description of the dispersed flow regime, a two-temperature, single-velocity model is used. Relative droplet motion is taken into account for droplet-to-vapor heat transfer. The conservation equations for each regime are solved using an effective numerical technique, which makes it possible to determine the distribution of flow parameters along the perimeter of the fuel elements. Comparison of the calculated results with experimental data shows that the computer code adequately describes the complex processes in a channel with a rod bundle during an accident.
A thermal-hydraulic code for transient analysis in a channel with a rod bundle
Energy Technology Data Exchange (ETDEWEB)
Khodjaev, I.D. [Research & Engineering Centre of Nuclear Plants Safety, Electrogorsk (Russian Federation)
1995-09-01
The paper contains the model of transient vapor-liquid flow in a channel with a rod bundle in the core of a nuclear power plant. The computer code has been developed to predict dryout and post-dryout heat transfer in rod bundles of a nuclear reactor core under loss-of-coolant accidents. Economizer, bubble, dispersed-annular and dispersed regimes are taken into account. The computer code provides a three-field representation of two-phase flow in the dispersed-annular regime: the three fields are continuous vapor, continuous liquid film, and entrained liquid drops. For the description of the dispersed flow regime, a two-temperature, single-velocity model is used. Relative droplet motion is taken into account for droplet-to-vapor heat transfer. The conservation equations for each regime are solved using an effective numerical technique, which makes it possible to determine the distribution of flow parameters along the perimeter of the fuel elements. Comparison of the calculated results with experimental data shows that the computer code adequately describes the complex processes in a channel with a rod bundle during an accident.
How could the replica method improve accuracy of performance assessment of channel coding?
Energy Technology Data Exchange (ETDEWEB)
Kabashima, Yoshiyuki [Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 226-8502 (Japan)], E-mail: kaba@dis.titech.ac.jp
2009-12-01
We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff's bound over a code ensemble. We show that the resulting bound in the framework can be directly assessed by the replica method, which has been developed in statistical mechanics of disordered systems, whereas in Gallager's original methodology further replacement by another bound utilizing Jensen's inequality is necessary. Our approach associates a seemingly ad hoc restriction with respect to an adjustable parameter for optimizing the bound with a phase transition between two replica symmetric solutions, and can improve the accuracy of performance assessments of general code ensembles including low density parity check codes, although its mathematical justification is still open.
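Gallager's framework referred to above bounds the ensemble-average decoding error probability by 2^(−n·Er(R)). As a minimal worked example, tied to the BSC with uniform inputs rather than to the specific ensembles of the paper, the random coding exponent reduces to a one-dimensional optimization over the tilting parameter ρ:

```python
import numpy as np

def gallager_E0(rho, p):
    """Gallager's E0(rho) for the BSC(p) with uniform inputs (in bits)."""
    s = p ** (1 / (1 + rho)) + (1 - p) ** (1 / (1 + rho))
    return rho - (1 + rho) * np.log2(s)

def random_coding_exponent(R, p, grid=1001):
    """Er(R) = max over rho in [0,1] of E0(rho) - rho*R; the ensemble-average
    error probability is upper-bounded by 2^(-n*Er(R))."""
    rho = np.linspace(0.0, 1.0, grid)
    return float(np.max(gallager_E0(rho, p) - rho * R))

p = 0.05
C = 1 + p * np.log2(p) + (1 - p) * np.log2(1 - p)   # BSC capacity
for R in (0.3, 0.5, 0.7):
    print(f"R = {R}: Er = {random_coding_exponent(R, p):.4f} (capacity {C:.3f})")
```

The ρ achieving the maximum is exactly the "seemingly ad hoc" adjustable parameter the abstract says the replica analysis reinterprets as a phase-transition point.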
Coded throughput performance simulations for the time-varying satellite channel. M.S. Thesis
Han, Li
1995-01-01
The design of a reliable satellite communication link involving the data transfer from a small, low-orbit satellite to a ground station, but through a geostationary satellite, was examined. In such a scenario, the received signal power to noise density ratio increases as the transmitting low-orbit satellite comes into view, and then decreases as it then departs, resulting in a short-duration, time-varying communication link. The optimal values of the small satellite antenna beamwidth, signaling rate, modulation scheme and the theoretical link throughput (in bits per day) have been determined. The goal of this thesis is to choose a practical coding scheme which maximizes the daily link throughput while satisfying a prescribed probability of error requirement. We examine the throughput of both fixed rate and variable rate concatenated forward error correction (FEC) coding schemes for the additive white Gaussian noise (AWGN) channel, and then examine the effect of radio frequency interference (RFI) on the best coding scheme among them. Interleaving is used to mitigate degradation due to RFI. It was found that the variable rate concatenated coding scheme could achieve 74 percent of the theoretical throughput, equivalent to 1.11 Gbits/day based on the cutoff rate R0. For comparison, 87 percent is achievable for the AWGN-only case.
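The cutoff rate R0 that the throughput figures are referenced to has a closed form for soft-decision BPSK on the AWGN channel, R0 = 1 − log2(1 + e^(−Es/N0)). The thesis's link budget details are not reproduced here, so the SNR values below are illustrative only:

```python
import numpy as np

def cutoff_rate_bpsk(es_n0_db):
    """Cutoff rate R0 (bits/channel use) for soft-decision BPSK on AWGN:
    R0 = 1 - log2(1 + exp(-Es/N0))."""
    es_n0 = 10 ** (es_n0_db / 10)
    return 1 - np.log2(1 + np.exp(-es_n0))

for snr_db in (-3, 0, 3, 6):
    print(f"Es/N0 = {snr_db:+d} dB: R0 = {cutoff_rate_bpsk(snr_db):.3f}")
```

Integrating such a curve over the pass geometry (SNR rising and falling as the satellite comes into view) is what yields a daily throughput figure in bits per day.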
On the performance of diagonal lattice space-time codes for the quasi-static MIMO channel
Abediseid, Walid; Alouini, Mohamed-Slim
2013-01-01
There has been tremendous work done on designing space-time codes for the quasi-static multiple-input multiple-output (MIMO) channel. All coding designs to date focus on high performance, high rates, or low-complexity encoding and decoding
Modelling RF sources using 2-D PIC codes
Energy Technology Data Exchange (ETDEWEB)
Eppley, K.R.
1993-03-01
In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross-field devices (magnetrons, cross-field amplifiers, etc.) and pencil-beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.
International Nuclear Information System (INIS)
Liu, X.J.; Yang, T.; Cheng, X.
2014-01-01
To analyze the local thermal-hydraulic parameters in the supercritical water reactor-fuel qualification test (SCWR-FQT) fuel bundle with a flow blockage, a coupled sub-channel and system code package is developed in this paper. Both the sub-channel code and the system code are adapted for transient analysis of the SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the coupled code, the whole system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal-hydraulic parameters are predicted by the sub-channel code COBRA-SC. Sensitivity analyses are carried out in the ATHLET-SC and COBRA-SC codes, respectively, to identify appropriate models for describing the flow blockage phenomenon in the test loop. Some measures to mitigate the accident consequences are also trialed to demonstrate their effectiveness. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test, and that the peak cladding temperature caused by blockage in the fuel assembly can be reduced effectively by the safety measures of the SCWR-FQT. (author)
Zhao, Yaqin; Zhong, Xin; Wu, Di; Zhang, Ye; Ren, Guanghui; Wu, Zhilu
2013-09-01
Optical code-division multiple access (OCDMA) systems usually allocate orthogonal or quasi-orthogonal codes to the active users. When transmitting through an atmospheric scattering channel, the coding pulses are broadened and the orthogonality of the codes is degraded. In the truly asynchronous case, in which both the chips and the bits are asynchronous across active users, pulse broadening significantly degrades system performance. In this paper, we evaluate the performance of a 2D asynchronous hard-limiting wireless OCDMA system over an atmospheric scattering channel. The probability density function of the multiple access interference in the truly asynchronous case is given. The bit error rate decreases as the ratio of the chip period to the root-mean-square delay spread increases, and the channel limits the bit rate to different levels as the chip period varies.
Douik, Ahmed S.
2015-11-05
This paper considers the multicast decoding delay reduction problem for generalized instantly decodable network coding (G-IDNC) over persistent erasure channels with feedback imperfections. The feedback scenario discussed is the most general situation in which the sender does not always receive acknowledgments from the receivers after each transmission and the feedback communications are subject to loss. The decoding delay increment expressions are derived and employed to express the decoding delay reduction problem as a maximum weight clique problem in the G-IDNC graph. This paper provides a theoretical analysis of the expected decoding delay increase at each time instant. Problem formulations in simpler channel and feedback models are shown to be special cases of the proposed generalized formulation. Since finding the optimal solution to the problem is known to be NP-hard, a suboptimal greedy algorithm is designed and compared with blind approaches proposed in the literature. Through extensive simulations, the proposed algorithm is shown to outperform the blind methods in all situations and to achieve significant improvement, particularly for high time-correlated channels.
Investigation of flow blockage in a fuel channel with the ASSERT subchannel code
International Nuclear Information System (INIS)
Harvel, G.D.; Dam, R.; Soulard, M.
1996-01-01
On behalf of New Brunswick Power, a study was undertaken to determine whether safe operation of a CANDU-6 reactor can be maintained at low reactor powers with debris present in the fuel channels. In particular, the concern was whether a small blockage due to the presence of debris would cause a significant reduction in dryout powers, and hence to determine the safe operating power level that maintains dryout margins. In this work the NUCIRC(1,2), ASSERT-IV(3), and ASSERT-PV(3) computer codes are used in conjunction with a pool boiling model to determine the safe operating power level which maintains dryout safety margins. NUCIRC is used to provide channel boundary conditions for the ASSERT codes and to select a representative channel for analysis. The pool boiling model is provided as a limiting lower-bound analysis. As expected, the ASSERT results predict higher CHF ratios than the pool boiling model. In general, the ASSERT results show that as the model comes closer to representing a complete blockage, it tends toward, but does not reach, the pool boiling result. (author)
Douik, Ahmed S.; Sorour, Sameh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2015-01-01
This paper considers the multicast decoding delay reduction problem for generalized instantly decodable network coding (G-IDNC) over persistent erasure channels with feedback imperfections. The feedback scenario discussed is the most general situation in which the sender does not always receive acknowledgments from the receivers after each transmission and the feedback communications are subject to loss. The decoding delay increment expressions are derived and employed to express the decoding delay reduction problem as a maximum weight clique problem in the G-IDNC graph. This paper provides a theoretical analysis of the expected decoding delay increase at each time instant. Problem formulations in simpler channel and feedback models are shown to be special cases of the proposed generalized formulation. Since finding the optimal solution to the problem is known to be NP-hard, a suboptimal greedy algorithm is designed and compared with blind approaches proposed in the literature. Through extensive simulations, the proposed algorithm is shown to outperform the blind methods in all situations and to achieve significant improvement, particularly for high time-correlated channels.
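The greedy heuristic mentioned in the abstract targets a maximum weight clique problem, which can be sketched generically. The graph instance below is invented for illustration; in the actual G-IDNC formulation the adjacency rule (which coding opportunities can be served by one XOR transmission) and the vertex weights come from the paper's delay analysis, not from this toy.

```python
def greedy_max_weight_clique(vertices, weight, adjacent):
    """Greedy heuristic for maximum-weight clique: repeatedly add the
    heaviest vertex still adjacent to everything chosen so far."""
    clique = []
    for v in sorted(vertices, key=weight, reverse=True):
        if all(adjacent(v, u) for u in clique):
            clique.append(v)
    return clique

# toy instance: vertices are (receiver, wanted packet) pairs; an edge means
# the two coding opportunities are compatible (adjacency here is invented)
vertices = [("r1", "p2"), ("r2", "p1"), ("r3", "p3"), ("r1", "p3")]
weights = {("r1", "p2"): 3.0, ("r2", "p1"): 2.5, ("r3", "p3"): 2.0, ("r1", "p3"): 1.0}
edges = {frozenset({("r1", "p2"), ("r2", "p1")}),
         frozenset({("r2", "p1"), ("r3", "p3")}),
         frozenset({("r1", "p2"), ("r3", "p3")})}

clique = greedy_max_weight_clique(
    vertices,
    weight=lambda v: weights[v],
    adjacent=lambda a, b: frozenset({a, b}) in edges,
)
print(clique)
```

Since maximum weight clique is NP-hard, this greedy pass trades optimality for linear-in-edges work per selection, which is the same trade-off the paper's suboptimal algorithm makes.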
Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley
2018-05-01
We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and, if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal's 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best 90% were available over our testing period.
OSSMETER D3.4 – Language-Specific Source Code Quality Analysis
J.J. Vinju (Jurgen); A. Shahi (Ashim); H.J.S. Basten (Bas)
2014-01-01
This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and prototypes of the tools that are needed for source code quality analysis in open source software projects. It builds upon the results of: • Deliverable 3.1 where infra-structure and
The Breakdown: Hillslope Sources of Channel Blocks in Bedrock Landscapes
Selander, B.; Anderson, S. P.; Rossi, M.
2017-12-01
Block delivery from hillslopes is a poorly understood process that influences bedrock channel incision rates and shapes steep terrain. Previous studies demonstrate that hillslope sediment delivery rate and grain size increase with channel downcutting rate or fracture density (Attal et al., 2015, ESurf). However, blocks that exceed the competence of the channel can inhibit incision. In Boulder Creek, a bedrock channel in the Colorado Front Range, large boulders (>1 m diameter) are most numerous in the steepest channel reaches; their distribution seems to reflect autogenic channel-hillslope feedback between incision rate and block delivery (Shobe et al., 2016, GRL). It is clear that the processes, rates of production, and delivery of large blocks from hillslopes into channels are critical to our understanding of steep terrain evolution. Fundamental questions are 1) whether block production or block delivery is rate limiting, 2) what mechanisms release blocks, and 3) how block production and transport affect slope morphology. As a first step, we map rock outcrops on the granodiorite hillslopes lining Boulder Creek within Boulder Canyon using a high-resolution DEM. Our algorithm uses high ranges of curvature values in conjunction with slopes steeper than the angle of repose to quickly identify rock outcrops. We field-verified mapped outcrop and sediment-mantled locations on hillslopes above and below the channel knickzone. We find a greater abundance of exposed rock outcrops on steeper hillslopes in Boulder Canyon. Additionally, we find that channel reaches with large in-channel blocks are located at the base of hillslopes with large areas of exposed bedrock, while reaches lacking large in-channel blocks tend to be at the base of predominantly soil-mantled and forested hillslopes. These observations support the model of block delivery and channel incision of Shobe et al. (2016, GRL). Moreover, these results highlight the conundrum of how rapid channel incision is
Joint nonbinary low-density parity-check codes and modulation diversity over fading channels
Shi, Zhiping; Li, Tiffany Jing; Zhang, Zhongpei
2010-09-01
A joint exploitation of coding and diversity techniques to achieve efficient, reliable wireless transmission is considered. The system comprises a powerful non-binary low-density parity-check (LDPC) code that is soft-decoded to supply strong error protection, a quadrature amplitude modulation (QAM) mapper that directly takes in the non-binary LDPC symbols, and a modulation diversity operator that provides power- and bandwidth-efficient diversity gain. By relaxing the rate of the modulation diversity rotation matrices to below 1, we show that a better rate allocation can be arranged between the LDPC codes and the modulation diversity, which brings significant performance gain over previous systems. To facilitate the design and evaluation of the relaxed modulation diversity rotation matrices, three practical design methods based on a set of criteria are given and their pairwise error rates are analyzed. With EXIT charts, we investigate the convergence between the demodulator and the decoder, and a rate-matching method based on the EXIT analysis is presented. Through analysis and simulations, we show that our strategies are very effective in combating random fading and strong noise on fading channels.
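The modulation-diversity ingredient above relies on rotating the constellation so that no two points agree in any single coordinate, which is conventionally scored by the minimum product distance. The small sketch below uses pairs of 4-PAM symbols and rate-1 square rotations rather than the paper's relaxed-rate matrices; the angles are chosen for illustration only, with none claimed optimal.

```python
import numpy as np

def rotated_pairs(symbols, theta):
    """Rotate all pairs of real PAM symbols by theta; with component
    interleaving the two coordinates then fade independently."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    pts = np.array([(a, b) for a in symbols for b in symbols], dtype=float)
    return pts @ R.T

def min_product_distance(points):
    """Minimum product distance over all point pairs; zero means some pair
    differs in only one coordinate, i.e. modulation diversity is lost."""
    best = np.inf
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = np.abs(points[i] - points[j])
            if d[0] < 1e-9 or d[1] < 1e-9:
                return 0.0
            best = min(best, d[0] * d[1])
    return best

pam = [-3.0, -1.0, 1.0, 3.0]
for deg in (0.0, 14.0, 31.7, 45.0):   # illustrative angles only
    mpd = min_product_distance(rotated_pairs(pam, np.radians(deg)))
    print(f"theta = {deg:5.1f} deg: min product distance = {mpd:.4f}")
```

Unrotated (0°) and 45° rotations both leave symbol pairs that collide in one coordinate, so their product distance is zero, while generic angles keep every coordinate pair distinct; maximizing this metric is the design criterion the paper's rotation matrices are built around.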
Benchmark evaluation of the RELAP code to calculate boiling in narrow channels
International Nuclear Information System (INIS)
Kunze, J.F.; Loyalka, S.K.; McKibben, J.C.; Hultsch, R.; Oladiran, O.
1990-01-01
The RELAP code has been tested with benchmark experiments (such as the loss-of-fluid test experiments at the Idaho National Engineering Laboratory) at high pressures and temperatures characteristic of those encountered in loss-of-coolant accidents (LOCAs) in commercial light water power reactors. Application of RELAP to the LOCA analysis of a low pressure (< 7 atm) and low temperature (< 100 degree C), plate-type research reactor, such as the University of Missouri Research Reactor (MURR), the high-flux breeder reactor, high-flux isotope reactor, and Advanced Test Reactor, requires resolution of questions involving overextrapolation to very low pressures and low temperatures, and calculations of the pulsed boiling/reflood conditions in the narrow rectangular cross-section channels (typically 2 mm thick) of the plate fuel elements. The practical concern of this problem is that plate fuel temperatures predicted by RELAP5 (MOD2, version 3) during the pulsed boiling period can reach high enough temperatures to cause plate (clad) weakening, though not melting. Since an experimental benchmark of RELAP under such LOCA conditions is not available and since such conditions present substantial challenges to the code, it is important to verify the code predictions. The comparison of the pulsed boiling experiments with the RELAP calculations involves both visual observations of void fraction versus time and measurements of temperatures near the fuel plate surface
DEFF Research Database (Denmark)
Vigeant, Michelle; Wang, Lily M.; Rindel, Jens Holger
2008-01-01
Room acoustics computer modeling is a tool for generating impulse responses and auralizations from modeled spaces. The auralizations are commonly made from a single-channel anechoic recording of solo instruments. For this investigation, auralizations of an entire orchestra were created using a multi-channel multi-source auralization technique, involving individual five-channel anechoic recordings of each instrumental part of two symphonies. In the first study, these auralizations were subjectively compared to orchestra auralizations made using (a) a single omni-directional source, (b) a surface source, and (c) a single-channel multi-source method. Results show that the multi-source auralizations were rated to be more realistic than the surface source ones and to have larger source width than the single omni-directional source auralizations. No significant differences were found between...
Simonaitis, Linas; McDonald, Clement J
2009-10-01
The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the inpatient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
Development and assessment of a sub-channel code applicable for trans-critical transient of SCWR
International Nuclear Information System (INIS)
Liu, X.J.; Yang, T.; Cheng, X.
2013-01-01
Highlights: • A new sub-channel code COBRA-SC for SCWR is developed. • A pseudo two-phase method is employed to enable trans-critical transient calculation. • Good suitability of COBRA-SC is demonstrated by preliminary assessment. • The calculation results of COBRA-SC agree well with the ATHLET code. -- Abstract: In the last few years, extensive R&D activities have been launched covering various aspects of the supercritical water-cooled reactor (SCWR), especially thermal-hydraulic analysis. A sub-channel code plays an indispensable role in predicting the detailed thermal-hydraulic behavior of the SCWR fuel assembly. This paper develops a new version of the sub-channel code, COBRA-SC, based on the previous COBRA-IV code. Supercritical water properties and heat transfer/pressure drop correlations for supercritical pressure are implemented in this code. Moreover, in order to simulate the trans-critical transient (in which the pressure decreases from supercritical to subcritical), a pseudo two-phase method is employed in the COBRA-SC code. This is achieved by introducing a virtual two-phase region near the pseudo-critical line, so that a smooth transition of void fraction can be realized. In addition, several heat transfer correlations for conditions just below the critical point are introduced into the code to capture the heat transfer behavior during the trans-critical transient. Experimental data from simple geometries, e.g. single tubes and small rod bundles, are used to validate and evaluate the newly developed COBRA-SC code. The predicted results show good agreement with the experimental data, demonstrating the feasibility of this code for SCWR conditions. A code-to-code comparison between COBRA-SC and ATHLET for a blowdown transient of a small fuel assembly is also presented and discussed in this paper.
Neutron spallation source and the Dubna cascade code
Kumar, V; Goel, U; Barashenkov, V S
2003-01-01
Neutron multiplicity per incident proton, n/p, in collisions of a high-energy proton beam with voluminous Pb and W targets has been estimated with the Dubna cascade code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and to the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energies E < 1.2 GeV and differ by up to 25% at higher energies. (author)
International Nuclear Information System (INIS)
Bujan, A.; Adamik, V.; Misak, J.
1986-01-01
A brief description is presented of the extension of the SICHTA-83 computer code for the analysis of the thermal history of the fuel channel during large LOCAs by modelling the mechanical behaviour of fuel element cladding. The new version of the code treats heat transfer in the fuel-cladding gap in more detail, because it also accounts for the mechanical (plastic) deformations of the cladding and for the fuel-cladding interaction (the magnitude of the contact pressure). It also accounts for the change in pressure of the gas filling of the fuel element, considers a mechanical criterion for cladding failure, and evaluates the degree of blockage of the through-flow cross section for coolant flow in the fuel channel. A model computation of a LOCA in a WWER-440 provides a comparison of the new SICHTA-85/MOD 1 code with the results of the original 83 version of SICHTA. (author)
Measuring propagation delay over a coded serial communication channel using FPGAs
International Nuclear Information System (INIS)
Jansweijer, P.P.M.; Peek, H.Z.
2011-01-01
Measurement and control applications are increasingly using distributed system technologies. In such applications, which may be spread over large distances, it is often necessary to synchronize system timing and know with great precision the time offsets between parts of the system. Measuring the propagation delay over a coded serial communication channel using serializer/deserializer (SerDes) functionality in FPGAs is described. The propagation delay between transmitter and receiver is measured with a resolution of a single unit interval (i.e. a serial link running at 3.125 Gbps provides a 320 ps resolution). The technique has been demonstrated to work over 100 km fibre to verify the feasibility for application in the future KM3NeT telescope.
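The quoted 320 ps figure follows directly from the line rate, since the measurement resolution is one unit interval (UI) of the serial link:

```python
# Resolution of the delay measurement is one unit interval (UI) of the
# serial link: UI = 1 / line_rate.
line_rate = 3.125e9            # bits per second, as quoted in the abstract
ui_seconds = 1.0 / line_rate
ui_ps = ui_seconds * 1e12      # convert seconds to picoseconds
print(round(ui_ps))            # -> 320, matching the quoted 320 ps resolution
```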
L-type calcium channels refine the neural population code of sound level
Grimsley, Calum Alex; Green, David Brian
2016-01-01
The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536
Stars with shell energy sources. Part 1. Special evolutionary code
International Nuclear Information System (INIS)
Rozyczka, M.
1977-01-01
A new version of the Henyey-type stellar evolution code is described and tested. It is shown, as a by-product of the tests, that the thermal time scale of the core of a red giant approaching the helium flash is of the order of the evolutionary time scale. The code itself appears to be a very efficient tool for investigations of the helium flash, carbon flash and the evolution of a white dwarf accreting mass. (author)
On the equivalence of Ising models on ‘small-world’ networks and LDPC codes on channels with memory
International Nuclear Information System (INIS)
Neri, Izaak; Skantzos, Nikos S
2014-01-01
We demonstrate the equivalence between thermodynamic observables of Ising spin-glass models on small-world lattices and the decoding properties of error-correcting low-density parity-check codes on channels with memory. In particular, the self-consistent equations for the effective field distributions in the spin-glass model within the replica symmetric ansatz are equivalent to the density evolution equations for Gilbert–Elliott channels. This relationship allows us to present a belief-propagation decoding algorithm for finite-state Markov channels and to compute its performance at infinite block lengths from the density evolution equations. We show that loss of reliable communication corresponds to a first-order phase transition from a ferromagnetic phase to a paramagnetic phase in the spin-glass model. The critical noise levels derived for Gilbert–Elliott channels are in very good agreement with existing results in coding theory. Furthermore, we use our analysis to derive critical noise levels for channels with both memory and asymmetry in the noise. The resulting phase diagram shows that the combination of asymmetry and memory in the channel allows for high critical noise levels: in particular, we show that successful decoding is possible at any noise level of the bad channel when the good channel is good enough. Theoretical results at infinite block lengths using density evolution equations are compared with average error probabilities calculated from a practical implementation of the corresponding decoding algorithms at finite block lengths. (paper)
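The Gilbert–Elliott channel referred to above is a two-state Markov chain with a different bit-flip probability in each state. A minimal sketch (parameter names and values are illustrative, not taken from the paper):

```python
import random

# Gilbert-Elliott channel: "good" state flips bits with probability p_good,
# "bad" state with p_bad; state transitions follow a two-state Markov chain.
def gilbert_elliott(bits, p_gb, p_bg, p_good, p_bad, seed=0):
    rng = random.Random(seed)
    state = "good"
    out = []
    for b in bits:
        flip = rng.random() < (p_good if state == "good" else p_bad)
        out.append(b ^ int(flip))
        # Markov state transition after each bit
        if state == "good" and rng.random() < p_gb:
            state = "bad"
        elif state == "bad" and rng.random() < p_bg:
            state = "good"
    return out

tx = [0] * 10000
rx = gilbert_elliott(tx, p_gb=0.01, p_bg=0.1, p_good=0.001, p_bad=0.2)
errors = sum(rx)
# Errors cluster in bad-state bursts; the average error rate sits between
# p_good and p_bad, weighted by the stationary state probabilities.
```

The stationary probability of the bad state here is p_gb/(p_gb + p_bg) ≈ 0.09, so the long-run error rate is roughly 0.09 · 0.2 + 0.91 · 0.001 ≈ 2%.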
International Nuclear Information System (INIS)
Bilanovic, Z.; McCracken, D.R.
1994-12-01
In order to assess irradiation-induced corrosion effects, coolant radiolysis and the degradation of the physical properties of reactor materials and components, it is necessary to determine the neutron, photon, and electron energy deposition profiles in the fuel channels of the reactor core. At present, several different computer codes must be used to do this. The most recent, advanced and versatile of these is the latest version of MCNP, which may be capable of replacing all the others. Different codes have different assumptions and different restrictions on the way they can model the core physics and geometry. This report presents the results of ANISN and MCNP models of neutron and photon energy deposition. The results validate the use of MCNP for simplified geometrical modelling of energy deposition by neutrons and photons in the complex geometry of the CANDU reactor fuel channel. Discrete ordinates codes such as ANISN were the benchmark codes used in previous work. The results of calculations using various models are presented, and they show very good agreement for fast-neutron energy deposition. In the case of photon energy deposition, however, some modifications to the modelling procedures had to be incorporated. Problems with the use of reflective boundaries were solved by either including the eight surrounding fuel channels in the model, or using a boundary source at the bounding surface of the problem. Once these modifications were incorporated, consistent results between the computer codes were achieved. Historically, simple annular representations of the core were used, because of the difficulty of doing detailed modelling with older codes. It is demonstrated that modelling by MCNP, using more accurate and more detailed geometry, gives significantly different and improved results. (author). 9 refs., 12 tabs., 20 figs
DEFF Research Database (Denmark)
Cavalcante, Lucas Costa Pereira; Silveira, Luiz F. Q.; Rommel, Simon
2016-01-01
Millimeter wave communications based on photonic technologies have gained increased attention to provide optic fiber-like capacity in wireless environments. However, the new hybrid fiber-wireless channel represents new challenges in terms of signal transmission performance analysis. Traditionally, such systems use diversity schemes in combination with digital signal processing (DSP) techniques to overcome effects such as fading and inter-symbol interference (ISI). Wavelet Channel Coding (WCC) has emerged as a technique to minimize the fading effects of wireless channels, which is a major challenge in systems operating in the millimeter wave regime. This work takes WCC one step beyond by evaluating its performance in terms of bit error probability over time-varying, frequency-selective multipath Rayleigh fading channels. The adopted propagation model follows the COST207 norm, the main international...
Worst configurations (instantons) for compressed sensing over reals: a channel coding approach
International Nuclear Information System (INIS)
Chertkov, Michael; Chilappagari, Shashi K.; Vasic, Bane
2010-01-01
We consider the Linear Programming (LP) solution of a Compressed Sensing (CS) problem over the reals, also known as the Basis Pursuit (BasP) algorithm. BasP allows interpretation as a channel-coding problem, and it guarantees error-free reconstruction over the reals for a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error patterns, so-called instantons, degrading the performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. BasP fails when its output is different from the actual error pattern. We design a CS-Instanton Search Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that BasP fails on the instanton, while its action on any modification of the CS-instanton that decreases a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error vector, the CS-ISA converges to an instanton in a small finite number of steps. The performance of the CS-ISA is tested on the example of a randomly generated 512 × 120 matrix, which yields a shortest instanton (error-vector) pattern of length 11.
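The Basis Pursuit step that the instanton analysis builds on is a standard LP: minimize ||x||_1 subject to Ax = y, cast over [x; u] with −u ≤ x ≤ u. A sketch with scipy on toy dimensions (not the 512 × 120 experiment; the instanton search itself is not reproduced):

```python
import numpy as np
from scipy.optimize import linprog

# Basis Pursuit as an LP: minimize sum(u) over variables [x; u],
# subject to A x = y, x - u <= 0, -x - u <= 0.
rng = np.random.default_rng(0)
m, n, k = 20, 40, 3                  # measurements, dimension, sparsity (toy)
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

c = np.concatenate([np.zeros(n), np.ones(n)])        # objective: sum(u)
A_eq = np.hstack([A, np.zeros((m, n))])              # A x = y
I = np.eye(n)
A_ub = np.vstack([np.hstack([I, -I]),                #  x - u <= 0
                  np.hstack([-I, -I])])              # -x - u <= 0
b_ub = np.zeros(2 * n)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
# x0 is feasible for this LP, so the optimal l1 norm never exceeds ||x0||_1.
```

BasP "fails" in the paper's sense exactly when x_hat differs from the planted sparse vector; the ISA searches for the sparsest such failures.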
Directory of Open Access Journals (Sweden)
Buzzi Stefano
2006-01-01
Full Text Available The problem of joint channel estimation, equalization, and multiuser detection for a multiantenna DS/CDMA system operating over a frequency-selective fading channel and adopting long aperiodic spreading codes is considered in this paper. First of all, we present several channel estimation and multiuser data detection schemes suited for multiantenna long-code DS/CDMA systems. Then, a multipass strategy, wherein the data detection and the channel estimation procedures exchange information in a recursive fashion, is introduced and analyzed for the proposed scenario. Remarkably, this strategy provides, at the price of some attendant increase in computational complexity, excellent performance even when very short training sequences are transmitted, and thus couples together the conflicting advantages of both trained and blind systems, that is, good performance and no wasted bandwidth, respectively. Space-time coded systems are also considered, and it is shown that the multipass strategy provides excellent results for such systems as well. Likewise, it is shown that excellent performance is achieved when each user adopts the same spreading code for all of its transmit antennas. The validity of the proposed procedure is corroborated by both simulation results and analytical findings. In particular, it is shown that adopting the multipass strategy results in a remarkable reduction of the channel estimation mean-square error and of the optimal length of the training sequence.
Młynarski, Wiktor
2015-01-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments
Kermek, Dragutin; Novak, Matija
2016-01-01
In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…
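Detectors of this family ultimately compare token streams. A toy n-gram Jaccard similarity shows why renaming a variable barely lowers the score; real tools such as Moss use winnowing/fingerprinting, so this is only a conceptual sketch:

```python
# Toy token n-gram similarity in the spirit of source-code plagiarism
# detectors (illustrative only; not the algorithm of Sherlock/JPlag/Moss).
def ngrams(tokens, n=3):
    # Set of all length-n token windows
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity(code_a, code_b, n=3):
    # Jaccard index over the two n-gram sets
    a, b = ngrams(code_a.split(), n), ngrams(code_b.split(), n)
    return len(a & b) / len(a | b) if a | b else 0.0

orig = "int sum = 0 ; for ( int i = 0 ; i < n ; i ++ ) sum += a [ i ] ;"
copy = "int total = 0 ; for ( int i = 0 ; i < n ; i ++ ) total += a [ i ] ;"
score = similarity(orig, copy)
# Renaming one identifier changes only the few 3-grams containing it,
# so the similarity score stays high.
```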
OSSMETER D3.2 – Report on Source Code Activity Metrics
J.J. Vinju (Jurgen); A. Shahi (Ashim)
2014-01-01
This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and initial prototypes of the tools that are needed for source code activity analysis. It builds upon Deliverable 3.1, where infrastructure and a domain analysis have been
A Proposed Chaotic-Switched Turbo Coding Design and Its Application for Half-Duplex Relay Channel
Directory of Open Access Journals (Sweden)
Tamer H. M. Soliman
2015-01-01
Full Text Available Both reliability and security are important subjects in modern digital communications, each with a variety of subdisciplines. In this paper we introduce a new proposed secure turbo coding system which combines chaotic dynamics and turbo coding reliability. As we utilize chaotic maps as a tool for hiding and securing the coding design in a turbo coding system, the proposed system model can provide both data secrecy and data reliability in one process, to combat problems in an insecure and unreliable data channel link. To support our research, we provide different schemes for designing a chaotic, secure, reliable turbo coding system, which we call chaotic-switched turbo coding schemes. In these schemes, the design of the turbo code is changed chaotically depending on one or more chaotic maps. Extensions of these chaotic-switched turbo coding schemes to half-duplex relay systems are also described. Results of simulations of these new secure turbo coding schemes are compared to classical turbo codes with the same coding parameters, and the proposed system achieves reasonable bit error rate performance when it is made to switch between different puncturing and design configuration parameters, especially at low switching rates.
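The switching mechanism can be pictured as a chaotic map driving a binary selector between coding configurations. A minimal logistic-map sketch; the map parameter, seed, and threshold are illustrative assumptions, not the paper's design:

```python
# Chaotic switch sketch: a logistic-map orbit thresholded to a binary
# sequence that could select between two (hypothetical) puncturing
# patterns of a turbo code. Only the selector is sketched here.
def logistic(x, r=3.99):
    # Logistic map in its chaotic regime (r close to 4)
    return r * x * (1 - x)

x = 0.3141                      # shared secret seed (illustrative)
pattern_choice = []
for _ in range(16):
    x = logistic(x)
    pattern_choice.append(int(x > 0.5))   # threshold orbit to a switch bit
# Without the seed, the switch sequence is hard to predict, which is the
# secrecy ingredient; reliability still comes from the turbo code itself.
```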
International Nuclear Information System (INIS)
Xia, Yan; Song, He-Shan
2007-01-01
We present a controlled quantum secure direct communication protocol that uses a 2-dimensional Greenberger-Horne-Zeilinger (GHZ) entangled state and a 3-dimensional Bell-basis state and employs the high-dimensional quantum superdense coding, local collective unitary operations and entanglement swapping. The proposed protocol is secure and of high source capacity. It can effectively protect the communication against a destroying-travel-qubit-type attack. With this protocol, the information transmission is greatly increased. This protocol can also be modified, so that it can be used in a multi-party control system
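The superdense-coding primitive the protocol builds on can be checked numerically in the standard two-qubit case (the protocol above uses a higher-dimensional variant, which is not reproduced here):

```python
import numpy as np

# Textbook qubit superdense coding: Alice's local Pauli on her half of a
# shared Bell pair encodes two classical bits; Bob's Bell measurement
# recovers them deterministically.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # |Phi+>
encodings = {(0, 0): I2, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}

# Bell basis vectors, keyed by the two bits they decode to.
basis = {
    (0, 0): np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2),
    (0, 1): np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2),
    (1, 0): np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2),
    (1, 1): np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2),
}

def send(bits):
    state = np.kron(encodings[bits], I2) @ bell     # Alice acts on qubit 1
    # Bob's Bell measurement: one outcome has probability 1.
    probs = {b: abs(np.vdot(v, state)) ** 2 for b, v in basis.items()}
    return max(probs, key=probs.get)

decoded = [send(b) for b in encodings]   # all four messages round-trip
```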
International Nuclear Information System (INIS)
Taleyarkhan, R.; Lahey, R.T. Jr.; McFarlane, A.F.; Podowski, M.Z.
1988-01-01
The NUFREQ-NP code was modified and set up at Westinghouse, USA, for mixed-fuel-type multi-channel core-wide stability analysis. The resulting code, NUFREQ-NPW, allows for variable axial power profiles between channel groups and can handle mixed fuel types. The various models incorporated into NUFREQ-NPW were systematically compared against the Westinghouse channel stability analysis code MAZDA-NF, whose mathematical model was developed in an entirely different manner. Excellent agreement was obtained, which verified the thermal-hydraulic modeling and coding aspects. Detailed comparisons were also performed against nuclear-coupled reactor core stability data. All thirteen Peach Bottom-2 EOC-2/3 low-flow stability tests were simulated. A key aspect of the code qualification involved the development of a physically based empirical algorithm to correct for the effect of core inlet flow development on subcooled boiling. Various other modeling assumptions were tested and sensitivity studies performed. Good agreement was obtained between NUFREQ-NPW predictions and data; moreover, the predictions were generally on the conservative side. The results of detailed direct comparisons with experimental data using the NUFREQ-NPW code have demonstrated that BWR core stability margins are conservatively predicted and that all data trends are captured with good accuracy. The methodology is thus suitable for BWR design and licensing purposes. 11 refs., 12 figs., 2 tabs
Directory of Open Access Journals (Sweden)
Markku Renfors
2007-12-01
Full Text Available The ever-increasing public interest in location and positioning services has originated a demand for higher performance global navigation satellite systems (GNSSs. In order to achieve this incremental performance, the estimation of line-of-sight (LOS delay with high accuracy is a prerequisite for all GNSSs. The delay lock loops (DLLs and their enhanced variants (i.e., feedback code tracking loops are the structures of choice for the commercial GNSS receivers, but their performance in severe multipath scenarios is still rather limited. In addition, the new satellite positioning system proposals specify the use of a new modulation, the binary offset carrier (BOC modulation, which triggers a new challenge in the code tracking stage. Therefore, in order to meet this emerging challenge and to improve the accuracy of the delay estimation in severe multipath scenarios, this paper analyzes feedback as well as feedforward code tracking algorithms and proposes the peak tracking (PT methods, which are combinations of both feedback and feedforward structures and utilize the inherent advantages of both structures. We propose and analyze here two variants of PT algorithm: PT with second-order differentiation (Diff2, and PT with Teager Kaiser (TK operator, which will be denoted herein as PT(Diff2 and PT(TK, respectively. In addition to the proposal of the PT methods, the authors propose also an improved early-late-slope (IELS multipath elimination technique which is shown to provide very good mean-time-to-lose-lock (MTLL performance. An implementation of a noncoherent multipath estimating delay locked loop (MEDLL structure is also presented. We also incorporate here an extensive review of the existing feedback and feedforward delay estimation algorithms for direct sequence code division multiple access (DS-CDMA signals in satellite fading channels, by taking into account the impact of binary phase shift keying (BPSK as well as the newly proposed BOC modulation
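The Teager–Kaiser operator mentioned above is simple to state, psi[n] = x[n]^2 − x[n−1]·x[n+1], and its peak-sharpening effect on an ideal BPSK correlation triangle can be verified directly:

```python
# Teager-Kaiser (TK) operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
# Applied to a correlation-shaped pulse it sharpens the main peak, which
# is why it is useful in feedforward code-phase (delay) estimation.
def teager_kaiser(x):
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

# Triangular autocorrelation of a rectangular chip (ideal BPSK case).
corr = [0.0, 0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25, 0.0]
tk = teager_kaiser(corr)
peak_index = max(range(len(tk)), key=lambda i: tk[i])
# The TK output is flat on the triangle's linear flanks and spikes only at
# the apex, so the peak (tk index 3, i.e. corr[4]) is easier to locate.
```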
Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems
2010-12-01
... technical competence for the type of tests and calibrations SCALe undertakes. Testing and calibration laboratories that comply with ISO/IEC 17025 ... [ISO/IEC 2005]. ... of a software system indicates that the SCALe analysis ... by a CERT secure coding standard. Successful conformance ... to be more secure than non-conforming systems. However, no study has yet been performed to ... assessment in accordance with ISO/IEC 17000: "a demonstration ...
Maja Valles, Mars: A Multi-Source Fluvio-Volcanic Outflow Channel System
Keske, A.; Christensen, P. R.
2017-12-01
The resemblance of martian outflow channels to the channeled scablands of the Pacific Northwest has led to general consensus that they were eroded by large-scale flooding. However, the observation that many of these channels are coated in lava issuing from the same source as the water has motivated the alternative hypothesis that the channels were carved by fluid, turbulent lava. Maja Valles is a circum-Chryse outflow channel whose origin was placed in the late Hesperian by Baker and Kochel (1979), with more recent studies of crater density variations suggesting that its formation history involved multiple resurfacing events (Chapman et al., 2003). In this study, we have found that while Maja Valles indeed hosts a suite of standard fluvial landforms, its northern portion is thinly coated with lava that has buried much of the older channel landforms and overprinted them with effusive flow features, such as polygons and bathtub rings. Adjacent to crater pedestals and streamlined islands are patches of dark, relatively pristine material pooled in local topographic lows, which we interpret as ponds of lava remaining from one or more fluid lava flows that flooded the channel system and subsequently drained, leaving marks of the local lava high stand. Despite the presence of fluvial landforms throughout the valles, lava flow features exist only in the northern reaches of the system, 500-1200 km from the channels' source. The flows can instead be traced to a collection of vents in Lunae Planum, west of the valles. In previously studied fluvio-volcanic outflow systems, such as Athabasca Valles, the sources of the volcanic activity and fluvial activity have been indistinguishable. In contrast, Maja Valles features numerous fluvio-volcanic landforms bearing similarity to those identified in other channel systems, yet the source of its lava flows is distinct from the source of its channels. Furthermore, in the absence of any channels between the source of the lava
Directory of Open Access Journals (Sweden)
Simoens Frederik
2006-01-01
Full Text Available This paper concerns channel tracking in a multiantenna context for correlated flat-fading channels obeying a Gauss-Markov model. It is known that data-aided tracking of fast-fading channels requires a lot of pilot symbols in order to achieve sufficient accuracy, and hence decreases the spectral efficiency. To overcome this problem, we design a code-aided estimation scheme which exploits information from both the pilot symbols and the unknown coded data symbols. The algorithm is derived based on a factor graph representation of the system and application of the sum-product algorithm. The sum-product algorithm reveals how soft information from the decoder should be exploited for the purpose of estimation and how the information bits can be detected. Simulation results illustrate the effectiveness of our approach.
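For a scalar Gauss–Markov channel, the pilot-only baseline that code-aided schemes improve upon is a standard Kalman tracker. A sketch under assumed AR(1) parameters; no soft decoder information is used here, so this is the trained baseline, not the paper's sum-product scheme:

```python
import numpy as np

# Pilot-only Kalman tracking of a scalar Gauss-Markov fading channel:
# h[k] = a*h[k-1] + w[k], observed as obs[k] = h[k] + v[k].
rng = np.random.default_rng(1)
a, r, N = 0.99, 0.1, 2000
q = 1 - a ** 2                 # keeps the channel process at unit power

h = np.zeros(N)
for k in range(1, N):
    h[k] = a * h[k - 1] + np.sqrt(q) * rng.standard_normal()
obs = h + np.sqrt(r) * rng.standard_normal(N)   # noisy pilot observations

h_hat, P = 0.0, 1.0
est = np.zeros(N)
for k in range(N):
    h_hat, P = a * h_hat, a * a * P + q         # predict
    K = P / (P + r)                             # Kalman gain
    h_hat, P = h_hat + K * (obs[k] - h_hat), (1 - K) * P   # update
    est[k] = h_hat

mse_raw = float(np.mean((obs - h) ** 2))   # use pilots directly as estimates
mse_kf = float(np.mean((est - h) ** 2))    # Kalman-tracked estimates
# The tracker exploits the Gauss-Markov time correlation and beats the
# raw observations; code-aided schemes go further by also using data symbols.
```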
On the performance of diagonal lattice space-time codes for the quasi-static MIMO channel
Abediseid, Walid
2013-06-01
There has been tremendous work done on designing space-time codes for the quasi-static multiple-input multiple-output (MIMO) channel. Code designs to date focus on high performance, high rates, low-complexity encoding and decoding, or a combination of these criteria. In this paper, we analyze in detail the performance of diagonal lattice space-time codes under lattice decoding. We present both upper and lower bounds on the average error probability. We derive a new closed-form expression for the lower bound using the so-called sphere-packing bound. This bound represents the ultimate performance limit a diagonal lattice space-time code can achieve at any signal-to-noise ratio (SNR). The upper bound is derived using the union bound and demonstrates how the average error probability can be minimized by maximizing the minimum product distance of the code. © 2013 IEEE.
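The role of the minimum product distance in the union bound can be made concrete with a small helper that evaluates it for a codebook of diagonal codewords; the codewords used below are plain illustrative tuples, not a construction from the paper:

```python
import itertools
import math

def min_product_distance(codewords):
    """Minimum product distance over all distinct codeword pairs whose
    components all differ (the full-diversity pairs that dominate the
    union-bound behavior of a diagonal space-time code)."""
    best = math.inf
    for x, y in itertools.combinations(codewords, 2):
        diffs = [abs(a - b) for a, b in zip(x, y)]
        if all(d > 0 for d in diffs):  # skip pairs lacking full diversity
            best = min(best, math.prod(diffs))
    return best
```

Maximizing this quantity over candidate lattice generators (for instance, rotations of the integer lattice) is what tightens the union bound discussed in the abstract.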
Open Genetic Code: on open source in the life sciences.
Deibel, Eric
2014-01-01
The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach to genetic engineering. The first section discusses the greater flexibility in regard to patenting and its relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary of the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question of whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life, understood as an attempt to comprehensively remove the restrictions from the usage of DNA in any of its formats.
Blind cooperative diversity using distributed space-time coding in block fading channels
Tourki, Kamel
2010-08-01
Mobile users with single antennas can still take advantage of spatial diversity through cooperative space-time encoded transmission. In this paper, we consider a scheme in which a relay chooses to cooperate only if its source-relay channel is of an acceptable quality, and we evaluate the usefulness of relaying when the source acts blindly and ignores the decisions of the relays as to whether they will cooperate or not. In our study, we consider regenerative relays in which the decisions to cooperate are based on a signal-to-noise ratio (SNR) threshold, and we consider the impact of possibly erroneously detected and transmitted data at the relays. We derive the end-to-end bit-error rate (BER) expression and its approximation for binary phase-shift keying modulation, and we examine two power allocation strategies between the source and the relays that minimize the end-to-end BER at the destination at high SNR. Selected performance results show that computer simulation results coincide well with our analytical results. © 2010 IEEE.
Abediseid, Walid
2012-12-21
The exact average complexity analysis of the basic sphere decoder for general space-time codes applied to the multiple-input multiple-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, lattice space-time (LAST) coded MIMO channel. Specifically, we derive an upper bound on the tail distribution of the decoder's computational complexity. We show that when the computational complexity exceeds a certain limit, this upper bound becomes dominated by the outage probability achieved by LAST coding and sphere decoding schemes. We then calculate the minimum average computational complexity that is required by the decoder to achieve near-optimal performance in terms of the system parameters. Our results indicate that there exists a cut-off rate (multiplexing gain) for which the average complexity remains bounded. Copyright © 2012 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Taleyarkhan, R.; McFarlane, A.F.; Lahey, R.T. Jr.; Podowski, M.Z.
1988-01-01
The work described in this paper is focused on the development, verification and benchmarking of the NUFREQ-NPW code at Westinghouse, USA for best-estimate prediction of multi-channel core stability margins in US BWRs. Various models incorporated into NUFREQ-NPW are systematically compared against the Westinghouse channel stability analysis code MAZDA, whose mathematical model was developed in an entirely different manner. The NUFREQ-NPW code is extensively benchmarked against experimental stability data with and without nuclear reactivity feedback. Detailed comparisons are next performed against nuclear-coupled core stability data. A physically based algorithm is developed to correct for the effect of flow development on subcooled boiling. Use of this algorithm (to be described in the full paper) captures the peak magnitude as well as the resonance frequency with good accuracy.
Multi-Channel Tunable Source for Atomic Sensors, Phase II
National Aeronautics and Space Administration — This Phase II SBIR will seek to develop a prototype laser source suitable for atomic interferometry from compact, robust, integrated components. AdvR's design is...
Energy Technology Data Exchange (ETDEWEB)
Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL
2014-01-01
At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps out around 50 μm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of its modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model into our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
Kim, Daehee; Kim, Dongwan; An, Sunshin
2016-07-09
Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption.
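The paper's three schemes are not reproduced here, but the flavor of lightweight source authentication in WSNs can be sketched with a μTESLA-style one-way hash chain, where a single public commitment lets receivers verify keys released later; the function names and parameters below are illustrative, not the paper's protocol:

```python
import hashlib

def make_chain(seed: bytes, n: int):
    """Build a one-way hash chain; element 0 is the public commitment K_0
    and element i is the key K_i released during interval i."""
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()  # chain[0] = H^n(seed) = K_0, the commitment
    return chain

def verify_key(commitment: bytes, key: bytes, interval: int) -> bool:
    """A receiver holding only K_0 authenticates a released key K_i by
    hashing it back i times to the commitment."""
    h = key
    for _ in range(interval):
        h = hashlib.sha256(h).digest()
    return h == commitment
```

Because verification costs only a few hash invocations, such constructions suit energy-constrained sensor nodes far better than per-packet digital signatures.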
Building guide : how to build Xyce from source code.
Energy Technology Data Exchange (ETDEWEB)
Keiter, Eric Richard; Russo, Thomas V.; Schiek, Richard Louis; Sholander, Peter E.; Thornquist, Heidi K.; Mei, Ting; Verley, Jason C.
2013-08-01
While Xyce uses the Autoconf and Automake system to configure builds, it is often necessary to perform more than the customary “./configure” builds many open source users have come to expect. This document describes the steps needed to get Xyce built on a number of common platforms.
Code of conduct on the safety and security of radioactive sources
Energy Technology Data Exchange (ETDEWEB)
NONE
2001-03-01
The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.
Natarajan Meghanathan
2013-01-01
The high-level contribution of this paper is to illustrate the development of generic solution strategies to remove software security vulnerabilities that can be identified using automated tools for source code analysis of software programs (developed in Java). We use the Source Code Analyzer and Audit Workbench automated tools, developed by HP Fortify Inc., for our testing purposes. We present case studies involving a file writer program embedded with features for password validation, and ...
Code of conduct on the safety and security of radioactive sources
International Nuclear Information System (INIS)
2001-03-01
The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.
Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN
Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.
2013-12-01
Open-source software development has become increasingly popular in recent years. Open source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e., Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e., extensible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third
Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks
DEFF Research Database (Denmark)
Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt
2014-01-01
In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node's measurement and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly…
Test of Effective Solid Angle code for the efficiency calculation of volume source
Energy Technology Data Exchange (ETDEWEB)
Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-10-15
It is hard to determine a full-energy (FE) absorption peak efficiency curve for an arbitrary volume source by experiment. For this reason, simulation and semi-empirical methods have been preferred, and much work has progressed in various ways. Moens et al. introduced the concept of the effective solid angle by considering the attenuation of γ-rays in the source, media and detector; this concept underlies a semi-empirical method. An Effective Solid Angle code (ESA code) has been developed over several years by the Applied Nuclear Physics Group at Seoul National University. The ESA code converts an experimental FE efficiency curve determined using a standard point source into one for a volume source. To test the performance of the ESA code, we measured point standard sources and voluminous certified reference material (CRM) γ-ray sources, and compared them with the efficiency curves obtained in this study. The 200-1500 keV energy region is fitted well. NIST X-ray mass attenuation coefficient data are currently used to check the effect of linear attenuation only. We will use interaction cross-section data obtained from the XCOM code to check each contributing factor, such as the photoelectric effect, incoherent scattering and coherent scattering, in the future. To minimize the calculation time and simplify the code, optimization of the algorithm is needed.
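The attenuation effect that the effective solid angle concept accounts for can be illustrated with a toy self-attenuation average for a uniform slab source; this is a one-dimensional sketch only, and the ESA code's full source/detector geometry is not modeled here:

```python
import math

def slab_self_attenuation(mu, thickness, n=1000):
    """Average self-attenuation factor exp(-mu*x) over the depth of a
    uniform slab source viewed face-on, computed by the midpoint rule.
    Analytically this equals (1 - exp(-mu*t)) / (mu*t)."""
    dx = thickness / n
    total = sum(math.exp(-mu * (i + 0.5) * dx) for i in range(n))
    return total * dx / thickness
```

Here mu plays the role of the linear attenuation coefficient taken from tabulated mass attenuation data; the FE efficiency for the volume source is reduced by such a factor relative to an unattenuated point source.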
Source and Channel Choices in Business-to-Government Service Interactions: A Vignette Study
van den Boer, Yvon; Pieterson, Willem Jan; Arendsen, R.; de Groot, Manon; Janssen, Marijn; Scholl, Hans Jochen; Wimmer, Maria A.; Bannister, Frank
2014-01-01
To deal with tax matters, businesses have various potential sources (e.g., Tax Office, advisor, industry organization, friends/family) in their environment. Those sources can be coupled with an increasingly wide variety of channels (e.g., telephone, face-to-face, website, e-mail) through which
Comparison of different source calculations in two-nucleon channel at large quark mass
Yamazaki, Takeshi; Ishikawa, Ken-ichi; Kuramashi, Yoshinobu
2018-03-01
We investigate a systematic error coming from higher excited state contributions in the energy shift of a light nucleus in the two-nucleon channel by comparing two different source calculations with the exponential and wall sources. Since it is hard to obtain a clear signal for the wall-source correlation function in the plateau region, we employ a large quark mass, such that the pion mass is 0.8 GeV, in quenched QCD. We discuss the systematic error in the spin-triplet channel of the two-nucleon system, and the volume dependence of the energy shift.
International Nuclear Information System (INIS)
Miedl, H.
1998-01-01
Following the relevant technical standards (e.g. IEC 880), it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost, a tool should be used that is developed independently of the development of the code generator. For this purpose, ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)
Use of source term code package in the ELEBRA MX-850 system
International Nuclear Information System (INIS)
Guimaraes, A.C.F.; Goes, A.G.A.
1988-12-01
The implementation of the source term code package in the ELEBRA MX-850 system is presented. The source term is formed when radioactive materials generated in the nuclear fuel leak toward the containment and the environment external to the reactor containment. The version implemented in the ELEBRA system is composed of five codes: MARCH 3, TRAPMELT 3, THCCA, VANESA and NAVA. The original example case was used; it consists of a small LOCA in a PWR-type reactor. A sensitivity study for the TRAPMELT 3 code was carried out, modifying the 'TIME STEP' to estimate the CPU processing time for executing the original example case. (M.C.K.) [pt
Eu-NORSEWInD - Assessment of Viability of Open Source CFD Code for the Wind Industry
DEFF Research Database (Denmark)
Stickland, Matt; Scanlon, Tom; Fabre, Sylvie
2009-01-01
Part of the overall NORSEWInD project is the use of LiDAR remote sensing (RS) systems mounted on offshore platforms to measure wind velocity profiles at a number of locations offshore. The data acquired from the offshore RS measurements will be fed into a large and novel wind speed dataset suitab...... between the results of simulations created by the commercial code FLUENT and the open source code OpenFOAM. An assessment of the ease with which the open source code can be used is also included....
Single-channel source separation using non-negative matrix factorization
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard
-determined and its solution relies on making appropriate assumptions concerning the sources. This dissertation is concerned with model-based probabilistic single-channel source separation based on non-negative matrix factorization, and consists of two parts: i) three introductory chapters and ii) five published papers. The first part introduces the single-channel source separation problem as well as non-negative matrix factorization and provides a comprehensive review of existing approaches, applications, and practical algorithms. This serves to provide context for the second part, the published papers, in which a number of methods for single-channel source separation based on non-negative matrix factorization are presented. In the papers, the methods are applied to separating audio signals such as speech and musical instruments and separating different types of tissue in chemical shift imaging.
Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.
Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile
2016-01-01
This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues by building full-text search queries from combinations of these words. The queries are then run against all the ICD-10 codes until the code in question emerges as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
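The evaluation procedure just described can be sketched in miniature: a naive word-overlap score stands in for a real full-text engine's relevance scoring, and word combinations of increasing size are tried until the target code is the unique best match. The toy catalogue in the test is hypothetical, not real ICD-10 text:

```python
import itertools

def min_words_for_match(target_code, catalogue):
    """Smallest combination of the target description's words that makes
    the target the unique top-scoring entry under a naive word-overlap
    score (a stand-in for a real full-text engine's relevance score)."""
    words = catalogue[target_code].lower().split()
    for k in range(1, len(words) + 1):
        for query in itertools.combinations(words, k):
            scores = {code: sum(w in desc.lower().split() for w in query)
                      for code, desc in catalogue.items()}
            top = max(scores.values())
            winners = [c for c, s in scores.items() if s == top]
            if winners == [target_code]:
                return query
    return None
```

Descriptions that share many words with neighboring codes need more query words before the match becomes unique, which is exactly the quantity the study measures per engine.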
Code of conduct on the safety and security of radioactive sources
International Nuclear Information System (INIS)
2004-01-01
The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance on general issues, legislation and regulations, and regulatory bodies, as well as on the import and export of radioactive sources. A list of radioactive sources covered by the Code is provided, which includes the activities corresponding to the thresholds of the categories.
Code of conduct on the safety and security of radioactive sources
Energy Technology Data Exchange (ETDEWEB)
NONE
2004-01-01
The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories.
Positron energy distributions from a hybrid positron source based on channeling radiation
International Nuclear Information System (INIS)
Azadegan, B.; Mahdipour, A.; Dabagov, S.B.; Wagner, W.
2013-01-01
A hybrid positron source based on the generation of channeling radiation by relativistic electrons channeled along different crystallographic planes and axes of a tungsten single crystal, with subsequent conversion of the radiation into e+e− pairs in an amorphous tungsten target, is described. The photon spectra of channeling radiation are calculated using the Doyle-Turner approximation for the continuum potentials and the classical equations of motion for channeled particles to obtain their trajectories, velocities and accelerations. The spectral-angular distributions of channeling radiation are found by applying classical electrodynamics. Finally, the conversion of radiation into e+e− pairs and the energy distributions of positrons are simulated using the GEANT4 package.
Lysimeter data as input to performance assessment source term codes
International Nuclear Information System (INIS)
McConnell, J.W. Jr.; Rogers, R.D.; Sullivan, T.
1992-01-01
The Field Lysimeter Investigation: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-II prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on the survivability of waste forms in a disposal environment. In this paper, radionuclide releases from waste forms during the first seven years of sampling are presented and discussed. The application of lysimeter data to performance assessment source term models is presented. Initial results from the use of the data in two models are discussed.
SCATTER: Source and Transport of Emplaced Radionuclides: Code documentation
International Nuclear Information System (INIS)
Longsine, D.E.
1987-03-01
SCATTER simulates several processes leading to the release of radionuclides to the site subsystem and then simulates transport of the released radionuclides via the groundwater to the biosphere. The processes accounted for in quantifying release rates to a groundwater migration path include radioactive decay and production, leaching, solubilities, and the mixing of particles with incoming uncontaminated fluid. Several decay chains of arbitrary length can be considered simultaneously. The release rates then serve as source rates for a numerical technique which solves convective-dispersive transport for each decay chain. The decay chains are allowed to have branches, and each member can have a different retardation factor. Results are cast as radionuclide discharge rates to the accessible environment.
On locality of Generalized Reed-Muller codes over the broadcast erasure channel
Alloum, Amira; Lin, Sian Jheng; Al-Naffouri, Tareq Y.
2016-01-01
, and more specifically at the application layer, where rateless, LDPC, Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application layer packet
An efficient chaotic source coding scheme with variable-length blocks
International Nuclear Information System (INIS)
Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong
2011-01-01
An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)
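The correspondence between chaotic coding with a piecewise linear map and entropy coding can be sketched as follows, using ordinary floating point in place of the paper's finite-precision register arithmetic; the skew parameter p models the binary source's symbol probability, and short blocks keep rounding harmless in this illustration:

```python
def chaos_encode(symbols, p):
    """Shrink [0, 1) exactly as arithmetic coding does: symbol 0 selects
    the subinterval of relative length p, symbol 1 the remainder. Any
    point of the final interval serves as the initial condition."""
    lo, hi = 0.0, 1.0
    for s in symbols:
        mid = lo + (hi - lo) * p
        lo, hi = (lo, mid) if s == 0 else (mid, hi)
    return (lo + hi) / 2  # initial condition for the chaotic map

def chaos_decode(x, p, n):
    """Iterate the skewed piecewise linear map forward; the branch
    visited at each step reproduces one source symbol."""
    out = []
    for _ in range(n):
        if x < p:
            out.append(0)
            x = x / p                # expanding left branch
        else:
            out.append(1)
            x = (x - p) / (1.0 - p)  # expanding right branch
    return out
```

Matching p to the source's zero-probability makes frequent blocks occupy wide intervals, which is the mechanism by which the compression performance approaches that of optimal entropy coding.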
Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao
2017-01-01
Authorship attribution is the task of identifying the most likely author of a given sample among a set of known candidate authors. It can be applied not only to discover the original author of plain text, such as novels, blogs, emails and posts, but also to identify the programmers of source code. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to resolving authorship disputes and detecting software plagiarism. This paper proposes a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, whose weights are produced by the PSO and BP hybrid algorithm. The effectiveness of the proposed method is evaluated on a collected dataset of 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of Java source code illustrates that the proposed method outperforms the others overall, also with an acceptable overhead.
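A few lexical/layout metrics of the kind used as authorship features can be computed directly from source text; the metrics below are hypothetical stand-ins for illustration, not the paper's actual 19-dimensional feature definitions:

```python
def style_metrics(source: str):
    """A handful of lexical/layout measurements of the kind fed to an
    authorship classifier (hypothetical stand-ins, not the paper's set)."""
    lines = source.splitlines()
    nonblank = [l for l in lines if l.strip()]
    nb = max(len(nonblank), 1)          # avoid division by zero
    total = max(len(lines), 1)
    return {
        "avg_line_len": sum(map(len, nonblank)) / nb,
        "blank_ratio": 1.0 - len(nonblank) / total,
        "tab_indent_ratio": sum(l.startswith("\t") for l in nonblank) / nb,
        "brace_own_line_ratio": sum(l.strip() == "{" for l in lines) / total,
    }
```

Such vectors, one per source file, would then be the supervised-learning inputs whose classifier weights the PSO and BP hybrid algorithm optimizes.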
Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding
Directory of Open Access Journals (Sweden)
Haider M. AlSabbagh
2012-03-01
Full Text Available The minimum energy (ME) coding scheme combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) with respect to the number of users (receivers). Minimum energy coding exploits redundant bits to save power, utilizing an RF link and on-off keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel, in terms of the bit error rate and the SNR for a given number of users (receivers).
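The core idea of minimum energy coding, assigning low Hamming-weight codewords to frequent symbols so that on-off keying transmits fewer energy-costly '1's, can be sketched as follows (the symbol probabilities in the test are made up for illustration):

```python
from itertools import product

def me_codebook(symbol_probs, codeword_len):
    """Pair high-probability symbols with low Hamming-weight codewords
    so the expected number of transmitted '1's is minimized."""
    codewords = sorted(product((0, 1), repeat=codeword_len), key=sum)
    symbols = sorted(symbol_probs, key=symbol_probs.get, reverse=True)
    return dict(zip(symbols, codewords))

def expected_weight(codebook, symbol_probs):
    """Average number of '1's per codeword, i.e. relative OOK energy."""
    return sum(symbol_probs[s] * sum(c) for s, c in codebook.items())
```

Lengthening the codewords adds redundant bits but makes more low-weight codewords available, which is the power/bandwidth trade-off the abstract analyzes in the DS-CDMA setting.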
An analytical model for perpetual network codes in packet erasure channels
DEFF Research Database (Denmark)
Pahlevani, Peyman; Crisostomo, Sergio; Roetter, Daniel Enrique Lucani
2016-01-01
is highly dependent on a parameter called the width (ω), which represents the number of consecutive non-zero coding coefficients present in each coded packet after a pivot element. We provide a mathematical analysis based on the width of the coding vector for the number of transmitted packets and validate...
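Based only on the width definition quoted above (a pivot followed by ω consecutive non-zero coefficients), a perpetual coding vector might be sketched as follows; the field size and the wrap-around behaviour are assumptions for illustration, not details taken from the paper.

```python
import random

def perpetual_vector(n, pivot, width, q=256, rng=random):
    """Sketch of a perpetual-code coding vector over n packets: a unit
    pivot coefficient followed by `width` random non-zero coefficients
    (wrapping around the packet positions); all other entries are zero."""
    v = [0] * n
    v[pivot] = 1
    for k in range(1, width + 1):
        v[(pivot + k) % n] = rng.randrange(1, q)  # non-zero in GF(q)
    return v

# Example: 8 packets, pivot near the end, width 3 -> wraps to the front.
v = perpetual_vector(8, pivot=6, width=3, rng=random.Random(7))
```

The sparsity (width + 1 non-zero entries out of n) is what makes decoding cost scale with ω rather than with the generation size.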
Optimal coding-decoding for systems controlled via a communication channel
Yi-wei, Feng; Guo, Ge
2013-12-01
In this article, we study the problem of controlling plants over a signal-to-noise ratio (SNR) constrained communication channel. Unlike previous research, this article emphasises the importance of the actual channel model and coder/decoder in the study of network performance. Our major objectives include coder/decoder design for an additive white Gaussian noise (AWGN) channel with both the standard network configuration and the Youla parameter network architecture. We find that the optimal coder and decoder can be realised for the different network configurations. The results are useful in determining the minimum channel capacity needed to stabilise plants over communication channels. The coder/decoder obtained can be used to analyse the effect of uncertainty on the channel capacity. An illustrative example is provided to show the effectiveness of the results.
Martian outflow channels: How did their source aquifers form, and why did they drain so rapidly?
Rodriguez, J Alexis P; Kargel, Jeffrey S; Baker, Victor R; Gulick, Virginia C; Berman, Daniel C; Fairén, Alberto G; Linares, Rogelio; Zarroca, Mario; Yan, Jianguo; Miyamoto, Hideaki; Glines, Natalie
2015-09-08
Catastrophic floods generated ~3.2 Ga by rapid groundwater evacuation scoured the Solar System's most voluminous channels, the southern circum-Chryse outflow channels. Based on Viking Orbiter data analysis, it was hypothesized that these outflows emanated from a global Hesperian cryosphere-confined aquifer that was infused by south polar meltwater infiltration into the planet's upper crust. In this model, the outflow channels formed along zones of superlithostatic pressure generated by pronounced elevation differences around the Highland-Lowland Dichotomy Boundary. However, the restricted geographic location of the channels indicates that these conditions were not uniform. Furthermore, some outflow channel sources are too high to have been fed by south polar basal melting. Using more recent mission data, we argue that during the Late Noachian, fluvial and glacial sediments were deposited into a clastic wedge within a paleo-basin located in the southern circum-Chryse region, which at the time was completely submerged under a primordial northern plains ocean. Subsequent Late Hesperian outflow channels were sourced from within these geologic materials and formed by gigantic groundwater outbursts driven by an elevated hydraulic head from the Valles Marineris region. Thus, our findings link the formation of the southern circum-Chryse outflow channels to ancient marine, glacial, and fluvial erosion and sedimentation.
Experimental Study of a Positron Source Using Channeling
Gavrykov, V; Kulibaba, V; Baier, V; Beloborodov, K; Bojenok, A; Bukin, A; Burdin, S; Dimova, T; Druzhinin, V; Dubrovin, M; Seredniakov, S; Shary, V; Strakhovenko, V; Keppler, P; Major, J; Bogdanov, A V; Potylitsin, A; Vnoukov, I; Artru, X; Lautesse, P; Poizat, J-C; Remillieux, J
2002-01-01
Many simulations have predicted that the yield of positrons resulting from the interaction of fast electrons in a solid target increases if the target is a crystal oriented with a major axis parallel to the electron beam. Tests made at Orsay and Tokyo confirmed these expectations. The experiment WA 103 concerns the determination of the main characteristics (emittance, energy spread) of a crystal positron source which could advantageously replace the conventional positron converters foreseen in some linear collider projects. The main element of the set-up is a magnetic spectrometer, using a drift chamber, in which the positron trajectories are reconstructed (see Figure 1). A first run was carried out in July 2000, and the first results showed, as expected, a significant enhancement in photon and positron generation along the axis of the tungsten crystal. Indications of a significant increase in the number of soft photons and positrons were also gathered: this point is of importance for the positron colle...
International Nuclear Information System (INIS)
Gomes, Renato G.; Rebello, Wilson F.; Vellozo, Sergio O.; Moreira Junior, Luis; Vital, Helio C.; Rusin, Tiago; Silva, Ademir X.
2013-01-01
In order to evaluate new lines of research in the irradiation of materials outside the research irradiator of the Army Technology Center (CTEx), it is necessary to study safety parameters and the magnitude of the dose rates delivered through its leakage channels. The objective of this work was to calculate, with the MCNPX code, the dose rates (Gy/min) inside and outside the four leakage channels of the gamma irradiator. The channels were designed to leak radiation onto materials suitably placed in the area outside the irradiator, for volumes larger than that of the irradiation chambers (50 liters). This study assesses the magnitude of the dose rates within the channels and calculates the opening angle of the beam outside each channel, in order to analyse its spread and to evaluate safe working conditions for the operators (radiological protection). The computer simulation was performed by distributing virtual ferrous sulfate (Fricke) dosimeters along the longitudinal axes of the vertical leakage channels (anterior and posterior) and the horizontal ones (top and bottom). The results showed that the beams emerging from each of the channels are collimated, with dose rates of the order of tenths of Gy/min, compared with the maximum value inside the irradiator chamber (33 Gy/min). The external beams from the two vertical channels showed a truncated-pyramid-shaped, non-collimated (scattered) distribution, with opening angles of 83° in the longitudinal direction and 88° in the transverse direction. The study thus allowed the evaluation of materials for irradiation outside the irradiator in terms of dose-rate magnitude and material positioning, and of the care needed when mounting radiation shielding, so that operators avoid exposure to ionizing radiation. (author)
Fine-Grained Energy Modeling for the Source Code of a Mobile Application
DEFF Research Database (Denmark)
Li, Xueliang; Gallagher, John Patrick
2016-01-01
The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...
Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT
Energy Technology Data Exchange (ETDEWEB)
Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)
2016-11-01
Highlights: • Results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model in which only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum, as well as the neutron spectrum of all the neutrons emitted from the target, show general agreement of the results but also some noticeable differences. • A comparison of figures of merit of the calculations using different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than a factor of 30 when different codes for the simulation of the DT neutron generator are used. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capabilities to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (−4, −5, −X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper, the methods for the simulation of the DT neutron production used in the codes are briefly described and compared for the case of a
On locality of Generalized Reed-Muller codes over the broadcast erasure channel
Alloum, Amira
2016-07-28
One-to-many communications are expected to be among the killer applications for the currently discussed 5G standard. The use of coding mechanisms impacts broadcasting standard quality, as coding is involved at several levels of the stack, and more specifically at the application layer, where Rateless, LDPC and Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application-layer packet coding mechanisms based on previous schemes and designed for the foregoing LTE or other broadcasting standards, our purpose is to investigate the use of Generalized Reed-Muller codes and the value of their locality property in their progressive decoding for broadcast/multicast communication schemes with real-time video delivery. Our results are meant to bring insight into the use of locally decodable codes in broadcasting. © 2016 IEEE.
IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes
International Nuclear Information System (INIS)
Etienne, Zachariah B; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L
2015-01-01
In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores. (paper)
A simplified model of the source channel of the Leksell GammaKnife (registered) tested with PENELOPE
International Nuclear Information System (INIS)
Al-Dweri, Feras M O; Lallena, Antonio M; Vilches, Manuel
2004-01-01
Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife (registered). The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3 deg. with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x^2 + y^2)^{1/2} and their polar angle θ, on one side, and between tan^{-1}(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3%, for the 18 and 14 mm helmets, and 10%, for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor 15) in the computational time.
A simplified model of the source channel of the Leksell GammaKnife (registered) tested with PENELOPE
Energy Technology Data Exchange (ETDEWEB)
Al-Dweri, Feras M O [Departamento de Física Moderna, Universidad de Granada, E-18071 Granada (Spain); Lallena, Antonio M [Departamento de Física Moderna, Universidad de Granada, E-18071 Granada (Spain); Vilches, Manuel [Servicio de Radiofísica, Hospital Clínico 'San Cecilio', Avda. Dr Oloriz, 16, E-18012 Granada (Spain)
2004-06-21
Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife (registered). The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3 deg. with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x^2 + y^2)^{1/2} and their polar angle θ, on one side, and between tan^{-1}(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3%, for the 18 and 14 mm helmets, and 10%, for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor 15) in the computational time.
A simplified model of the source channel of the Leksell GammaKnife® tested with PENELOPE
Al-Dweri, Feras M. O.; Lallena, Antonio M.; Vilches, Manuel
2004-06-01
Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife®. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3° with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x^2 + y^2)^{1/2} and their polar angle θ, on one side, and between tan^{-1}(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3%, for the 18 and 14 mm helmets, and 10%, for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor 15) in the computational time.
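The ρ-θ and azimuthal correlations reported in these records follow directly from straight-line propagation of photons from a point-like source, which is what makes a "mathematical collimator" plausible. A minimal sketch under that point-source idealization, using the 236 mm collimator plane quoted in the abstract:

```python
import math

Z_COLL = 236.0  # mm, output helmet collimator plane (from the abstract)

def hit_point(theta, phi, z=Z_COLL):
    """Straight-line transport of a photon emitted at the origin with
    polar angle theta and azimuth phi; returns its (x, y) on plane z."""
    r = z * math.tan(theta)
    return r * math.cos(phi), r * math.sin(phi)

# A photon inside the 3-degree cone the abstract identifies as relevant:
theta, phi = math.radians(2.0), math.radians(55.0)
x, y = hit_point(theta, phi)
rho = math.hypot(x, y)
# Under this idealization, rho is a function of theta alone and
# atan2(y, x) recovers phi exactly -- the correlations the simplified
# model exploits.
```

Scattering in the real source channel blurs these relations, which is why the abstract reports them as strong correlations rather than identities.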
Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele
2014-01-01
The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...
Throughput and Delay Analysis of HARQ with Code Combining over Double Rayleigh Fading Channels
Chelli, Ali; Zedini, Emna; Alouini, Mohamed-Slim; Patzold, Matthias Uwe; Balasingham, Ilangko
2018-01-01
-to-vehicle communication systems as well as amplify-and-forward relaying and keyhole channels. This work studies the performance of HARQ-CC over double Rayleigh channels from an information theoretic perspective. Analytical approximations are derived for the
International Nuclear Information System (INIS)
Chen, K.F.; Olson, C.A.
1983-01-01
One reliable method that can be used to verify the solution scheme of a computer code is to compare the code prediction to a simplified problem for which an analytic solution can be derived. An analytic solution for the axial pressure drop as a function of the flow was obtained for the simplified problem of homogeneous equilibrium two-phase flow in a vertical, heated channel with a cosine axial heat flux shape. This analytic solution was then used to verify the predictions of the CONDOR computer code, which is used to evaluate the thermal-hydraulic performance of boiling water reactors. The results show excellent agreement between the analytic solution and CONDOR prediction
Integrated source and channel encoded digital communication system design study. [for space shuttles
Huth, G. K.
1976-01-01
The results of several studies of the Space Shuttle communication system are summarized. These tasks can be divided into the following categories: (1) phase multiplexing for two- and three-channel data transmission, (2) effects of phase noise on the performance of coherent communication links, (3) analysis of command system performance, (4) error correcting code tradeoffs, (5) signal detection and angular search procedure for the shuttle Ku-band communication system, and (6) false lock performance of Costas loop receivers.
National Research Council Canada - National Science Library
Ong, Choon
1998-01-01
The performance analysis of a differential phase shift keyed (DPSK) communications system, operating in a Rayleigh fading environment, employing convolutional coding and diversity processing is presented...
Energy Technology Data Exchange (ETDEWEB)
Kim, Hyoung Tae; Park, Joo Hwan; Rhee, Bo Wook
2006-07-15
To justify the use of a commercial Computational Fluid Dynamics (CFD) code for CANDU fuel channel analysis, especially for radiation-heat-transfer dominant conditions, the CFX-10 code is tested against three benchmark problems which were used for the validation of radiation heat transfer in the CANDU analysis code CATHENA. These three benchmark problems are representative of CANDU fuel channel configurations, from a simple geometry to the whole fuel channel geometry. Under the assumptions of a non-participating medium completely enclosed by diffuse, gray and opaque surfaces, the solutions of the benchmark problems are obtained using the concept of surface resistance to radiation, accounting for the view factors and the emissivities. The view factors are calculated by the program MATRIX version 1.0, avoiding the difficulty of hand calculation for complex geometries. For the solutions of the benchmark problems, temperature or net-radiation-heat-flux boundary conditions are prescribed on each radiating surface to determine the radiation heat transfer rate or the surface temperature, respectively, using the network method. The Discrete Transfer Model (DTM) is used as the CFX-10 radiation model, and its results are compared with the solutions of the benchmark problems. The CFX-10 results for the three benchmark problems are in close agreement with these solutions, so it is concluded that CFX-10 with the DTM radiation model can be applied to CANDU fuel channel analysis where surface radiation heat transfer is the dominant mode of heat transfer.
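The "surface resistance to radiation" network mentioned above reduces, for a two-surface gray enclosure, to a single series resistance. A sketch of that textbook relation (the geometry and emissivities below are illustrative, not those of the CANDU benchmarks):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def two_surface_exchange(T1, T2, A1, A2, eps1, eps2, F12):
    """Net radiative heat transfer rate (W) between two diffuse, gray,
    opaque surfaces forming an enclosure: two surface resistances
    (1-eps)/(eps*A) in series with the space resistance 1/(A1*F12)."""
    R = (1 - eps1) / (eps1 * A1) + 1.0 / (A1 * F12) + (1 - eps2) / (eps2 * A2)
    return SIGMA * (T1 ** 4 - T2 ** 4) / R

# Illustrative check: long concentric cylinders with A1/A2 = 0.5, so all
# radiation leaving the inner surface reaches the outer one (F12 = 1).
q = two_surface_exchange(T1=600.0, T2=400.0, A1=1.0, A2=2.0,
                         eps1=0.8, eps2=0.8, F12=1.0)
```

The benchmark problems generalize this to N surfaces, which is where the MATRIX-computed view factors and the network (radiosity) solve come in.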
International Nuclear Information System (INIS)
Kim, Hyoung Tae; Park, Joo Hwan; Rhee, Bo Wook
2006-07-01
To justify the use of a commercial Computational Fluid Dynamics (CFD) code for CANDU fuel channel analysis, especially for radiation-heat-transfer dominant conditions, the CFX-10 code is tested against three benchmark problems which were used for the validation of radiation heat transfer in the CANDU analysis code CATHENA. These three benchmark problems are representative of CANDU fuel channel configurations, from a simple geometry to the whole fuel channel geometry. Under the assumptions of a non-participating medium completely enclosed by diffuse, gray and opaque surfaces, the solutions of the benchmark problems are obtained using the concept of surface resistance to radiation, accounting for the view factors and the emissivities. The view factors are calculated by the program MATRIX version 1.0, avoiding the difficulty of hand calculation for complex geometries. For the solutions of the benchmark problems, temperature or net-radiation-heat-flux boundary conditions are prescribed on each radiating surface to determine the radiation heat transfer rate or the surface temperature, respectively, using the network method. The Discrete Transfer Model (DTM) is used as the CFX-10 radiation model, and its results are compared with the solutions of the benchmark problems. The CFX-10 results for the three benchmark problems are in close agreement with these solutions, so it is concluded that CFX-10 with the DTM radiation model can be applied to CANDU fuel channel analysis where surface radiation heat transfer is the dominant mode of heat transfer.
Revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources
International Nuclear Information System (INIS)
Wheatley, J. S.
2004-01-01
The revised Code of Conduct on the Safety and Security of Radioactive Sources is aimed primarily at Governments, with the objective of achieving and maintaining a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations; and through the fostering of international co-operation. It focuses on sealed radioactive sources and provides guidance on legislation, regulations and the regulatory body, and import/export controls. Nuclear materials (except for sources containing 239Pu), as defined in the Convention on the Physical Protection of Nuclear Materials, are not covered by the revised Code, nor are radioactive sources within military or defence programmes. An earlier version of the Code was published by IAEA in 2001. At that time, agreement was not reached on a number of issues, notably those relating to the creation of comprehensive national registries for radioactive sources, obligations of States exporting radioactive sources, and the possibility of unilateral declarations of support. The need to further consider these and other issues was highlighted by the events of 11th September 2001. Since then, the IAEA's Secretariat has been working closely with Member States and relevant International Organizations to achieve consensus. The text of the revised Code was finalized at a meeting of technical and legal experts in August 2003, and it was submitted to IAEA's Board of Governors for approval in September 2003, with a recommendation that the IAEA General Conference adopt it and encourage its wide implementation. The IAEA General Conference, in September 2003, endorsed the revised Code and urged States to work towards following the guidance contained within it. This paper summarizes the history behind the revised Code, its content and the outcome of the discussions within the IAEA Board of Governors and General Conference. (Author) 8 refs
TRANTHAC-1: transient thermal-hydraulic analysis code for HTGR core of multi-channel model
International Nuclear Information System (INIS)
Sato, Sadao; Miyamoto, Yoshiaki
1980-08-01
The computer program TRANTHAC-1 predicts thermal-hydraulic transient behavior in the core of an HTGR with pin-in-block type fuel elements, taking into consideration the core flow distribution. The program treats a multi-channel model, each single channel representing a column composed of fuel elements. The fuel columns are grouped into flow control regions; each region is provided with an orifice assembly. Within a region, all channels are of the same shape except one. Core heat is removed by downward flow of the coolant through the channels. For any transient, given the time-dependent power, total core flow, inlet coolant temperature and coolant pressure, the thermal response of the core can be determined. In each channel, heat conduction in the radial and axial directions is represented, and the temperature distribution of the components in each channel is calculated. The model and usage of the program are described. The program is written in FORTRAN-IV for the FACOM 230-75 computer and comprises about 4,000 cards. The required core memory is about 75 kilowords. (author)
Channel estimation for space-time trellis coded-OFDM systems based on nonoverlapping pilot structure
CSIR Research Space (South Africa)
Sokoya, O
2008-09-01
Full Text Available Through the analysis, two extreme conditions that produce the largest minimum determinant for STTC-OFDM over multiple-tap channels were pointed out. The analysis shows that the performance of STTC-OFDM under various channel conditions depends on: 1) the minimum determinant tap delay of the channel and 2) the memory order of the STTC. New STTC-OFDM schemes were later designed in [2], taking into account some of the design criteria shown in [1]. The STTC-OFDM schemes are capable...
International Nuclear Information System (INIS)
Ibrahim, Amr; PredoiCross, Adriana; Teillet, P. M.
2010-01-01
Seven different techniques for dealing with the problem of channel spectra in Fourier transform spectroscopy utilizing a synchrotron source were examined and compared. Five of these techniques deal with the artifacts (spikes) in the recorded interferogram, which in turn result in channel spectra within the spectral domain. Such interferogram-editing methods include replacing these spikes with zeros, a straight line, a fitted polynomial curve, a rescaled spike, or a spike reduced with a Gauss function. The other two techniques target the issue in the spectral domain instead, by either generating a synthetic background simulating the channels or measuring the channel parameters (amplitude, spacing and phase) for use in the spectral fitting program. Results showed that the spectral-domain techniques produce higher quality results in terms of signal-to-noise ratio and fitting residuals. The effect of each method on line parameters such as position, intensity and air broadening is also measured and discussed.
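One of the interferogram-editing options compared above, replacing the spike with a straight line, can be sketched in a few lines; the signal and the spike-window indices below are illustrative, not data from the study.

```python
def remove_spike(igram, lo, hi):
    """Replace samples igram[lo:hi] (a channel-fringe spike) with a
    straight line drawn between the samples bounding the window -- the
    'straight line' interferogram-editing option from the abstract."""
    out = list(igram)
    x0, x1 = lo - 1, hi          # anchor samples just outside the window
    y0, y1 = out[x0], out[x1]
    for i in range(lo, hi):
        t = (i - x0) / (x1 - x0)
        out[i] = y0 + t * (y1 - y0)  # linear interpolation
    return out

# A flat toy interferogram with an injected spike at samples 4-5:
sig = [1.0] * 10
sig[4], sig[5] = 7.0, -3.0
clean = remove_spike(sig, 4, 6)
```

Transforming `clean` instead of `sig` would suppress the periodic channel fringes that the spike produces in the spectral domain, at the cost of whatever true signal sat inside the edited window.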
Mineral compositions and sources of the riverbed sediment in the desert channel of Yellow River.
Jia, Xiaopeng; Wang, Haibing
2011-02-01
The Yellow River flows through an extensive aeolian desert area from Xiaheyan, Ningxia Province, to Toudaoguai, Inner Mongolia Province, over a total length of 1,000 km. Due to the construction and operation of large reservoirs upstream on the Yellow River, most of the water and sediment from upstream is stored in these reservoirs, leading to declining flow in the desert channel, which is therefore unable to scour the large input of desert sand from the desert regions. By analyzing and comparing the spatial distribution of the weight percent of mineral compositions between sediment sources and the riverbed sediment of the main tributaries and the desert channel of the Yellow River, we conclude that the coarse sediment deposited in the desert channel of the Yellow River is mostly controlled by local sediment sources. The results of the Quartz-Feldspar-Mica (QFM) triangular diagram and the R-factor models for the coarse sediment in the Gansu reach and the desert channel of the Yellow River further confirm that the Ningxia Hedong desert and the Inner Mongolian Wulanbuhe and Kubuqi deserts are the main provenances of the coarse sediment in the desert channel of the Yellow River. Because of their higher mobility, the fine sediments are mainly contributed by local sediment sources and by the tributaries that originate in the loess area of the upper reach of the Yellow River.
International Nuclear Information System (INIS)
Son, Han Seong; Song, Deok Yong; Kim, Ma Woong; Shin, Hyeong Ki; Lee, Sang Kyu; Kim, Hyun Koon
2006-01-01
An accident prevention system is essential to the industrial security of the nuclear industry. A more effective accident prevention system will thus help to promote safety culture as well as to gain public acceptance for the nuclear power industry. The FADAS (Following Accident Dose Assessment System), part of the Computerized Advisory System for a Radiological Emergency (CARE) system at KINS, is used for prevention against nuclear accidents. In order to make the FADAS more effective for CANDU reactors, it is necessary to develop various accident scenarios and a reliable database of source terms. This study introduces the construction of a coupled interface between the FADAS and the source-term evaluation code, aimed at improving the applicability of the CANDU Integrated Safety Analysis System (CISAS) for CANDU reactors.
Remodularizing Java Programs for Improved Locality of Feature Implementations in Source Code
DEFF Research Database (Denmark)
Olszak, Andrzej; Jørgensen, Bo Nørregaard
2011-01-01
Explicit traceability between features and source code is known to help programmers understand and modify programs during maintenance tasks. However, the complex relations between features and their implementations are not evident from the source code of object-oriented Java programs. Consequently, the implementations of individual features are difficult to locate, comprehend, and modify in isolation. In this paper, we present a novel remodularization approach that improves the representation of features in the source code of Java programs. Both forward and reverse restructurings are supported through on-demand bidirectional restructuring between feature-oriented and object-oriented decompositions. The approach includes a feature location phase based on tracing program execution, and a feature representation phase that reallocates classes into a new package structure based on single...
International Nuclear Information System (INIS)
Delagrange, H.
1977-01-01
This report is the user manual of the GR0GI-F code, a modified version of the GR0GI-2 code. It calculates the cross sections for heavy-ion-induced fission. Fission probabilities are calculated via the Bohr-Wheeler formalism
Low-Complexity Iterative Receiver for Space-Time Coded Signals over Frequency Selective Channels
Directory of Open Access Journals (Sweden)
Mohamed Siala
2002-05-01
Full Text Available We propose a low-complexity turbo-detector scheme for frequency-selective multiple-input multiple-output channels. The detection part of the receiver is based on a List-type MAP equalizer, a state-reduction variant of the MAP algorithm using the per-survivor technique. This alternative achieves a good tradeoff between performance and complexity, provided that only a small part of the channel response is neglected. To obtain good performance from this equalizer, we propose to use a whitened matched filter (WMF), which leads to a white-noise "minimum phase" channel model. Simulation results show that the use of the WMF yields significant improvement, particularly over severe channels. Thanks to the iterative turbo processing (detection and decoding are iterated several times), the performance loss due to the use of the suboptimum List-type equalizer is recovered.
Hamdi, Mazda; Kenari, Masoumeh Nasiri
2013-06-01
We consider a time-hopping based multiple access scheme introduced in [1] for communication over dispersive infrared links, and evaluate its performance for correlator and matched filter receivers. In the investigated time-hopping code division multiple access (TH-CDMA) method, the transmitter benefits from a low-rate convolutional encoder. In this method, the bit interval is divided into Nc chips, and the output of the encoder, along with a PN sequence assigned to the user, determines the position of the chip in which the optical pulse is transmitted. We evaluate the multiple access performance of the system for the correlation receiver considering background noise, which is modeled as white Gaussian noise due to its large intensity. For the correlation receiver, the results show that for a fixed processing gain, at high transmit power, where the multiple access interference has the dominant effect, the performance improves with the coding gain. But at low transmit power, where an increase in coding gain leads to a decrease in the chip time, and consequently to more corruption due to the channel dispersion, there exists an optimum value for the coding gain. However, for the matched filter, the performance always improves with the coding gain. The results show that the matched filter receiver outperforms the correlation receiver in the considered cases. Our results show that, for the same bandwidth and bit rate, the proposed system outperforms other multiple access techniques, such as conventional CDMA and time hopping schemes.
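The chip-selection rule described above can be sketched as follows. The combining rule and all names here are illustrative assumptions; the paper's exact mapping of encoder output and PN symbol to a chip position may differ.

```python
# Illustrative sketch of TH-CDMA chip selection (assumed mapping, not the
# paper's exact rule): the bit interval is split into Nc chips, and the
# convolutional-encoder output combined with the user's PN symbol picks
# the chip that carries the optical pulse.
def chip_position(encoder_symbol: int, pn_symbol: int, nc: int) -> int:
    # Hypothetical rule: offset the coded symbol by the PN symbol, modulo Nc.
    return (encoder_symbol + pn_symbol) % nc

# Example with Nc = 8 chips per bit:
positions = [chip_position(s, p, 8) for s, p in zip([5, 2, 7], [6, 2, 4])]
print(positions)
```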
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-01-01
Space-time coding (STC) is an important milestone in modern wireless communications. In this technique, multiple copies of the same signal are transmitted through different antennas (space) and different symbol periods (time) to improve the robustness of a wireless system by increasing its diversity gain. STCs are channel coding algorithms that can be readily implemented on a field programmable gate array (FPGA) device. This work provides figures for the amount of required FPGA hardware resources, the speed at which the algorithms can operate, and the power consumption requirements of a space-time block code (STBC) encoder. Seven encoder very high-speed integrated circuit hardware description language (VHDL) designs have been coded, synthesised and tested. Each design realises a complex orthogonal space-time block code with a different transmission matrix. All VHDL designs are parameterisable in terms of sample precision. Precisions ranging from 4 bits to 32 bits have been synthesised. Alamouti's STBC encoder design [Alamouti, S.M. (1998), 'A Simple Transmit Diversity Technique for Wireless Communications', IEEE Journal on Selected Areas in Communications, 16(8):1451-1458.] proved to be the best trade-off, since it is on average 3.2 times smaller, 1.5 times faster and requires slightly less power than the next best trade-off in the comparison, which is a 3/4-rate full-diversity 3Tx-antenna STBC.
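Alamouti's scheme has a particularly simple transmission matrix, which is why it synthesises so compactly. A minimal sketch of the encoder (in Python/NumPy rather than VHDL, purely to show the matrix structure):

```python
import numpy as np

def alamouti_encode(s1: complex, s2: complex) -> np.ndarray:
    """Alamouti 2-Tx space-time block code: rows are symbol periods,
    columns are transmit antennas.  Period 1 sends (s1, s2); period 2
    sends (-conj(s2), conj(s1))."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

X = alamouti_encode(1 + 1j, 1 - 1j)
# The columns are orthogonal: X^H X = (|s1|^2 + |s2|^2) * I, which is
# what yields full diversity at rate 1 with simple linear decoding.
print(np.allclose(X.conj().T @ X, 4 * np.eye(2)))
```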
Multi codes and multi-scale analysis for void fraction prediction in hot channel for VVER-1000/V392
International Nuclear Information System (INIS)
Hoang Minh Giang; Hoang Tan Hung; Nguyen Huu Tiep
2015-01-01
Recently, a multi-code and multi-scale analysis approach has been widely applied to study core thermal-hydraulic behavior, such as void fraction prediction. Better results are achieved by using multiple or coupled codes, such as PARCS and RELAP5. The advantage of multi-scale analysis is the ability to zoom into the part of the simulated domain of interest for detailed investigation. Therefore, in this study, the multi-code combination of MCNP5, RELAP5 and CTF, together with a multi-scale analysis based on RELAP5 and CTF, is applied to investigate the void fraction in the hot channel of the VVER-1000/V392 reactor. Since the VVER-1000/V392 is a typical advanced reactor that can be considered the basis for the later VVER-1200, understanding core behavior in transient conditions is necessary in order to investigate VVER technology. It is shown that the near-wall boiling term Γ_w in RELAP5, computed by Lahey's mechanistic method, may not predict void fraction as accurately as a smaller-scale code such as CTF. (author)
Code of conduct on the safety and security of radioactive sources
International Nuclear Information System (INIS)
Anon.
2001-01-01
The objective of the code of conduct is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost. (N.C.)
Boundary Layer Fluid Flow in a Channel with Heat Source, Soret ...
African Journals Online (AJOL)
The boundary layer fluid flow in a channel with heat source, Soret effects and slip condition was studied. The governing equations were solved using a perturbation technique. The effects of different parameters such as the Prandtl number Pr, Hartmann number M, Schmidt number Sc, suction parameter, Soret number Sr and the ...
Directory of Open Access Journals (Sweden)
Stevan M. Berber
2014-06-01
Code Division Multiple Access (CDMA) is a technique which allows communication by multiple users in the same communication system. This is achieved in such a way that each user is assigned a unique code sequence, which is used at the receiver side to recover the information dedicated to that user. These systems belong to the group of direct-sequence spread-spectrum communication systems. Traditionally, CDMA systems use binary orthogonal spreading codes. In this paper, a mathematical model and simulation of a CDMA system are developed based on the application of non-binary, specifically chaotic, spreading sequences. By their nature, these sequences belong to random sequences with infinite periodicity, and they are therefore appropriate for applications in systems that provide enhanced security against interception and secrecy in signal transmission. Numerous papers are dedicated to the development of CDMA systems in flat-fading channels. This paper presents the results of analyzing these systems for the case when frequency-selective fading is present in the channel. In addition, the paper investigates the possibility of using interleaving techniques to mitigate fading in a wideband channel with frequency-selective fading. Basic structure of a CDMA communication system and its operation: In this paper, a CDMA system block schematic is presented and the function of all blocks is explained. Notation to be used in the paper is introduced. Chaotic sequences are defined and explained in accordance with the method of their generation. A wideband channel with frequency-selective fading is defined by its impulse response function. Theoretical analysis of a CDMA system with flat fading in a narrowband channel: A narrowband channel and flat fading are defined. A mathematical analysis of the system is conducted by presenting the signal expressions at vital points in the transmitter and receiver. The expression for the signal at the output of the sequence correlator is
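A chaotic spreading sequence of the kind described above can be generated by iterating a simple chaotic map. The map chosen below (the order-two Chebyshev map, also used in the related chaos-based CDMA entry further down) and the seed values are illustrative assumptions, not this paper's exact generator:

```python
def chaotic_spreading_sequence(x0: float, length: int) -> list:
    """Iterate the order-two Chebyshev map x_{n+1} = 2*x_n**2 - 1,
    which is chaotic on [-1, 1]; each user's seed x0 acts as a key."""
    seq, x = [], x0
    for _ in range(length):
        x = 2 * x * x - 1
        seq.append(x)
    return seq

# Two nearly identical seeds decorrelate quickly, which is what makes
# the sequences hard to intercept without knowing the seed.
a = chaotic_spreading_sequence(0.3, 10)
b = chaotic_spreading_sequence(0.3000001, 10)
```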
Multiple Speech Source Separation Using Inter-Channel Correlation and Relaxed Sparsity
Directory of Open Access Journals (Sweden)
Maoshen Jia
2018-01-01
Full Text Available In this work, a multiple speech source separation method using inter-channel correlation and relaxed sparsity is proposed. A B-format microphone with four spatially distributed channels is adopted, since the compact size of the microphone array preserves the spatial parameter integrity of the original signal. Specifically, we first measure the proportion of overlapped components among multiple sources and find that many overlapped time-frequency (TF) components exist as the source number increases. Then, considering the relaxed sparsity of speech sources, we propose a dynamic threshold-based separation approach for sparse components, where the threshold is determined by the inter-channel correlation among the recorded signals. After conducting a statistical analysis of the number of active sources at each TF instant, a form of relaxed sparsity called the half-K assumption is proposed, under which the number of active sources in a certain TF bin does not exceed half the total number of simultaneously occurring sources. By applying the half-K assumption, the non-sparse components are recovered by using the extracted sparse components as a guide, combined with vector decomposition and matrix factorization. Eventually, the final TF coefficients of each source are recovered by the synthesis of sparse and non-sparse components. The proposed method has been evaluated using up to six simultaneous speech sources under both anechoic and reverberant conditions. Both objective and subjective evaluations validated that the perceptual quality of the speech separated by the proposed approach outperforms that of existing blind source separation (BSS) approaches. Moreover, the method is robust across different speech signals, yielding separated speech of consistently similar perceptual quality.
Performance analysis for a chaos-based code-division multiple access system in wide-band channel
Directory of Open Access Journals (Sweden)
Ciprian Doru Giurcăneanu
2015-08-01
Full Text Available Code-division multiple access technology is widely used in telecommunications and its performance has been extensively investigated in the past. Theoretical results for the case of wide-band transmission channel were not available until recently. The novel formulae which have been published in 2014 can have an important impact on the future of wireless multiuser communications, but limitations come from the Gaussian approximations used in their derivation. In this Letter, the authors obtain more accurate expressions of the bit error rate (BER for the case when the model of the wide-band channel is two-ray, with Rayleigh fading. In the authors’ approach, the spreading sequences are assumed to be generated by logistic map given by Chebyshev polynomial function of order two. Their theoretical and experimental results show clearly that the previous results on BER, which rely on the crude Gaussian approximation, are over-pessimistic.
Documentation for grants equal to tax model: Volume 3, Source code
International Nuclear Information System (INIS)
Boryczka, M.K.
1986-01-01
The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational database management system. The database for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations.
The DVB Channel Coding Application Using the DSP Development Board MDS TM-13 IREF
Directory of Open Access Journals (Sweden)
M. Slanina
2004-12-01
Full Text Available The paper deals with the implementation of the channel coding according to the DVB standard on the DSP development board MDS TM-13 IREF and a PC. The board is based on the Philips Nexperia media processor and integrates hardware video ADC and DAC. The program library features used for MPEG-based video compression are outlined, and then the algorithms of channel decoding (FEC protection against errors) are presented, including flowchart diagrams. The paper presents the partial hardware implementation of a simulation system that covers selected phenomena of DVB baseband processing and is used for real-time interactive demonstration of the influence of error protection on transmitted digital video in laboratory and education.
WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection
Directory of Open Access Journals (Sweden)
Deqiang Fu
2017-01-01
Full Text Available In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel), for computer science education. Different from other plagiarism detection methods, WASTK takes aspects other than the similarity between programs into account. WASTK first transforms the source code of a program into an abstract syntax tree and then obtains the similarity by calculating the tree kernel of the two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) in the field of information retrieval is applied: each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.
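The TF-IDF weighting step can be sketched as follows. Treating AST node types as "terms" and student submissions as "documents" is our reading of the abstract, and the smoothing constants are assumptions:

```python
import math
from collections import Counter

def tfidf_node_weights(tree_nodes, corpus):
    """Assign each AST node type a TF-IDF weight: node types shared by
    most submissions (instructor boilerplate) get low weight, distinctive
    ones get high weight.  Node-type strings stand in for real AST nodes;
    the smoothing (+1 terms) is an assumed, scikit-learn-style variant."""
    tf = Counter(tree_nodes)
    n_docs = len(corpus)
    weights = {}
    for node_type, count in tf.items():
        df = sum(1 for doc in corpus if node_type in doc)  # document frequency
        idf = math.log((1 + n_docs) / (1 + df)) + 1        # smoothed IDF
        weights[node_type] = (count / len(tree_nodes)) * idf
    return weights

# Toy corpus of three submissions, each a list of AST node types:
corpus = [["If", "Assign"], ["Assign"], ["For", "Assign"]]
w = tfidf_node_weights(["If", "Assign", "Assign"], corpus)
```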
Rascal: A domain specific language for source code analysis and manipulation
P. Klint (Paul); T. van der Storm (Tijs); J.J. Vinju (Jurgen); A. Walenstein; S. Schuppe
2009-01-01
Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This
RASCAL: a domain specific language for source code analysis and manipulation
Klint, P.; Storm, van der T.; Vinju, J.J.
2009-01-01
Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This impedance
From system requirements to source code: transitions in UML and RUP
Directory of Open Access Journals (Sweden)
Stanisław Wrycza
2011-06-01
Full Text Available There are many manuals explaining the language specification among UML-related books. Only some of the books mentioned concentrate on practical aspects of using the UML language effectively with CASE tools and RUP. The current paper presents transitions from system requirements specification to structural source code, useful while developing an information system.
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing focused on device characterization and was completed in Fall 2015. Phase 2 focuses on WEC performance and is scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable power take-off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be
Time-dependent anisotropic external sources in transient 3-D transport code TORT-TD
International Nuclear Information System (INIS)
Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.
2009-01-01
This paper describes the implementation of a time-dependent distributed external source in TORT-TD by explicitly considering the external source in the ''fixed-source'' term of the implicitly time-discretised 3-D discrete ordinates transport equation. Anisotropy of the external source is represented by a spherical harmonics series expansion similar to the angular fluxes. The YALINA-Thermal subcritical assembly serves as a test case. The configuration with 280 fuel rods has been analysed with TORT-TD using cross sections in 18 energy groups and P1 scattering order generated by the KAPROS code system. Good agreement is achieved concerning the multiplication factor. The response of the system to an artificial time-dependent source consisting of two square-wave pulses demonstrates the time-dependent external source capability of TORT-TD. The result is physically plausible as judged from validation calculations. (orig.)
Coded moderator approach for fast neutron source detection and localization at standoff
Energy Technology Data Exchange (ETDEWEB)
Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States); Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)
2015-06-01
Considering the need for directional sensing at standoff for some security applications, and for scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work investigates the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. We utilized MCNPX simulations to demonstrate that, by surrounding the boron straw detectors with an HDPE coded moderator, a source-detector orientation-specific response enables potential 1D source localization in a high-efficiency neutron detection design. An initial test algorithm has been developed to confirm the viability of this detector system's localization capabilities; it identified a 1 MeV neutron source with a strength equivalent to 8 kg of WGPu at 50 m standoff within ±11°.
van den Boer, Yvon; Pieterson, Willem Jan; van Dijk, Johannes A.G.M.; Arendsen, R.
2016-01-01
With the rise of electronic channels it has become easier for businesses to consult various types of information sources in information-seeking processes. Governments are urged to rethink their role as a reliable information source and the roles of their (electronic) service channels to provide
van den Boer, Yvon; Pieterson, Willem; Arendsen, Rex; van Dijk, Jan
With a growing number of available communication channels and the increasing role of other information sources, organizations are urged to rethink their service strategies. Most theories are limited to a one-dimensional focus on source or channel choice and do not fit into today's networked
Wu, Menglong; Han, Dahai; Zhang, Xiang; Zhang, Feng; Zhang, Min; Yue, Guangxin
2014-03-10
We have implemented a modified Low-Density Parity-Check (LDPC) codec algorithm in an ultraviolet (UV) communication system. Simulations are conducted with measured parameters to evaluate the LDPC-based UV system performance. Moreover, LDPC (960, 480) and RS (18, 10) codes are implemented and tested on a non-line-of-sight (NLOS) UV test bed. The experimental results are in agreement with the simulations and suggest that, for a given power and a 10^-3 bit error rate (BER), the average communication distance increases by 32% with the RS code and by 78% with the LDPC code, in comparison with an uncoded system.
International Nuclear Information System (INIS)
Heeb, C.M.
1991-03-01
The ORIGEN2 computer code is the primary calculational tool for computing isotopic source terms for the Hanford Environmental Dose Reconstruction (HEDR) Project. The ORIGEN2 code computes the amounts of radionuclides that are created or remain in spent nuclear fuel after neutron irradiation and radioactive decay have occurred as a result of nuclear reactor operation. ORIGEN2 was chosen as the primary code for these calculations because it is widely used and accepted by the nuclear industry, both in the United States and the rest of the world. Its comprehensive library of over 1,600 nuclides includes any possible isotope of interest to the HEDR Project. It is important to evaluate the uncertainties expected from use of ORIGEN2 in the HEDR Project because these uncertainties may have a pivotal impact on the final accuracy and credibility of the results of the project. There are three primary sources of uncertainty in an ORIGEN2 calculation: basic nuclear data uncertainty in neutron cross sections, radioactive decay constants, energy per fission, and fission product yields; calculational uncertainty due to input data; and code uncertainties (i.e., numerical approximations, and neutron spectrum-averaged cross-section values from the code library). 15 refs., 5 figs., 5 tabs
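The decay bookkeeping that ORIGEN2 performs for its library of over 1,600 nuclides reduces, for a single two-member chain with no irradiation, to the classic Bateman solution. A minimal sketch (assuming distinct decay constants; all names are ours, not ORIGEN2's):

```python
import math

def bateman_two_chain(n1_0: float, lam1: float, lam2: float, t: float):
    """Inventories of a two-member decay chain A -> B -> (stable daughter),
    from the Bateman solution.  n1_0 is the initial amount of A, lam1 and
    lam2 are the decay constants (must differ), t is the elapsed time."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

# At t = 0 the daughter inventory is zero; it builds up and then decays.
```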
Code of practice for the use of sealed radioactive sources in borehole logging (1998)
International Nuclear Information System (INIS)
1989-12-01
The purpose of this code is to establish working practices, procedures and protective measures which will aid in keeping doses arising from the use of borehole logging equipment containing sealed radioactive sources as low as reasonably achievable, and to ensure that the dose-equivalent limits specified in the National Health and Medical Research Council's radiation protection standards are not exceeded. This code applies to all situations and practices where a sealed radioactive source or sources are used in wireline logging for investigating the physical properties of the geological sequence, or any fluids contained in the geological sequence, or the properties of the borehole itself, whether casing, mudcake or borehole fluids. The radiation protection standards specify dose-equivalent limits for two categories: radiation workers and members of the public. 3 refs., tabs., ills
Iterative channel decoding of FEC-based multiple-description codes.
Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B
2012-03-01
Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping
Á. Piedrafita (Álvaro); J.M. Renes (Joseph)
2017-01-01
textabstractWe construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve
Experimental benchmark of the NINJA code for application to the Linac4 H- ion source plasma
Briefi, S.; Mattei, S.; Rauner, D.; Lettry, J.; Tran, M. Q.; Fantz, U.
2017-10-01
For a dedicated performance optimization of negative hydrogen ion sources applied at particle accelerators, a detailed assessment of the plasma processes is required. Due to the compact design of these sources, diagnostic access is typically limited to optical emission spectroscopy yielding only line-of-sight integrated results. In order to allow for a spatially resolved investigation, the electromagnetic particle-in-cell Monte Carlo collision code NINJA has been developed for the Linac4 ion source at CERN. This code considers the RF field generated by the ICP coil as well as the external static magnetic fields and calculates self-consistently the resulting discharge properties. NINJA is benchmarked at the diagnostically well accessible lab experiment CHARLIE (Concept studies for Helicon Assisted RF Low pressure Ion sourcEs) at varying RF power and gas pressure. A good general agreement is observed between experiment and simulation although the simulated electron density trends for varying pressure and power as well as the absolute electron temperature values deviate slightly from the measured ones. This can be explained by the assumption of strong inductive coupling in NINJA, whereas the CHARLIE discharges show the characteristics of loosely coupled plasmas. For the Linac4 plasma, this assumption is valid. Accordingly, both the absolute values of the accessible plasma parameters and their trends for varying RF power agree well in measurement and simulation. At varying RF power, the H- current extracted from the Linac4 source peaks at 40 kW. For volume operation, this is perfectly reflected by assessing the processes in front of the extraction aperture based on the simulation results where the highest H- density is obtained for the same power level. In surface operation, the production of negative hydrogen ions at the converter surface can only be considered by specialized beam formation codes, which require plasma parameters as input. It has been demonstrated that
Directory of Open Access Journals (Sweden)
Valenzise G
2009-01-01
Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially because of the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.
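The random-projection front end of the scheme can be sketched as follows. The LDPC syndrome step is omitted; the one-bit quantization, function name and parameters are our illustrative assumptions:

```python
import numpy as np

def audio_hash(tf_repr: np.ndarray, n_projections: int, seed: int = 0) -> np.ndarray:
    """Compressive-sensing front end: a small number of random projections
    of a time-frequency representation.  The paper keeps the syndrome bits
    of an LDPC code applied to the projections; here each projection is
    simply quantized to one bit, as an illustration."""
    rng = np.random.default_rng(seed)   # shared seed = shared projection matrix
    x = tf_repr.ravel().astype(float)
    phi = rng.standard_normal((n_projections, x.size))
    return (phi @ x > 0).astype(np.uint8)

spec = np.arange(12.0).reshape(3, 4)    # stand-in time-frequency matrix
h1 = audio_hash(spec, 16)
h2 = audio_hash(spec, 16)
# Identical content and seed give identical hashes; tampering flips bits,
# and the decoder localizes the attack from those flipped bits.
```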
Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods
Directory of Open Access Journals (Sweden)
Dayong Zhou
2008-12-01
Full Text Available Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel, a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that correctly handles colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our algorithm in the low-SNR condition. Simulation results show the superior performance of our proposed methods.
Hecht-Nielsen, Robert
1997-04-01
A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data-manifold source model and the training/configuration of replicator neural networks are discussed.
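A replicator network's shape can be sketched as an MLP whose output dimension equals its input dimension, with a narrow middle hidden layer. The layer sizes and the random (untrained) weights below are illustrative assumptions; a real network would be trained to minimize mean squared reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)

def replicator_forward(x: np.ndarray, sizes=(8, 16, 3, 16, 8)) -> np.ndarray:
    """One forward pass through a replicator network: three hidden layers,
    output layer the same width as the input; the narrow middle layer
    (width 3 here) would carry the natural-coordinate-like low-dimensional
    code once the network is trained."""
    h = x
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)  # random weights
        h = np.tanh(W @ h)
    return h

x = rng.standard_normal(8)
y = replicator_forward(x)   # reconstruction has the input's dimension
```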
International Nuclear Information System (INIS)
Perry, R.T.; Wilson, W.B.; Charlton, W.S.
1998-04-01
In many systems, it is imperative to have accurate knowledge of all significant sources of neutrons due to the decay of radionuclides. These sources can include neutrons resulting from the spontaneous fission of actinides, the interaction of actinide decay α-particles in (α,n) reactions with low- or medium-Z nuclides, and/or delayed neutrons from the fission products of actinides. Numerous systems exist in which these neutron sources could be important. These include, but are not limited to, clean and spent nuclear fuel (UO2, ThO2, MOX, etc.), enrichment plant operations (UF6, PuF4, etc.), waste tank studies, waste products in borosilicate glass or glass-ceramic mixtures, and weapons-grade plutonium in storage containers. SOURCES-3A is a computer code that determines neutron production rates and spectra from (α,n) reactions, spontaneous fission, and delayed neutron emission due to the decay of radionuclides in homogeneous media (i.e., a mixture of α-emitting source material and low-Z target material) and in interface problems (i.e., a slab of α-emitting source material in contact with a slab of low-Z target material). The code is also capable of calculating the neutron production rates due to (α,n) reactions induced by a monoenergetic beam of α-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 43 actinides. The (α,n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 89 nuclide decay α-particle spectra, 24 sets of measured and/or evaluated (α,n) cross sections and product nuclide level branching fractions, and functional α-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron source. It also provides an
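The Watt spectrum mentioned in this abstract can be sampled with a simple rejection method, sketched below. The parameters a and b are illustrative values (roughly those often quoted for thermal fission of U-235), not SOURCES-3A's evaluated library, and rejection sampling is a generic technique, not the code's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Watt parameters (roughly U-235 thermal fission): a in MeV, b in 1/MeV.
a, b = 0.988, 2.249

def watt_unnorm(E):
    """Unnormalized Watt fission spectrum: exp(-E/a) * sinh(sqrt(b*E))."""
    return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

# Rejection sampling against a uniform envelope on [0, Emax].
Emax = 20.0
grid = np.linspace(1e-6, Emax, 4000)
fmax = watt_unnorm(grid).max() * 1.01     # numeric bound on the density

def sample_watt(n):
    out = []
    while len(out) < n:
        E = rng.uniform(0, Emax, size=2 * n)
        u = rng.uniform(0, fmax, size=2 * n)
        out.extend(E[u < watt_unnorm(E)][: n - len(out)])
    return np.array(out)

E = sample_watt(10000)
print(E.mean())   # mean fission-neutron energy, near ~2 MeV for these parameters
```

The analytical mean of the Watt spectrum is 3a/2 + a²b/4 ≈ 2.03 MeV for these parameters, a useful sanity check on the sampler.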
Bakosi, J.; Franzese, P.; Boybeyi, Z.
2010-01-01
Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth & Pope with Durbin's method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous ...
Time-dependent anisotropic distributed source capability in transient 3-d transport code tort-TD
International Nuclear Information System (INIS)
Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.
2009-01-01
The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)
Imaging x-ray sources at a finite distance in coded-mask instruments
International Nuclear Information System (INIS)
Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri
2008-01-01
We present a method for the correction of beam divergence in finite distance sources imaging through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance showing two different approaches to remove such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable in two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool to investigate the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixel and sky direction.
Galvanically Decoupled Current Source Modules for Multi-Channel Bioimpedance Measurement Systems
Directory of Open Access Journals (Sweden)
Roman Kusche
2017-10-01
Full Text Available Bioimpedance measurement has become a useful technique in biomedical engineering over the past several years. In particular, multi-channel measurements facilitate new imaging and patient monitoring techniques. While most instrumentation research has focused on signal acquisition and signal processing, this work proposes the design of an excitation current source module that can be easily implemented in existing or upcoming bioimpedance measurement systems. It is galvanically isolated to enable simultaneous multi-channel bioimpedance measurements with very low channel coupling. The system is based on a microcontroller in combination with a voltage-controlled current source circuit. It generates selectable sinusoidal excitation signals between 0.12 and 1.5 mA in a frequency range from 12 to 250 kHz, with a voltage compliance range of ±3.2 V. The coupling factor between two current sources, experimentally connected galvanically with each other, is measured to be less than −48 dB over the entire intended frequency range. Finally, suggestions for future developments are made.
The coupling algorithm between fuel pin and coolant channel in the European Accident Code EAC-2
International Nuclear Information System (INIS)
Goethem, G. van; Lassmann, K.
1989-01-01
In the field of fast breeder reactors the Commission of the European Communities (CEC) is conducting coordination and harmonisation activities as well as its own research at the CEC's Joint Research Centre (JRC). The development of the modular European Accident Code (EAC) is a typical example of concerted action between EC Member States performed under the leadership of the JRC. This computer code analyzes the initiation phase of low-probability whole-core accidents in LMFBRs with the aim of predicting the rapidity of sodium voiding, the mode of pin failure, the subsequent fuel redistribution and the associated energy release. This paper gives a short overview on the development of the EAC-2 code with emphasis on the coupling mechanism between the fuel behaviour module TRANSURANUS and the thermohydraulics modules which can be either CFEM or BLOW3A. These modules are also briefly described. In conclusion some numerical results of EAC-2 are given: they are recalculations of an unprotected LOF accident for the fictitious EUROPE fast breeder reactor which was earlier analysed in the frame of a comparative exercise performed in the early 80s and organised by the CEC. (orig.)
A plug-in to Eclipse for VHDL source codes: functionalities
Niton, B.; Poźniak, K. T.; Romaniuk, R. S.
The paper presents an original application, written by the authors, which supports the writing and editing of source codes in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGA and/or DSP processors. An implementation based on VEditor, a free-license program, is described; the work presented in this paper thus supplements and extends this free license. The introduction briefly characterizes the tools available on the market that aid the design process of electronic systems in VHDL. Particular attention is paid to plug-ins for the Eclipse environment and the Emacs program. The detailed properties of the plug-in are then presented, including the programming extension concept and the results of the formatter, re-factorizer, code hider, and other new additions to the VEditor program.
On the relationship between perceptual impact of source and channel distortions in video sequences
DEFF Research Database (Denmark)
Korhonen, Jari; Reiter, Ulrich; You, Junyong
2010-01-01
It is known that peak signal-to-noise ratio (PSNR) can be used for assessing the relative qualities of distorted video sequences meaningfully only if the compared sequences contain similar types of distortions. In this paper, we propose a model for rough assessment of the bias in PSNR results when video sequences with both channel and source distortion are compared against video sequences with source distortion only. The proposed method can be used to compare the relative perceptual quality levels of video sequences with different distortion types more reliably than plain PSNR.
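Since this entry builds on PSNR, a minimal PSNR computation for 8-bit frames is sketched below; the 8x8 test frame and the single corrupted pixel are made-up examples:

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a distorted frame."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100, dtype=np.uint8)
dist = ref.copy()
dist[0, 0] = 110              # a single corrupted pixel
print(round(psnr(ref, dist), 2))   # → 46.19
```

The bias the paper models arises because two sequences can share this single number while their error patterns (blurring vs. localized channel glitches) are perceived very differently.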
Beyond the Business Model: Incentives for Organizations to Publish Software Source Code
Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti
The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions raise the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. In this paper, we therefore investigate empirically what the companies’ incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.
TRAN.1 - a code for transient analysis of temperature distribution in a nuclear fuel channel
International Nuclear Information System (INIS)
Bukhari, K.M.
1990-09-01
A computer program has been written in FORTRAN that solves the time-dependent energy conservation equations in a nuclear fuel channel. As output, the program gives the temperature distribution in the fuel, cladding and coolant as a function of space and time. Stability criteria have also been developed. A set of finite-difference equations for the steady-state temperature distribution has also been incorporated in this program. A number of simplifications have been made in this version of the program: at present, TRAN.1 uses constant thermodynamic properties and a constant heat transfer coefficient at the fuel-cladding gap, neglects phase change and pressure loss in the coolant, and does not account for changes in properties with burnup. These effects are now in the process of being included in the program. The current version of the program should therefore be taken as a simplified model of a fuel channel, and this report should be considered a status report on this program. (orig./A.B.)
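The kind of time-dependent energy equation solved by such a code can be sketched with a generic explicit finite-difference scheme whose time step respects the stability criterion (grid Fourier number r = α·Δt/Δx² ≤ 1/2). All material properties, geometry, and boundary values below are illustrative assumptions, not TRAN.1's actual model:

```python
import numpy as np

# Explicit finite-difference step for 1-D transient conduction in a heated slab.
alpha = 1e-6          # thermal diffusivity, m^2/s (assumed constant, as in TRAN.1)
dx = 1e-3             # node spacing, m
dt = 0.4 * dx**2 / alpha   # chosen inside the stability limit (r = 0.4 <= 0.5)

T = np.full(21, 600.0)     # initial temperature, K
T[0] = T[-1] = 500.0       # fixed coolant-side boundary temperatures
q = 2e7                    # uniform volumetric heat source, W/m^3 (illustrative)
rho_cp = 3.0e6             # volumetric heat capacity, J/(m^3 K) (illustrative)

r = alpha * dt / dx**2
for _ in range(500):
    # Central second difference in space, forward Euler in time, plus source term.
    T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2]) + dt * q / rho_cp
    T[0] = T[-1] = 500.0

print(round(T[10], 1))     # centreline temperature, approaching the ~833 K steady value
```

With k = α·ρcp = 3 W/(m·K), the analytical steady centreline temperature is 500 + q·(L/2)²/(2k) ≈ 833 K, which the scheme approaches; violating r ≤ 1/2 would instead make the solution oscillate and diverge.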
CACTI: free, open-source software for the sequential coding of behavioral interactions.
Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.
Introduction to coding and information theory
Roman, Steven
1997-01-01
This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
Survey of source code metrics for evaluating testability of object oriented systems
Shaheen , Muhammad Rabee; Du Bousquet , Lydie
2010-01-01
Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems easy to test. Several metrics have been proposed to identify the testability weaknesses. But it is sometimes difficult to be convinced that those metrics are really related with testability. This article is a critical survey of the source-code based metrics proposed in the literature for object-oriented software testability. It underlines the necessity to provide test...
International Nuclear Information System (INIS)
Broadhead, B.L.; Locke, H.F.; Avery, A.F.
1994-01-01
The results for Problems 5 and 6 of the NEACRP code comparison as submitted by six participating countries are presented in summary. These problems concentrate on the prediction of the neutron and gamma-ray sources arising in fuel after a specified irradiation, the fuel being uranium oxide for problem 5 and a mixture of uranium and plutonium oxides for problem 6. In both problems the predicted neutron sources are in good agreement for all participants. For gamma rays, however, there are differences, largely due to the omission of bremsstrahlung in some calculations
Directory of Open Access Journals (Sweden)
F. Genc
2014-09-01
Full Text Available The purpose of this paper is to compare turbo-coded Orthogonal Frequency Division Multiplexing (OFDM) and turbo-coded Single Carrier Frequency Domain Equalization (SC-FDE) systems under the effects of Carrier Frequency Offset (CFO), Symbol Timing Offset (STO) and phase noise in the wide-band Vogler-Hoffmeyer HF channel model. In mobile communication systems multi-path propagation occurs, so channel estimation and equalization are additionally necessary. Furthermore, a non-ideal local oscillator at the receiver is generally misaligned with the operating frequency, which causes carrier frequency offset. Hence, in the coded SC-FDE and coded OFDM systems a very efficient, low-complexity frequency-domain channel estimation and equalization is implemented in this paper. Cyclic Prefix (CP) based synchronization is also used to synchronize the clock and correct the carrier frequency offset. The simulations show that the non-ideal turbo-coded OFDM system has better performance, with greater diversity, than the non-ideal turbo-coded SC-FDE system in the HF channel.
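The cyclic-prefix-based synchronization mentioned in this abstract can be sketched as follows: because the CP repeats the last samples of the symbol N samples earlier, a CFO rotates the repeated samples by 2π·ε, and correlating the two copies recovers ε. The FFT size, CP length, CFO value, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

N, Ncp = 64, 16            # FFT size and cyclic prefix length (illustrative)
eps_true = 0.12            # normalized CFO (fraction of subcarrier spacing)

# One OFDM symbol: random QPSK subcarriers -> IFFT -> prepend cyclic prefix.
syms = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(syms) * np.sqrt(N)
tx = np.concatenate([x[-Ncp:], x])

# Channel: apply CFO plus mild noise.
n = np.arange(N + Ncp)
rx = tx * np.exp(2j * np.pi * eps_true * n / N)
rx = rx + 0.01 * (rng.normal(size=rx.shape) + 1j * rng.normal(size=rx.shape))

# CP correlation: the prefix repeats N samples later, rotated by 2*pi*eps.
corr = np.sum(np.conj(rx[:Ncp]) * rx[N:N + Ncp])
eps_hat = np.angle(corr) / (2 * np.pi)
print(eps_hat)             # close to the true CFO of 0.12
```

The same correlation peak, searched over a timing window, also yields the symbol timing estimate, which is why CP-based methods address both STO and CFO.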
Source-term model for the SYVAC3-NSURE performance assessment code
International Nuclear Information System (INIS)
Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.
1996-11-01
Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs
D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.
Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B
2018-01-01
Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximally correlated portion of sensor samples during event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with massive numbers of sensors, towards the realization of the Internet of Sensing Things (IoST).
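The DSC principle underlying this scheme can be shown with a classic toy instance of Slepian-Wolf-style syndrome coding: a cluster member sends only the 3-bit syndrome of its 7-bit sample under a (7,4) Hamming code, and the sink recovers the sample using the clusterhead's correlated sample as side information. This illustrates generic DSC, not the specific D-DSC protocol:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is j+1 in binary.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(v):
    return H @ v % 2

def dsc_decode(s_x, y):
    """Recover x from its 3-bit syndrome s_x and side information y (<= 1 bit apart)."""
    diff = (s_x + syndrome(y)) % 2          # syndrome of the error pattern x XOR y
    x_hat = y.copy()
    if diff.any():
        # The nonzero syndrome matches exactly one column of H: the flipped position.
        pos = np.flatnonzero((H == diff[:, None]).all(axis=0))[0]
        x_hat[pos] ^= 1
    return x_hat

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 7)      # cluster member's sample
y = x.copy()
y[4] ^= 1                      # correlated side info at the sink: one bit differs
x_hat = dsc_decode(syndrome(x), y)
print((x_hat == x).all())      # → True: x recovered from 3 bits instead of 7
```

The compression rate (here 3/7 of the raw bits) is chosen to match the correlation model, which is exactly the quantity the sink adapts dynamically in D-DSC.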
Energy Technology Data Exchange (ETDEWEB)
Jaeger, Wadim; Manes, Jorge Perez; Imke, Uwe; Escalante, Javier Jimenez; Espinoza, Victor Sanchez, E-mail: victor.sanchez@kit.edu
2013-10-15
Highlights: • Simulation of BFBT turbine and pump transients at multiple scales. • CFD, sub-channel and system codes are used for the comparative study. • Heat transfer models are compared to identify differences between the code predictions. • All three scales predict results in good agreement with experiment. • Subcooled boiling models are identified as a field for future research. -- Abstract: The Institute for Neutron Physics and Reactor Technology (INR) at the Karlsruhe Institute of Technology (KIT) is involved in the validation and qualification of modern thermal-hydraulic simulation tools at various scales. In the present paper, the prediction capabilities of four codes from three different scales – NEPTUNE-CFD as fine-mesh computational fluid dynamics code, SUBCHANFLOW and COBRA-TF as sub-channel codes and TRACE as system code – are assessed with respect to their two-phase flow modeling capabilities. The subject of the investigations is the well-known and widely used database provided within the NUPEC BFBT benchmark related to BWRs. Void fraction measurements simulating a turbine trip and a re-circulation pump trip are provided at several axial levels of the bundle. The prediction capabilities of the codes for transient conditions with various combinations of boundary conditions are validated by comparing the code predictions with the experimental data. In addition, the physical models of the different codes are described and compared to each other in order to explain the different results and to identify areas for further improvement.
Energy Technology Data Exchange (ETDEWEB)
Liu, Chun-Ho [The Hong Kong Polytechnic University, Kowloon (Hong Kong). Department of Building and Real Estate; Leung, Dennis Y.C. [The University of Hong Kong (Hong Kong). Department of Mechanical Engineering
2006-11-15
This study employs a direct numerical simulation (DNS) technique to study the flow, turbulence structure, and passive scalar plume transport behind line sources in an unstably stratified open channel flow. The scalar transport behaviors for five emission heights (z_s = 0, 0.25H, 0.5H, 0.75H, and H, where H is the channel height) at a Reynolds number of 3000, a Prandtl number and a Schmidt number of 0.72, and a Richardson number of -0.2 are investigated. The vertically meandering mean plume heights and dispersion coefficients calculated by the current DNS model agree well with laboratory results and field measurements in the literature. It is found that the plume meandering is due to the movement of the positive and negative vertical turbulent scalar fluxes above and below the mean plume heights, respectively. These findings help explain the plume meandering mechanism in the unstably stratified atmospheric boundary layer. (author)
Two-channel Hyperspectral LiDAR with a Supercontinuum Laser Source
Directory of Open Access Journals (Sweden)
Ruizhi Chen
2010-07-01
Full Text Available Recent advances in nonlinear fiber optics and compact pulsed lasers have resulted in the creation of broadband directional light sources. These supercontinuum laser sources produce directional broadband light using cascaded nonlinear optical interactions in an optical fibre framework. Such a system is used here to simultaneously measure distance and reflectance, demonstrating a technique capable of distinguishing between a vegetation target and inorganic material using Normalized Difference Vegetation Index (NDVI) parameters, while the range can be obtained from the waveform of the echoes. A two-channel, spectral range-finding system based on a supercontinuum laser source was used to assess its potential for distinguishing the NDVI of Norway spruce, a coniferous tree, and its three-dimensional parameters at 600 nm and 800 nm. A prototype system was built using commercial components.
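The NDVI used here to separate vegetation from inorganic targets is the normalized difference of near-infrared and red reflectance; vegetation is bright in the NIR and dark in the red, so its NDVI is high. The reflectance values below are made-up examples, not measurements from this instrument:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from reflectances near 800 nm and 600 nm."""
    return (nir - red) / (nir + red)

# Illustrative reflectances at the instrument's two channels:
print(round(ndvi(0.80, 0.10), 3))   # → 0.778, healthy vegetation
print(round(ndvi(0.30, 0.28), 3))   # → 0.034, inorganic surface, NDVI near zero
```

Because the supercontinuum LiDAR records reflectance at both wavelengths per echo, NDVI can be attached to each range sample, which is what lets the instrument classify targets in 3-D.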
International Nuclear Information System (INIS)
Souto, F.J.
1991-06-01
The main objective of the project was to use the Source Term Code Package (STCP) to obtain a specific source term for those accident sequences deemed dominant as a result of probabilistic safety analyses (PSA) for the Laguna Verde Nuclear Power Plant (CNLV). The following programme has been carried out to meet this objective: (a) implementation of the STCP, (b) acquisition of specific data for CNLV to execute the STCP, and (c) calculations of specific source terms for accident sequences at CNLV. The STCP has been implemented and validated on CDC 170/815 and CDC 180/860 mainframes as well as on a MicroVAX 3800 system. In order to get a plant-specific source term, data on the CNLV including initial core inventory, burn-up, primary containment structures, and materials used for the calculations have been obtained. Because STCP does not explicitly model containment failure, dry well failure in the form of a catastrophic rupture has been assumed. One of the most significant sequences from the point of view of possible off-site risk is the loss of off-site power with failure of the diesel generators and simultaneous loss of the high pressure core spray and reactor core isolation cooling systems. The probability of that event is approximately 4.5 x 10^-6. This sequence has been analysed in detail and the release fractions of radioisotope groups are given in the full report. 18 refs, 4 figs, 3 tabs
The European source term code ESTER - basic ideas and tools for coupling of ATHLET and ESTER
International Nuclear Information System (INIS)
Schmidt, F.; Schuch, A.; Hinkelmann, M.
1993-04-01
The French software house CISI and IKE of the University of Stuttgart developed, during 1990 and 1991 in the frame of the Shared Cost Action Reactor Safety, the informatic structure of the European Source TERm Evaluation System (ESTER). This work made tools available that allow both code development and code application in the area of severe core accident research to be unified on a European basis. The behaviour of reactor cores is determined by thermal-hydraulic conditions; therefore, for the development of ESTER it was important to investigate how to integrate thermal-hydraulic code systems with ESTER applications. This report describes the basic ideas of ESTER and improvements of the ESTER tools in view of a possible coupling of the thermal-hydraulic code system ATHLET and ESTER. As a result of the work performed during this project, the ESTER tools became the most modern informatic tools presently available in the area of severe accident research. A sample application is given which demonstrates the use of the new tools. (orig.) [de
GRHydro: a new open-source general-relativistic magnetohydrodynamics code for the Einstein toolkit
International Nuclear Information System (INIS)
Mösta, Philipp; Haas, Roland; Ott, Christian D; Reisswig, Christian; Mundim, Bruno C; Faber, Joshua A; Noble, Scott C; Bode, Tanja; Löffler, Frank; Schnetter, Erik
2014-01-01
We present the new general-relativistic magnetohydrodynamics (GRMHD) capabilities of the Einstein toolkit, an open-source community-driven numerical relativity and computational relativistic astrophysics code. The GRMHD extension of the toolkit builds upon previous releases and implements the evolution of relativistic magnetized fluids in the ideal MHD limit in fully dynamical spacetimes using the same shock-capturing techniques previously applied to hydrodynamical evolution. In order to maintain the divergence-free character of the magnetic field, the code implements both constrained transport and hyperbolic divergence cleaning schemes. We present test results for a number of MHD tests in Minkowski and curved spacetimes. Minkowski tests include aligned and oblique planar shocks, cylindrical explosions, magnetic rotors, Alfvén waves and advected loops, as well as a set of tests designed to study the response of the divergence cleaning scheme to numerically generated monopoles. We study the code’s performance in curved spacetimes with spherical accretion onto a black hole on a fixed background spacetime and in fully dynamical spacetimes by evolutions of a magnetized polytropic neutron star and of the collapse of a magnetized stellar core. Our results agree well with exact solutions where these are available and we demonstrate convergence. All code and input files used to generate the results are available on http://einsteintoolkit.org. This makes our work fully reproducible and provides new users with an introduction to applications of the code. (paper)
Sensitivity analysis and benchmarking of the BLT low-level waste source term code
International Nuclear Information System (INIS)
Suen, C.J.; Sullivan, T.M.
1993-07-01
To evaluate the source term for low-level waste disposal, a comprehensive model has been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within acceptable limits of uncertainty
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
FPGA-Based Channel Coding Architectures for 5G Wireless Using High-Level Synthesis
Directory of Open Access Journals (Sweden)
Swapnil Mhaske
2017-01-01
Full Text Available We propose strategies to achieve a high-throughput FPGA architecture for quasi-cyclic low-density parity-check codes based on circulant-1 identity matrix construction. By splitting the node processing operation in the min-sum approximation algorithm, we achieve pipelining in the layered decoding schedule without utilizing additional hardware resources. High-level synthesis compilation is used to design and develop the architecture on the FPGA hardware platform. To validate this architecture, an IEEE 802.11n compliant 608 Mb/s decoder is implemented on the Xilinx Kintex-7 FPGA using the LabVIEW FPGA Compiler in the LabVIEW Communication System Design Suite. Architecture scalability was leveraged to accomplish a 2.48 Gb/s decoder on a single Xilinx Kintex-7 FPGA. Further, we present rapidly prototyped experimentation of an IEEE 802.16 compliant hybrid automatic repeat request system based on the efficient decoder architecture developed. In spite of the mixed nature of data processing—digital signal processing and finite-state machines—LabVIEW FPGA Compiler significantly reduced time to explore the system parameter space and to optimize in terms of error performance and resource utilization. A 4x improvement in the system throughput, relative to a CPU-based implementation, was achieved to measure the error-rate performance of the system over large, realistic data sets using accelerated, in-hardware simulation.
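The min-sum approximation whose node processing is split in this architecture can be sketched as a scalar check-node update: for each edge, the output sign is the product of the other edges' signs and the output magnitude is the minimum of the other edges' magnitudes (so only the two smallest magnitudes are needed). This is a reference implementation of the generic algorithm, not the pipelined FPGA datapath:

```python
import numpy as np

def min_sum_check_update(llrs):
    """Min-sum check-node update over one check's edges (at least two edges).

    For edge i the output is: prod(sign(llrs[j]) for j != i) * min(|llrs[j]| for j != i).
    Only the overall sign and the two smallest magnitudes are required.
    """
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    # The edge holding the smallest magnitude receives the second smallest.
    out = np.where(np.arange(len(llrs)) == order[0], min2, min1)
    return total_sign * signs * out

print(min_sum_check_update([2.0, -3.5, 1.5, 4.0]))   # edge outputs: [-1.5, 1.5, -2.0, -1.5]
```

The two-minimum structure is what makes the hardware cheap: a check node of any degree reduces to one sign accumulation and one sorted pair, which is also why the node operation splits cleanly for pipelining in a layered schedule.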
Hillyer, Grace Clarke; Schmitt, Karen M; Lizardo, Maria; Reyes, Andria; Bazan, Mercedes; Alvarez, Maria C; Sandoval, Rossy; Abdul, Kazeem; Orjuela, Manuela A
2017-04-01
Understanding key health concepts is crucial to participation in Precision Medicine initiatives. To assess methods of developing and disseminating a curriculum to educate community members in Northern Manhattan about Precision Medicine, clients from a local community-based organization were interviewed during 2014-2015. Health literacy, acculturation, use of the Internet, email, and text messaging, and sources of health information were assessed. Associations between age and outcomes were evaluated, and multivariable analysis was used to examine the relationship between participant characteristics and sources of health information. Of the 497 clients interviewed, 29.4 % had inadequate health literacy; 53.6 % had access to the Internet, 43.9 % to email, and 45.3 % to text messaging. Adequate health literacy was associated with seeking information from a healthcare professional (OR 2.59, 95 % CI 1.54-4.35) and from the Internet (OR 3.15, 95 % CI 1.97-5.04); those with no more than a grade school education (OR 2.61, 95 % CI 1.32-5.17) also preferred information from their provider; and persons over 45 years of age (OR 0.29, 95 % CI 0.18-0.47) were less likely to use the Internet for health information, preferring printed media (OR 1.64, 95 % CI 1.07-2.50). Overall, use of electronic communication channels was low and varied significantly by age, with those 45 years or younger more likely to utilize electronic channels. Preferred sources of health information also varied by age as well as by health literacy and educational level. This study demonstrates that to effectively communicate key Precision Medicine concepts, curriculum development for Latino community members of Northern Manhattan will require attention to health literacy, language preference, and acculturation, and must incorporate more traditional communication channels for older community members.
Channel Width Change as a Potential Sediment Source, Minnesota River Basin
Lauer, J. W.; Echterling, C.; Lenhart, C. F.; Rausch, R.; Belmont, P.
2017-12-01
Turbidity and suspended sediment are important management considerations along the Minnesota River. The system has experienced large and relatively consistent increases in both discharge and channel width over the past century. Here we consider the potential role of channel cross-section enlargement as a sediment source. Reach-average channel width was digitized from aerial images dated between 1937 and 2015 along multiple sub-reaches of the Minnesota River and its major tributaries. Many of the sub-reaches include several actively migrating bends. The analysis shows relatively consistent increases in width over time, with average increase rates of 0.4 percent per year. Extrapolation to the river network using a regional relationship for cross-sectional area vs. drainage area indicates that large tributaries and main-stem reaches account for most of the bankfull cross-sectional volume in the basin. Larger tributaries and the main stem thus appear more important for widening-related sediment production than small tributaries. On a basin-wide basis, widening could be responsible for a gross supply of more sediment than has been gaged at several main-stem sites, indicating that there may be important sinks for both sand and silt/clay size material distributed throughout the system. Sediment storage is probably largest along the lowest-slope reaches of the main stem. While channel width appears to have adjusted relatively quickly in response to discharge and other hydraulic modifications, net storage of sediment in floodplains probably occurs sufficiently slowly that depth adjustment will lag width adjustment significantly. Detailed analysis of the lower Minnesota River using a river-segmenting approach allows for a more detailed assessment of reach-scale processes. Away from channel cutoffs, elongation of the channel at eroding bends is consistent with rates observed on other actively migrating rivers. However, the sinuosity increase has been more than compensated by
A two-channel, spectrally degenerate polarization entangled source on chip
Sansoni, Linda; Luo, Kai Hong; Eigner, Christof; Ricken, Raimund; Quiring, Viktor; Herrmann, Harald; Silberhorn, Christine
2017-12-01
Integrated optics provides the platform for the experimental implementation of highly complex and compact circuits for quantum information applications. In this context, integrated waveguide sources represent a powerful resource for the generation of quantum states of light due to their high brightness and stability. However, the confinement of the light in a single spatial mode limits the realization of multi-channel sources. Due to this challenge, one of the most widely adopted sources in quantum information processing, i.e., a source that generates spectrally indistinguishable polarization entangled photons in two different spatial modes, has not yet been realized in a fully integrated platform. Here we overcome this limitation by suitably engineering two periodically poled waveguides and an integrated polarization splitter in lithium niobate. This source produces polarization entangled states with a fidelity of F = 0.973 ± 0.003, and a test of Bell's inequality results in a violation larger than 14 standard deviations. It can work in both pulsed and continuous-wave regimes. This device represents a new step toward the implementation of fully integrated circuits for quantum information applications.
Chronos sickness: digital reality in Duncan Jones’s Source Code
Directory of Open Access Journals (Sweden)
Marcia Tiemy Morita Kawamoto
2017-01-01
Full Text Available http://dx.doi.org/10.5007/2175-8026.2017v70n1p249 The advent of digital technologies has unquestionably affected cinema. The indexical relation to, and realistic effect of, the photographed world much praised by André Bazin and Roland Barthes is just one of the affected aspects. This article discusses cinema in light of the new digital possibilities, reflecting on Steven Shaviro's consideration of "how a nonindexical realism might be possible" (63) and how, in fact, a new kind of reality, a digital one, might emerge in the science fiction film Source Code (2011) by Duncan Jones.
Effects of elevated line sources on turbulent mixing in channel flow
Nguyen, Quoc; Papavassiliou, Dimitrios
2016-11-01
Fluid mixing in turbulent flows has been studied extensively because of the importance of this phenomenon in nature and engineering. Convection, together with the motion of three-dimensional coherent structures in turbulent flow, disperses a substance more efficiently than molecular diffusion does on its own. Here, however, we present a study that explores the conditions under which turbulent mixing does not happen when different substances are released into the flow field from different vertical locations. The study uses a method that combines direct numerical simulation (DNS) with Lagrangian scalar tracking (LST) to simulate a turbulent channel flow and track the motion of passive scalars with different Schmidt numbers (Sc). The particles are released from several instantaneous line sources, ranging from the wall to the center region of the channel. The combined effects of mean velocity differences, molecular diffusion, and near-wall coherent structures lead to different concentrations of particles downstream of the sources. We then explore in detail the conditions under which the particles do not mix. Results from numerical simulations at friction Reynolds numbers of 300 and 600 are discussed for Sc ranging from 0.1 to 2,400.
Ge p-channel tunneling FETs with steep phosphorus profile source junctions
Takaguchi, Ryotaro; Matsumura, Ryo; Katoh, Takumi; Takenaka, Mitsuru; Takagi, Shinichi
2018-04-01
The solid-phase diffusion processes of three n-type dopants, i.e., phosphorus (P), arsenic (As), and antimony (Sb), from spin-on-glass (SOG) into Ge are compared. We show that P diffusion can realize both the highest impurity concentration (~7 × 10^19 cm^-3) and the steepest impurity profile (~10 nm/dec) among the three n-type dopants because the diffusion coefficient is strongly dependent on the dopant concentration. As a result, we conclude that P is the most suitable dopant for the source formation of Ge p-channel TFETs. Using this P diffusion, we fabricate Ge p-channel TFETs with high-P-concentration, steep-P-profile source junctions and demonstrate their operation. A high ON current of ~1.7 µA/µm is obtained at room temperature. However, the subthreshold swing and the ON current/OFF current ratio are degraded by a generation-recombination-related current component. At 150 K, an SSmin of ~108 mV/dec and an ON/OFF ratio of ~3.5 × 10^5 are obtained.
Vanderbauwhede, Wim; Davidson, Gavin
2017-01-01
Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source-to-source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into Ope...
International Nuclear Information System (INIS)
Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.
2004-01-01
Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)
New Source Term Model for the RESRAD-OFFSITE Code Version 3
Energy Technology Data Exchange (ETDEWEB)
Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)
2013-06-01
This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
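A minimal sketch of the three release options in code, under textbook interpretations of the report's descriptions (the function names and exact discretization below are our own; the equations actually implemented in RESRAD-OFFSITE may differ):

```python
import math

def first_order_release(inventory, leach_rate, dt):
    """Option 1, 'first order release with transport': the release over dt is
    proportional to the current inventory, with the user-specified leach
    rate (1/yr) as the proportionality constant."""
    released = inventory * (1.0 - math.exp(-leach_rate * dt))
    return released, inventory - released

def equilibrium_desorption_conc(total_conc, kd, bulk_density, theta):
    """Option 2, 'equilibrium desorption release': the distribution
    coefficient kd partitions the radionuclide between solid and aqueous
    phases; returns the resulting pore-water concentration."""
    return total_conc / (theta + kd * bulk_density)

def uniform_release(initial_inventory, duration, dt):
    """Option 3, 'uniform release': a constant fraction of the initially
    contaminated material is released during each time interval over the
    user-specified duration."""
    return initial_inventory * min(dt, duration) / duration
```

The key structural difference is that option 1 depletes the remaining inventory exponentially, while option 3 releases at a constant rate until the source is exhausted.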
A statistical–mechanical view on source coding: physical compression and data compression
International Nuclear Information System (INIS)
Merhav, Neri
2011-01-01
We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
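For concreteness, the rate-distortion function to which the free energy difference is proportional can be evaluated in closed form for the simplest memoryless source, a Bernoulli(p) source under Hamming distortion (a standard textbook result, not taken from the paper):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_distortion_bernoulli(p, d):
    """R(D) = h(p) - h(D) for a Bernoulli(p) source under Hamming
    distortion, valid for 0 <= D < min(p, 1-p); zero beyond that."""
    if d >= min(p, 1.0 - p):
        return 0.0
    return h2(p) - h2(d)
```

In the paper's analogy, the contracting force corresponds to the derivative of this curve with respect to D.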
Coded aperture detector for high precision gamma-ray burst source locations
International Nuclear Information System (INIS)
Helmken, H.; Gorenstein, P.
1977-01-01
Coded-aperture collimators in conjunction with position-sensitive detectors are very useful in the study of transient phenomena because they combine a broad field of view, high sensitivity, and the ability to locate sources precisely. Since the preceding conference, a series of computer simulations of various detector designs has been carried out with the aid of a CDC 6400. Particular emphasis was placed on the development of a unit consisting of a one-dimensional random or periodic collimator in conjunction with a two-dimensional position-sensitive xenon proportional counter. A configuration involving four of these units has been incorporated into the preliminary design study of the Transient Explorer (ATREX) satellite and is applicable to any SAS- or HEAO-type satellite mission. Results of this study, including detector response, fields of view, and source-location precision, will be presented
International Nuclear Information System (INIS)
Hermann, O.W.; Baes, C.F. III; Miller, C.W.; Begovich, C.L.; Sjoreen, A.L.
1984-10-01
The computer program, PRIMUS, reads a library of radionuclide branching fractions and half-lives and constructs a decay-chain data library and a problem-specific decay-chain data file. PRIMUS reads the decay data compiled for 496 nuclides from the Evaluated Nuclear Structure Data File (ENSDF). The ease of adding radionuclides to the input library allows the CRRIS system to further expand its comprehensive data base. The decay-chain library produced is input to the ANEMOS code. PRIMUS also produces a data set reduced to only the decay chains required in a particular problem, for input to the SUMIT, TERRA, MLSOIL, and ANDROS codes. Air concentrations and deposition rates are used in conjunction with the PRIMUS decay-chain data file. Source term data may be entered directly to PRIMUS to be read by MLSOIL, TERRA, and ANDROS. The decay-chain data prepared by PRIMUS are needed for a matrix-operator method that computes time-dependent decay products either from an initial concentration or from a constant input source. This document describes the input requirements and the output obtained. Sections are also included on methods, applications, subroutines, and sample cases. A short appendix indicates a method of utilizing PRIMUS and the associated decay subroutines from TERRA or ANDROS for applications to other decay problems. 18 references
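A matrix-operator decay computation of the kind described can be sketched in a few lines (a hypothetical two-member chain with invented decay constants; PRIMUS's own matrix method and data formats are not reproduced here):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-member chain A -> B with invented decay constants (1/day).
lam_a, lam_b = 0.1, 0.01

# Decay/ingrowth matrix M for dN/dt = M @ N: the off-diagonal term feeds
# parent decays into the daughter.
M = np.array([[-lam_a, 0.0],
              [ lam_a, -lam_b]])

N0 = np.array([1000.0, 0.0])   # initial concentrations (arbitrary units)
N_t = expm(M * 30.0) @ N0      # concentrations after 30 days
```

The same operator applied repeatedly advances the chain in fixed time steps, which is the attraction of the matrix formulation over per-nuclide Bateman formulas.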
RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations
Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy
RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node, have improved performance and scalability, and offer enhanced accuracy and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
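The core idea, lossily coding the operator in a transform domain so that matrix-vector products become sparse, can be sketched as follows (we substitute an orthonormal DCT for the wavelet transforms used in the paper and simple thresholding for quantization, so this is an illustrative stand-in rather than the authors' algorithm):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (a stand-in for the wavelet transforms
    used for matrix source coding in practice)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def matrix_source_code(A, keep=0.05):
    """Lossily 'source code' a dense operator A: move it to a transform
    domain and keep only the fraction `keep` of largest coefficients."""
    n, m = A.shape
    T1, T2 = dct_matrix(n), dct_matrix(m)
    B = T1 @ A @ T2.T                  # transform-domain coefficients
    thresh = np.quantile(np.abs(B), 1.0 - keep)
    B[np.abs(B) < thresh] = 0.0        # sparsify (the lossy step)
    return T1, B, T2

def apply_coded(T1, B, T2, x):
    """Approximate matvec: A @ x ~= T1.T @ (B @ (T2 @ x)), where B is
    sparse, so the middle product is cheap."""
    return T1.T @ (B @ (T2 @ x))
```

Because a smoothly varying impulse response concentrates its energy in few transform coefficients, the thresholded B can be very sparse while the product stays accurate.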
Roy, Debapriya; Biswas, Abhijit
2018-01-01
We develop a 2D analytical subthreshold model for nanoscale double-gate junctionless transistors (DGJLTs) with gate-source/drain underlap. The model is validated against a well-calibrated TCAD simulation deck obtained by comparison with experimental data in the literature. To analyze and control short-channel effects, we calculate the threshold voltage, drain-induced barrier lowering (DIBL), and subthreshold swing of DGJLTs using our model and compare them with the corresponding simulated values at a channel length of 20 nm, with channel thickness tSi ranging over 5-10 nm, gate-source/drain underlap (LSD) values of 0-7 nm, and source/drain doping concentrations (NSD) of 5-12 × 10^18 cm^-3. As tSi is reduced from 10 to 5 nm, DIBL drops from 42.5 to 0.42 mV/V at NSD = 10^19 cm^-3 and LSD = 5 nm, in contrast to a decrease from 71 to 4.57 mV/V without underlap. For a lower tSi, DIBL increases marginally with increasing NSD. The subthreshold swing reduces more rapidly with thinning of the channel than with increasing LSD or decreasing NSD.
International Nuclear Information System (INIS)
1988-01-01
This Code is intended as a guide to safe practices in the use of sealed and unsealed radioactive sources and in the management of patients being treated with them. It covers the procedures for the handling, preparation and use of radioactive sources, precautions to be taken for patients undergoing treatment, storage and transport of radioactive sources within a hospital or clinic, and routine testing of sealed sources.
Recent advances in coding theory for near error-free communications
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large-constraint-length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression schemes; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
International Nuclear Information System (INIS)
Park, Hong Sik; Kim, Min; Park, Seong Chan; Seo, Jong Tae; Kim, Eun Kee
2005-01-01
The SHIELD code has been used to calculate the source terms of the NSSS Auxiliary System (comprising the CVCS, SIS, and SCS) components of the OPR1000. Because the code was developed based on the SYSTEM80 design, and the APR1400 NSSS Auxiliary System design differs considerably from that of SYSTEM80 or OPR1000, the SHIELD code cannot be used directly for APR1400 radiation design; hand calculation is needed for the changed portions of the design using the results of the SHIELD code calculation. In this study, the SHIELD code is modified to incorporate the APR1400 design changes, and the source term calculation is performed for the APR1400 NSSS Auxiliary System components.
Bakosi, J.; Franzese, P.; Boybeyi, Z.
2007-11-01
Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.
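The widely used interaction-by-exchange-with-the-mean (IEM) model compared above is simple enough to state in a few lines (a sketch with invented particle values; the paper's full PDF method also carries velocity and turbulent frequency per particle, and exchanges with conditional rather than unconditional means in the IECM variant):

```python
import numpy as np

def iem_step(phi, t_mix, dt):
    """One 'interaction by exchange with the mean' (IEM) micromixing step:
    every particle's scalar value relaxes toward the ensemble mean at a
    rate set by the mixing time scale t_mix."""
    return phi + (dt / t_mix) * (phi.mean() - phi)
```

The step preserves the scalar mean while contracting the fluctuations, which is exactly the role micromixing plays in the transported-PDF framework.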
Worthmann, Brian M; Song, H C; Dowling, David R
2015-12-01
Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
Study of a positron source generated by photons from ultrarelativistic channeled particles
International Nuclear Information System (INIS)
Chehab, R.; Couchot, F.; Nyaiesh, A.R.; Richard, F.; Artru, X.
1989-03-01
Radiation by channeled electrons in germanium and silicon crystals along the axis is studied as a very promising photon source of small angular divergence for positron generation in amorphous targets. Radiation rates for different crystal lengths (from a few tenths of a mm to 10 mm) and two incident electron energies, 5 and 20 GeV, are considered, and a comparison between the two crystals is presented. The thermal behaviour of the crystal under incidence of bunches of 10^10 electrons is also examined. The corresponding positron yields for amorphous tungsten converters of 0.5 and 1 X_0 thickness are calculated for the case of a germanium photon generator. Assuming a large-acceptance optical matching system such as the adiabatic device of the SLC, the accepted positrons are evaluated, and positron yields larger than 1 e+/e- are obtained.
Analysis of Nonlinear Dispersion of a Pollutant Ejected by an External Source into a Channel Flow
Directory of Open Access Journals (Sweden)
T. Chinyoka
2010-01-01
Full Text Available This paper focuses on the transient analysis of nonlinear dispersion of a pollutant ejected by an external source into a laminar flow of an incompressible fluid in a channel. The influence of density variation with pollutant concentration is approximated according to the Boussinesq approximation, and the nonlinear governing equations of momentum and pollutant concentration are obtained. The problem is solved numerically using a semi-implicit finite difference method. Solutions are presented in graphical form and given in terms of fluid velocity, pollutant concentration, skin friction, and wall mass transfer rate for various parametric values. The model can be a useful tool for understanding pollution resulting from an improper discharge incident and for evaluating the effects of decontamination measures on the water body.
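A hedged 1D sketch of a semi-implicit treatment of this kind, with advection handled explicitly and diffusion implicitly; the grid, parameters, and boundary conditions are invented, and the paper's coupled momentum-concentration system is reduced here to a single scalar equation:

```python
import numpy as np

def semi_implicit_step(c, u, D, dx, dt):
    """One semi-implicit step for dc/dt + u dc/dx = D d2c/dx2:
    advection treated explicitly (first-order upwind, u >= 0 assumed),
    diffusion treated implicitly via a linear solve."""
    n = len(c)
    # explicit upwind advection contribution
    rhs = c.copy()
    rhs[1:] -= dt * u * (c[1:] - c[:-1]) / dx
    # implicit diffusion: (I - dt*D*L) c_new = rhs, Dirichlet ends
    r = dt * D / dx**2
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
    A[0, 0] = A[-1, -1] = 1.0
    return np.linalg.solve(A, rhs)
```

Treating the stiff diffusion term implicitly removes the severe time-step restriction it would impose in a fully explicit scheme, which is the usual motivation for the semi-implicit splitting.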
Paige, Samantha R; Krieger, Janice L; Stellefson, Michael L
2017-01-01
Disparities in online health information accessibility are partially due to varying levels of eHealth literacy and perceived trust. This study examined the relationship between eHealth literacy and perceived trust in online health communication channels and sources among diverse sociodemographic groups. A stratified sample of Black/African Americans (n = 402) and Caucasians (n = 409) completed a Web-based survey that measured eHealth literacy and perceived trustworthiness of online health communication channels and information sources. eHealth literacy positively predicted perceived trust in online health communication channels and sources, but disparities existed by sociodemographic factors. Segmenting audiences according to eHealth literacy level provides a detailed understanding of how perceived trust in discrete online health communication channels and information sources varies among diverse audiences. Black/African Americans with low eHealth literacy had high perceived trust in YouTube and Twitter, whereas Black/African Americans with high eHealth literacy had high perceived trust in online government and religious organizations. Older adults with low eHealth literacy had high perceived trust in Facebook but low perceived trust in online support groups. Researchers and practitioners should consider the sociodemographics and eHealth literacy level of an intended audience when tailoring information through trustworthy online health communication channels and information sources.
Directory of Open Access Journals (Sweden)
Oscar Karnalim
2017-01-01
Full Text Available Although there are various source code plagiarism detection approaches, only a few works focus on low-level representations for deducing similarity; most rely on lexical token sequences extracted from source code. In our view, a low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach that relies on low-level representation. As a case study, we focus on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
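Adaptive Local Alignment builds on classical local alignment; a fixed-weight, Smith-Waterman-style sketch over instruction sequences gives the flavor (the per-token adaptive weighting of Lim et al. is omitted, and the CIL opcodes in the usage example are illustrative):

```python
def local_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman-style local alignment score between two token
    sequences: the best-scoring contiguous similar region."""
    m, n = len(a), len(b)
    H = [[0] * (n + 1) for _ in range(m + 1)]
    best = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # (mis)match
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best
```

Applied to two CIL instruction streams, a high score flags a shared region even when identifiers and delimiters in the original source differ.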
Chelli, Ali; Alouini, Mohamed-Slim
2013-01-01
assume that the transmitter has no channel state information (CSI). Under such conditions, power and rate adaptation are not possible. To overcome this problem, HARQ allows the implicit adaptation of the transmission rate to the channel conditions
Energy Technology Data Exchange (ETDEWEB)
Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)
1966-09-01
This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and water-moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow rate in parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure-drop or flow-rate conditions, variable or not in time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety-rod behavior, is a one-dimensional, multi-channel code and has as its complement FLID, a one-channel, two-dimensional code. (authors)
Fundamentals of information theory and coding design
Togneri, Roberto
2003-01-01
In a clear, concise, and modular format, this book introduces the fundamental concepts and mathematics of information and coding theory. The authors emphasize how a code is designed and discuss the main properties and characteristics of different coding algorithms along with strategies for selecting the appropriate codes to meet specific requirements. They provide comprehensive coverage of source and channel coding, address arithmetic, BCH, and Reed-Solomon codes and explore some more advanced topics such as PPM compression and turbo codes. Worked examples and sets of basic and advanced exercises in each chapter reinforce the text's clear explanations of all concepts and methodologies.
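As an example of the source-coding constructions such a text covers, a compact Huffman-code builder (our own illustration, not taken from the book):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code from a sequence of symbols: repeatedly merge
    the two least-frequent subtrees, prefixing '0'/'1' to their codewords."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    # (frequency, tiebreaker, partial codebook) entries on a min-heap
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

The resulting codebook is prefix-free, with more frequent symbols receiving shorter codewords, the defining property of optimal symbol-by-symbol source codes.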
Felderhoff, Brandi Jean; Hoefer, Richard; Watson, Larry Dan
2016-01-01
The National Association of Social Workers' (NASW's) Code of Ethics urges social workers to engage in political action. However, little recent research has been conducted to examine whether social workers support this admonition and the extent to which they actually engage in politics. The authors gathered data from a survey of social workers in Austin, Texas, to address three questions. First, because keeping informed about government and political news is an important basis for action, the authors asked what sources of knowledge social workers use. Second, they asked what the respondents believe are appropriate political behaviors for other social workers and NASW. Third, they asked for self-reports regarding respondents' own political behaviors. Results indicate that social workers use the Internet and traditional media services to stay informed; expect other social workers and NASW to be active; and are, overall, more active than the general public in many types of political activities. Comparisons between respondents' expectations for others and their own behaviors reveal complex patterns. Social workers should strive for higher levels of adherence to the code's urgings on political activity. Implications for future work are discussed.
RIES - Rijnland Internet Election System: A Cursory Study of Published Source Code
Gonggrijp, Rop; Hengeveld, Willem-Jan; Hotting, Eelco; Schmidt, Sebastian; Weidemann, Frederik
The Rijnland Internet Election System (RIES) is a system designed for voting in public elections over the internet. A rather cursory scan of the source code to RIES showed a significant lack of security-awareness among the programmers which - among other things - appears to have left RIES vulnerable to near-trivial attacks. If it had not been for independent studies finding problems, RIES would have been used in the 2008 Water Board elections, possibly handling a million votes or more. While RIES was more extensively studied to find cryptographic shortcomings, our work shows that more down-to-earth secure design practices can be at least as important, and that these aspects need to be examined much earlier than right before an election.
Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding
Directory of Open Access Journals (Sweden)
Yongjian Nian
2013-01-01
Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by applying a scalar quantization strategy to the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced into the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the requirement of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve low bit rates. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
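The side-information step described above can be sketched as follows; the cube, the band-correlation model, and the quantization step below are illustrative stand-ins, not the paper's data or its exact regression model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy hyperspectral cube: 4 strongly correlated bands of 8x8 pixels
# (illustrative stand-in data, not the paper's test images)
base = rng.normal(100.0, 10.0, (8, 8))
cube = np.stack([base * (1 + 0.02 * k) + rng.normal(0.0, 1.0, (8, 8))
                 for k in range(4)])

step = 4.0                                  # scalar quantization step
quantized = np.round(cube / step)

# Side information for band k: multilinear (least-squares) regression of
# band k on the previously decoded bands 0..k-1
k = 3
X = quantized[:k].reshape(k, -1).T          # predictors: earlier bands
y = quantized[k].reshape(-1)                # current band to be coded
A = np.column_stack([X, np.ones(len(X))])   # affine regression model
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
side_info = np.round(A @ coef)

# A Slepian-Wolf style encoder then only needs enough bits to resolve
# the small residual between the band and its side information
residual = y - side_info
```

Because adjacent bands are highly correlated, the residual has far less spread than the band itself, which is what makes the distributed (decoder-side prediction) arrangement pay off.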
MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data
Key, Kerry
2016-10-01
This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data
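The bounded-parameter idea can be illustrated with a transform that is nearly the identity inside the bounds and saturates smoothly at them; the functional form and the sharpness constant below are illustrative assumptions, not the exact transformation implemented in MARE2DEM.

```python
import numpy as np

def bounded(x, a, b, lam=5.0):
    """Map an unconstrained parameter x to m in (a, b).

    Inside the bounds the transform is nearly the identity (m ~= x), so
    regularization smoothing applied in the transformed space behaves
    like smoothing in the original space; near the bounds m saturates
    smoothly. `lam` controls how sharply the bounds take effect
    (illustrative choice, not a MARE2DEM constant).
    """
    return (x
            - np.log1p(np.exp(lam * (x - b))) / lam
            + np.log1p(np.exp(lam * (a - x))) / lam)

# Unconstrained values well outside [0, 1] are pulled inside the bounds,
# while mid-range values pass through almost unchanged
x = np.linspace(-2.0, 3.0, 7)
m = bounded(x, a=0.0, b=1.0)
```

The derivative of this map tends to zero at the bounds and to one in the interior, which is exactly the "nearly the same as the original parameters within the bounds" behavior described in the abstract.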
Directory of Open Access Journals (Sweden)
Isaac Caicedo-Castro
2014-01-01
Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative search technique and inspired by the ant colony metaphor. The method aims to fill a gap in the current state of the art in recommender systems for software reuse, where prior works exhibit two problems: first, recommender systems based on these works cannot learn from the collaboration of programmers; second, the assessments carried out on these systems show low precision and recall, and in some of them these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method that addresses these problems.
Energy Technology Data Exchange (ETDEWEB)
Bartzis, J G; Megaritou, A; Belessiotis, V
1987-09-01
THEAP-I is a computer code developed at NRCPS "DEMOCRITUS" to contribute to the safety analysis of open-pool research reactors. THEAP-I is designed for three-dimensional, transient thermal/hydraulic analysis of a thermally interacting channel bundle totally immersed in water or air, such as the reactor core. The present report gives the mathematical and physical models and the solution methods, as well as the code description and the input data. A sample problem is also included, referring to an analysis of the Greek Research Reactor under a hypothetical severe loss-of-coolant accident.
Yang, Qi; Al Amin, Abdullah; Chen, Xi; Ma, Yiran; Chen, Simin; Shieh, William
2010-08-02
High-order modulation formats and advanced error correcting codes (ECC) are two promising techniques for improving the performance of ultrahigh-speed optical transport networks. In this paper, we present record receiver sensitivity for 107 Gb/s CO-OFDM transmission via constellation expansion to 16-QAM and rate-1/2 LDPC coding. We also show single-channel transmission of a 428-Gb/s CO-OFDM signal over 960 km of standard single-mode fiber (SSMF) without Raman amplification.
Zhao, Hongbo; Chen, Yuying; Feng, Wenquan; Zhuang, Chen
2018-05-25
Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference and the short time slot of each satellite, which brings difficulties to the acquisition stage. The inter-satellite links in both the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) adopt the long-code spread-spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as the extended replica folding acquisition search technique (XFAST) and direct averaging are largely restricted because of code Doppler and the additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and the dual-channel method have been proposed to achieve long code acquisition in low SNR and high dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased by finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than the folding methods XFAST and DF-XFAST. Moreover, with the advantage of higher detection
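The folding idea underlying XFAST-style acquisition can be sketched as follows; the code length, block length, and noise-free received block are toy assumptions, and the sketch shows a single folded correlation rather than the full dual-channel DC-XFAST verification.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 4096, 512      # toy long-code length and incoming block length
true_delay = 1234     # unknown code phase to be recovered

code = rng.choice([-1.0, 1.0], size=N)     # stand-in PRN long code
incoming = np.roll(code, -true_delay)[:L]  # received noise-free block

# Folding: sum the local long code over L-chip segments, so that a single
# L-point circular correlation tests all N code phases at once
folded = code.reshape(N // L, L).sum(axis=0)

# Circular correlation via FFT; the peak index gives the phase modulo L,
# leaving only N/L candidate full phases to disambiguate afterwards
corr = np.fft.ifft(np.conj(np.fft.fft(incoming)) * np.fft.fft(folded)).real
phase_mod_L = int(np.argmax(corr))
```

The aligned segment contributes a peak of height about L, while the other N/L - 1 folded segments only add noise of order sqrt(N); this is the "additional SNR loss caused by replica folding" that the abstract mentions.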
Directory of Open Access Journals (Sweden)
Hongbo Zhao
2018-05-01
Full Text Available Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference and the short time slot of each satellite, which brings difficulties to the acquisition stage. The inter-satellite links in both the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) adopt the long-code spread-spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as the extended replica folding acquisition search technique (XFAST) and direct averaging are largely restricted because of code Doppler and the additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and the dual-channel method have been proposed to achieve long code acquisition in low SNR and high dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased by finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than the folding methods XFAST and DF-XFAST. Moreover, with the advantage of higher
Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code
Faghihi, F.; Mehdizadeh, S.; Hadad, K.
Neutron fluence rates of a low-intensity Pu-Be source are measured by neutron activation analysis (NAA) of 197Au foils. The neutron fluence rate distribution versus energy is also calculated using the MCNP-4B code based on the ENDF/B-V library. The simulation, together with the experimental measurements, serves to establish confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated with MCNP-4B. The fast and thermal neutron fluence rates obtained by the NAA method and by the MCNP code are compared.
International Nuclear Information System (INIS)
Khattab, K.; Dawahra, S.
2011-01-01
Calculations of the fuel burnup and radionuclide inventory in the Syrian Miniature Neutron Source Reactor (MNSR) after 10 years (the reactor core expected life) of the reactor operation time are presented in this paper using the GETERA code. The code is used to calculate the fuel group constants and the infinite multiplication factor versus the reactor operating time for 10, 20, and 30 kW operating power levels. The amounts of uranium burnup and plutonium produced in the reactor core, the concentrations of the most important fission product and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core were calculated using the GETERA code as well. It is found that the GETERA code is preferable to the WIMSD4 code for fuel burnup calculations in the MNSR, since it is newer, has a larger isotope library, and is more accurate. (author)
Directory of Open Access Journals (Sweden)
CARVALHO, J. S. C.
2008-12-01
Full Text Available During software development, one of the most visible risks and perhaps the biggest implementation obstacle relates to time management. All delivery deadlines for software versions must be met, but this is not always possible, sometimes because of delays in coding. This paper presents a metamodel for software implementation, which gives rise to a development tool for automatic generation of source code, in order to make any development pattern transparent to the programmer, significantly reducing the time spent coding the artifacts that make up the software.
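A template-driven generator of the general kind described can be sketched as below; the metamodel layout and the generated class are hypothetical illustrations, not the paper's metamodel.

```python
# Minimal template-driven generator: a "metamodel" entry (class name plus
# typed fields) is expanded into Python source for a data class.
# Hypothetical sketch, not the paper's metamodel or tool.
MODEL = {"name": "Customer", "fields": [("id", "int"), ("name", "str")]}

def generate_class(model):
    """Expand one metamodel entry into class source code."""
    lines = [f"class {model['name']}:"]
    args = ", ".join(f"{f}: {t}" for f, t in model["fields"])
    lines.append(f"    def __init__(self, {args}):")
    for f, _ in model["fields"]:
        lines.append(f"        self.{f} = {f}")
    return "\n".join(lines)

source = generate_class(MODEL)
exec(source)                     # materialize the generated class (sketch only)
customer = Customer(1, "Ada")    # noqa: F821 -- defined by the exec above
```

The point of the pattern is that the repetitive artifact (here, the constructor boilerplate) is written once in the generator, so the programmer edits only the model.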
Uncertainty analysis methods for quantification of source terms using a large computer code
International Nuclear Information System (INIS)
Han, Seok Jung
1997-02-01
Quantification of uncertainties in source term estimations by a large computer code, such as MELCOR or MAAP, is an essential part of current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM), based on input determined from a statistical design, and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at the Young-Gwang nuclear power plant, using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended as a principal tool for overall uncertainty analysis in source term quantification, while the LHS is used in the calculation of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by the RSM. Verification of the response surface model for sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed that utilizes the metric distance between cumulative distribution functions (cdfs). The measure has been evaluated for three cases in order to assess its characteristics: in the first two cases the distributions are known analytical distributions, while in the third the distribution is unknown. The first case uses symmetric analytical distributions; the second consists of two asymmetric distributions with nonzero skewness
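The LHS-plus-SRC screening step can be sketched as follows; the toy response function stands in for a MAAP/MELCOR source term calculation, and the sample size and coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3   # number of samples, number of input parameters

# Latin hypercube sample on [0,1]^k: each input gets exactly one point
# in each of the n equal-probability strata
lhs = (rng.permuted(np.tile(np.arange(n), (k, 1)), axis=1).T
       + rng.random((n, k))) / n

# Toy "code output" with one dominant input (an illustrative stand-in
# for a CsI release fraction computed by the severe accident code)
y = 5.0 * lhs[:, 0] + 1.0 * lhs[:, 1] + 0.1 * lhs[:, 2] \
    + rng.normal(0.0, 0.1, n)

# Standardized regression coefficients: regress standardized output on
# standardized inputs; |SRC| ranks the inputs by importance
X = (lhs - lhs.mean(axis=0)) / lhs.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(n)]), ys, rcond=None)
ranking = np.argsort(-np.abs(src[:k]))
```

With a rank transform applied to `X` and `y` before the regression, the same code yields the SRRCs mentioned in the abstract, which are more robust when the response is monotone but nonlinear.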
A CMOS Image Sensor With In-Pixel Buried-Channel Source Follower and Optimized Row Selector
Chen, Y.; Wang, X.; Mierop, A.J.; Theuwissen, A.J.P.
2009-01-01
This paper presents a CMOS image sensor with pinned-photodiode 4T active pixels which use in-pixel buried-channel source followers (SFs) and optimized row selectors. The test sensor has been fabricated in a 0.18-μm CMOS process. The sensor characterization was carried out successfully, and the
Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.
2016-01-01
Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross‐sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross‐sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code — MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low‐energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes. PACS number(s): 87.56.bg PMID:27074460
Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S
2016-03-08
Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
DEFF Research Database (Denmark)
Vigeant, Michelle C.; Wang, Lily M.; Rindel, Jens Holger
2005-01-01
in realism and source width. Auralizations were made using three different types of musical instruments: woodwinds (flute), brass (trombone), and strings (violin). Subjects were asked to rate each musical track on a seven-point scale for the degree of realism and source width. An analysis of variance (ANOVA) was carried out to determine the differences between the numbers of channels and the effect of instrument. A second test was conducted to assess the degree of difficulty in detecting source orientation (facing the audience or facing the stage wall) depending on the number of channels (one, four, or thirteen) and the amount of absorption in the room. [Work supported by the National Science Foundation.]
Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe
2007-02-01
We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes generated by spreading in time-coherent data pulses can result from multiple reflections in the interferometers that can superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of complex amplitudes of the combined chip as the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes.
Kim, Do-Bin; Kwon, Dae Woong; Kim, Seunghyun; Lee, Sang-Ho; Park, Byung-Gook
2018-02-01
To obtain high channel boosting potential and reduce program disturbance in channel-stacked NAND flash memory with layer selection by multilevel (LSM) operation, a new program scheme using a boosted common source line (CSL) is proposed. The proposed scheme is achieved by applying a proper bias to each layer through its own CSL. Technology computer-aided design (TCAD) simulations are performed to verify the validity of the new method in LSM. The TCAD simulations reveal that the program disturbance characteristics are effectively improved by the proposed scheme.
DEFF Research Database (Denmark)
Wulff-Jensen, Andreas; Ruder, Kevin Vignola; Triantafyllou, Evangelia
2018-01-01
As shown by several studies, the readability of source code for programmers is influenced by its structural and textual features. In order to assess the importance of these features, we conducted an eye-tracking experiment with programming students. To assess the readability and comprehensibility of...
International Nuclear Information System (INIS)
Schwinkendorf, K.N.
1996-01-01
A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross-section library used by ORIGEN2.
3D reconstruction of the source and scale of buried young flood channels on Mars.
Morgan, Gareth A; Campbell, Bruce A; Carter, Lynn M; Plaut, Jeffrey J; Phillips, Roger J
2013-05-03
Outflow channels on Mars are interpreted as the product of gigantic floods due to the catastrophic eruption of groundwater that may also have initiated episodes of climate change. Marte Vallis, the largest of the young martian outflow channels, implies Mars hydrologic activity during a period otherwise considered to be cold and dry. Using data from the Shallow Radar sounder on the Mars Reconnaissance Orbiter, we present a three-dimensional (3D) reconstruction of buried channels on Mars and provide estimates of paleohydrologic parameters. Our work shows that Cerberus Fossae provided the waters that carved Marte Vallis, and it extended an additional 180 kilometers to the east before the emplacement of the younger lava flows. We identified two stages of channel incision and determined that channel depths were more than twice those of previous estimates.
International Nuclear Information System (INIS)
Liu Jinzhong; Han Zhanwen; Zhang Fenghui; Zhang Yu
2010-01-01
Close double white dwarfs (CDWDs) are believed to dominate the Galactic gravitational wave (GW) radiation in the frequency range 10⁻⁴ to 0.1 Hz, which will be detected by the Laser Interferometer Space Antenna (LISA) detector. The aim of this detector is to detect GW radiation from astrophysical sources in the universe and to help improve our understanding of the origin of the sources and their physical properties (masses and orbital periods). In this paper, we study the probable candidate sources in the Galaxy for the LISA detector: CDWDs. We use the binary population synthesis approach of CDWDs together with the latest findings of the synthesis models from Han, who proposed three evolutionary channels: (1) stable Roche lobe overflow plus common envelope (RLOF+CE), (2) CE+CE, and (3) exposed core plus CE. As a result, we systematically investigate the detailed physical properties (the distributions of masses, orbital periods, and chirp masses) of the CDWD sources for the LISA detector, examine the importance of the three evolutionary channels for the formation of CDWDs, and carry out Monte Carlo simulations. Our results show that RLOF+CE and CE+CE are the main evolutionary scenarios leading to the formation of CDWDs. For the LISA detectable sources, we also explore and discuss the importance of these three evolutionary channels. Using the calculated birth rate, we compare our results to the LISA sensitivity curve and the foreground noise floor of CDWDs. We find that our estimate for the number of CDWD sources that can be detected by the LISA detector is greater than 10,000. We also find that the detectable CDWDs are produced via the CE+CE channel and we analyze the fraction of the detectable CDWDs that are double helium (He+He), or carbon-oxygen plus helium (CO+He) WD binary systems.
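Two of the quantities the survey rests on, the chirp mass and the dominant GW frequency, can be computed directly; the binary parameters below are illustrative choices, not values drawn from the paper's population synthesis.

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1*m2)^(3/5) / (m1+m2)^(1/5), same units as inputs."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# A representative CO+He double white dwarf (illustrative values):
# 0.6 + 0.3 solar masses in a 20-minute circular orbit
mc = chirp_mass(0.6, 0.3)          # in solar masses
f_gw = 2.0 / (20 * 60)             # dominant GW frequency = 2 / P_orb, in Hz
in_lisa_band = 1e-4 < f_gw < 0.1   # the band quoted in the abstract
```

For a circular binary the dominant GW emission is at twice the orbital frequency, so a 20-minute orbit radiates near 1.7 mHz, comfortably inside the LISA band.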
A Novel Code System for Revealing Sources of Students' Difficulties with Stoichiometry
Gulacar, Ozcan; Overton, Tina L.; Bowman, Charles R.; Fynewever, Herb
2013-01-01
A coding scheme is presented and used to evaluate solutions of seventeen students working on twenty five stoichiometry problems in a think-aloud protocol. The stoichiometry problems are evaluated as a series of sub-problems (e.g., empirical formulas, mass percent, or balancing chemical equations), and the coding scheme was used to categorize each…
VULCAN: An Open-source, Validated Chemical Kinetics Python Code for Exoplanetary Atmospheres
Energy Technology Data Exchange (ETDEWEB)
Tsai, Shang-Min; Grosheintz, Luc; Kitzmann, Daniel; Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Lyons, James R. [Arizona State University, School of Earth and Space Exploration, Bateman Physical Sciences, Tempe, AZ 85287-1404 (United States); Rimmer, Paul B., E-mail: shang-min.tsai@space.unibe.ch, E-mail: kevin.heng@csh.unibe.ch, E-mail: jimlyons@asu.edu [University of St. Andrews, School of Physics and Astronomy, St. Andrews, KY16 9SS (United Kingdom)
2017-02-01
We present an open-source and validated chemical kinetics code for studying hot exoplanetary atmospheres, which we name VULCAN. It is constructed for gaseous chemistry from 500 to 2500 K, using a reduced C–H–O chemical network with about 300 reactions. It uses eddy diffusion to mimic atmospheric dynamics and excludes photochemistry. We have provided a full description of the rate coefficients and thermodynamic data used. We validate VULCAN by reproducing chemical equilibrium and by comparing its output versus the disequilibrium-chemistry calculations of Moses et al. and Rimmer and Helling. It reproduces the models of HD 189733b and HD 209458b by Moses et al., which employ a network with nearly 1600 reactions. We also use VULCAN to examine the theoretical trends produced when the temperature–pressure profile and carbon-to-oxygen ratio are varied. Assisted by a sensitivity test designed to identify the key reactions responsible for producing a specific molecule, we revisit the quenching approximation and find that it is accurate for methane but breaks down for acetylene, because the disequilibrium abundance of acetylene is not directly determined by transport-induced quenching, but is rather indirectly controlled by the disequilibrium abundance of methane. Therefore we suggest that the quenching approximation should be used with caution and must always be checked against a chemical kinetics calculation. A one-dimensional model atmosphere with 100 layers, computed using VULCAN, typically takes several minutes to complete. VULCAN is part of the Exoclimes Simulation Platform (ESP; exoclime.net) and publicly available at https://github.com/exoclime/VULCAN.
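The quenching approximation the authors caution about amounts to a timescale comparison, sketched below with illustrative numbers (the scale height, eddy diffusivity, and chemical timescale are assumptions, not VULCAN outputs).

```python
# Transport-induced quenching test: a species quenches where the vertical
# mixing timescale t_mix = H^2 / K_zz becomes shorter than its chemical
# relaxation timescale t_chem. All numbers are illustrative assumptions.
H = 200e3          # atmospheric scale height [m]
K_zz = 1e6         # eddy diffusion coefficient [m^2/s]
t_chem_ch4 = 1e8   # CH4 chemical timescale at the candidate quench level [s]

t_mix = H ** 2 / K_zz
quenched = t_mix < t_chem_ch4   # True: CH4 abundance frozen in by mixing
```

The abstract's warning is precisely that this test can mislead for species like acetylene, whose disequilibrium abundance is controlled indirectly through methane rather than by its own quench level, so the shortcut must be checked against a full kinetics run.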
Directory of Open Access Journals (Sweden)
Ahmed W. Mustava
2013-04-01
Full Text Available The effect of semi-circular cylinders in a two-dimensional channel on heat transfer by forced convection from two heat sources at constant temperature has been studied numerically. The channel contains two heat sources: one on the upper surface of the channel and the other on the lower surface. There is a semi-circular cylinder under the source on the upper surface and a semi-circular cylinder above the source on the lower surface. The location of the second heat source with its semi-cylinder is varied while the first source with its semi-cylinder is kept at the same location. The flow and temperature fields are studied numerically for different values of the Reynolds number and for different spacings between the centers of the semi-cylinders. The laminar flow field is analyzed numerically by solving the steady forms of the two-dimensional incompressible Navier-Stokes and energy equations. The Cartesian velocity components and pressure on a collocated (non-staggered) grid are used as dependent variables in the momentum equations, which are discretized by the finite volume method; body-fitted coordinates are used to represent the complex channel geometry accurately, and a grid generation technique based on elliptic partial differential equations is employed. The SIMPLE algorithm is used to adjust the velocity field to satisfy the conservation of mass. The range of Reynolds number is Re = 100-800, the spacing between the semi-cylinders ranges over 1-4, and the Prandtl number is 0.7. The results show that increasing the spacing between the semi-cylinders increases the average Nusselt number of the first heat source for all Reynolds numbers. The results also show that, among the cases studied, heat transfer is best enhanced when the second heat source and its semi-cylinder are located at a distance S = 1.5 from the first semi-cylinder and the Reynolds number is at least 400 (Re ≥ 400) because of the
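A full collocated-grid SIMPLE solver is beyond a short example, but the finite-volume discretization style can be shown on a 1-D steady convection-diffusion problem with first-order upwind convection; the grid size and coefficients below are illustrative, and this is a toy analogue, not the authors' 2-D solver.

```python
import numpy as np

# 1-D steady convection-diffusion, d(u*phi)/dx = d/dx(Gamma * dphi/dx),
# on [0, 1] with phi(0) = 0 and phi(1) = 1; uniform grid, finite volumes,
# first-order upwind convection (Peclet number u/Gamma = 10 here).
n, u, gamma = 50, 1.0, 0.1
dx = 1.0 / n
F, D = u, gamma / dx                  # face convection / diffusion strengths

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    # Boundary faces sit half a cell from the node, hence the factor 2 on D
    aW = D + max(F, 0.0) if i > 0 else 2 * D + max(F, 0.0)
    aE = D + max(-F, 0.0) if i < n - 1 else 2 * D + max(-F, 0.0)
    A[i, i] = aW + aE
    if i > 0:
        A[i, i - 1] = -aW
    else:
        b[i] += aW * 0.0              # Dirichlet value phi(0) = 0
    if i < n - 1:
        A[i, i + 1] = -aE
    else:
        b[i] += aE * 1.0              # Dirichlet value phi(1) = 1

phi = np.linalg.solve(A, b)
```

The upwind coefficients keep the system diagonally dominant, so the discrete solution is monotone and bounded by the wall values, mirroring the boundedness that the full 2-D scheme relies on.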
International Nuclear Information System (INIS)
2005-01-01
In operative paragraph 4 of its resolution GC(47)/RES/7.B, the General Conference, having welcomed the approval by the Board of Governors of the revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources (GC(47)/9), and while recognizing that the Code is not a legally binding instrument, urged each State to write to the Director General that it fully supports and endorses the IAEA's efforts to enhance the safety and security of radioactive sources and is working toward following the guidance contained in the IAEA Code of Conduct. In operative paragraph 5, the Director General was requested to compile, maintain and publish a list of States that have made such a political commitment. The General Conference, in operative paragraph 6, recognized that this procedure 'is an exceptional one, having no legal force and only intended for information, and therefore does not constitute a precedent applicable to other Codes of Conduct of the Agency or of other bodies belonging to the United Nations system'. In operative paragraph 7 of resolution GC(48)/RES/10.D, the General Conference welcomed the fact that more than 60 States had made political commitments with respect to the Code in line with resolution GC(47)/RES/7.B and encouraged other States to do so. In operative paragraph 8 of resolution GC(48)/RES/10.D, the General Conference further welcomed the approval by the Board of Governors of the Supplementary Guidance on the Import and Export of Radioactive Sources (GC(48)/13), endorsed this Guidance while recognizing that it is not legally binding, noted that more than 30 countries had made clear their intention to work towards effective import and export controls by 31 December 2005, and encouraged States to act in accordance with the Guidance on a harmonized basis and to notify the Director General of their intention to do so as supplementary information to the Code of Conduct, recalling operative paragraph 6 of resolution GC(47)/RES/7.B. 4. The
International Nuclear Information System (INIS)
Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.
1986-11-01
This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
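The straight-line Gaussian plume model at the core of ANEMOS has a standard closed form; a minimal sketch (function and argument names are illustrative, and ANEMOS itself adds deposition, plume depletion and sector averaging on top of this):

```python
import math

def plume_concentration(Q, u, sigma_y, sigma_z, y, z, H):
    """Straight-line Gaussian plume air concentration with ground reflection.
    Q: release rate, u: wind speed, sigma_y/sigma_z: dispersion parameters
    at the downwind distance of interest, y: crosswind offset,
    z: receptor height, H: effective release height."""
    lateral = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2)))  # image source term
    return Q * lateral * vertical / (2.0 * math.pi * u * sigma_y * sigma_z)
```

On the centerline of a ground-level release the expression collapses to Q/(pi * u * sigma_y * sigma_z), a convenient check of the reflection term.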
Directory of Open Access Journals (Sweden)
Itamar Iliuk
2016-01-01
Full Text Available Thermal-hydraulic analysis of plate-type fuel is of great importance to the establishment of safety criteria, as well as to the licensing of the future nuclear reactor intended to propel the Brazilian nuclear submarine. In this work, an analysis of a single plate-type fuel element surrounded by two water channels was performed using the RELAP5 thermal-hydraulic code. For the simulations, a plate-type fuel with a meat of uranium dioxide sandwiched between two Zircaloy-4 plates was proposed. A partial loss-of-flow accident was simulated to show the behavior of the model under this type of accident. The results show that the critical heat flux was detected in the central region along the axial direction of the plate when the right water channel was blocked.
Double-Layer Low-Density Parity-Check Codes over Multiple-Input Multiple-Output Channels
Directory of Open Access Journals (Sweden)
Yun Mao
2012-01-01
Full Text Available We introduce a double-layer code based on the combination of a low-density parity-check (LDPC) code with the multiple-input multiple-output (MIMO) system, where the decoding can be done in both inner-iteration and outer-iteration manners. The present code, called low-density MIMO code (LDMC), has a double-layer structure: one layer defines subcodes that are embedded in each transmission vector, and the other glues these subcodes together. It simultaneously supports inner iterations inside the LDPC decoder and outer iterations between detectors and decoders. It can also achieve the desired design rates due to the full rank of the deployed parity-check matrix. Simulations show that the LDMC performs favorably over MIMO systems.
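The inner-iteration idea can be illustrated with the simplest hard-decision LDPC decoding algorithm, Gallager's bit flipping; this is a toy sketch over a (7,4) Hamming parity-check matrix, not the LDMC construction itself:

```python
def bit_flip_decode(H, r, max_iters=20):
    """Gallager hard-decision bit-flipping decoding.
    H: parity-check matrix as rows of 0/1; r: received hard-decision bits."""
    c = list(r)
    n = len(c)
    for _ in range(max_iters):
        syndrome = [sum(row[j] * c[j] for j in range(n)) % 2 for row in H]
        if not any(syndrome):
            break  # all parity checks satisfied
        # count, for each bit, how many failed checks it participates in
        fails = [sum(s for row, s in zip(H, syndrome) if row[j]) for j in range(n)]
        worst = max(fails)
        c = [bit ^ 1 if f == worst else bit for bit, f in zip(c, fails)]
    return c
```

A single bit error is corrected in one pass, since the flipped bit participates in more failed checks than any other.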
Zhou, Xiaolin; Zheng, Xiaowei; Zhang, Rong; Hanzo, Lajos
2013-07-01
In this paper, we design a novel Poisson photon-counting based iterative successive interference cancellation (SIC) scheme for transmission over free-space optical (FSO) channels in the presence of both multiple access interference (MAI) as well as Gamma-Gamma atmospheric turbulence fading, shot-noise and background light. Our simulation results demonstrate that the proposed scheme exhibits a strong MAI suppression capability. Importantly, an order of magnitude of BER improvements may be achieved compared to the conventional chip-level optical code-division multiple-access (OCDMA) photon-counting detector.
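Per user and per bit, photon-counting detection reduces to a Poisson likelihood-ratio test on the chip counts; a minimal single-user sketch that ignores MAI, turbulence fading and the SIC stage (all names and rates are illustrative):

```python
import math

def poisson_ml_bit(counts, code, lam_signal, lam_background):
    """Photon-counting ML decision for one user's bit: log-likelihood ratio
    of the observed chip counts under bit=1 (signal + background photons on
    the user's marked chips) versus bit=0 (background only)."""
    llr = 0.0
    ratio = math.log((lam_signal + lam_background) / lam_background)
    for k, chip in zip(counts, code):
        if chip:  # only the user's marked chips discriminate the two hypotheses
            llr += k * ratio - lam_signal
    return 1 if llr > 0.0 else 0
```

With strong counts on the marked chips the detector declares a one; background-level counts yield a zero.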
International Nuclear Information System (INIS)
Avramova, M.; Ivanov, K.; Arenas, C.
2013-01-01
The principles that support risk-informed regulation are to be considered in an integrated decision-making process. Thus, any evaluation of licensing issues supported by a safety analysis should take into account both deterministic and probabilistic aspects of the problem. The deterministic aspects are addressed using Best Estimate code calculations together with their associated uncertainties, i.e. Best Estimate Plus Uncertainty (BEPU) calculations. In recent years there has been an increasing demand from nuclear research, industry, safety and regulation for best estimate predictions to be provided with their confidence bounds. This also applies to sub-channel thermal-hydraulic codes, which are used to evaluate local safety parameters. The paper discusses the extension of BEPU methods to sub-channel thermal-hydraulic codes using the example of the Pennsylvania State University (PSU) version of COBRA-TF (CTF). The use of coupled codes supplemented with uncertainty analysis makes it possible to avoid unnecessary penalties due to incoherent approximations in traditional decoupled calculations, and to obtain a more accurate evaluation of margins with respect to licensing limits. This becomes important for licensing power upgrades, improved fuel assembly and control rod designs, higher burn-up, and other issues related to operating LWRs as well as to the new Generation 3+ designs now being licensed (ESBWR, AP-1000, EPR-1600, etc.). The paper presents the application of Generalized Perturbation Theory (GPT) to generate uncertainties associated with the few-group assembly homogenized neutron cross-section data used as input in coupled reactor core calculations. This is followed by a discussion of uncertainty propagation methodologies being implemented by PSU in cooperation with the Technical University of Catalonia (UPC) for reactor core calculations and for comprehensive multi-physics simulations. (authors)
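GPT-based uncertainty propagation typically ends in the first-order "sandwich rule", combining a sensitivity vector with a covariance matrix; a minimal plain-Python sketch (the symbols S and C follow the usual notation; this is a generic illustration, not the PSU/UPC implementation):

```python
def sandwich_variance(S, C):
    """First-order uncertainty propagation ('sandwich rule'):
    var(R) = S^T C S, with S the sensitivity vector (e.g. from GPT)
    and C the input (e.g. cross-section) covariance matrix."""
    n = len(S)
    return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))
```

With an identity covariance the response variance is simply the squared norm of the sensitivities.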
Analysis of source term aspects in the experiment Phebus FPT1 with the MELCOR and CFX codes
Energy Technology Data Exchange (ETDEWEB)
Martin-Fuertes, F. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)]. E-mail: francisco.martinfuertes@upm.es; Barbero, R. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Martin-Valdepenas, J.M. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Jimenez, M.A. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)
2007-03-15
Several aspects related to the source term in the Phebus FPT1 experiment have been analyzed with the help of the MELCOR 1.8.5 and CFX 5.7 codes. Integral aspects covering circuit thermal hydraulics, fission product and structural material release, and vapour and aerosol retention in the circuit and containment were studied with MELCOR, and the strong and weak points after comparison to experimental results are stated. Then, sensitivity calculations dealing with chemical speciation upon release, vertical line aerosol deposition and steam generator aerosol deposition were performed. Finally, detailed calculations concerning aerosol deposition in the steam generator tube are presented. They were obtained by means of an in-house code application, named COCOA, as well as with the CFX computational fluid dynamics code, in which several models for aerosol deposition were implemented and tested, while the models themselves are discussed.
Directory of Open Access Journals (Sweden)
Eva Stopková
2016-12-01
Full Text Available This paper deals with a tool that enables import of the coded data in a single text file to more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and with optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell el-Retaba (Egypt). Advantages of the tool (e.g. significant optimization of surveying work) and its limits (demands on keeping conventions for the points' names coding) are discussed here as well. Possibilities of future development are suggested (e.g. generalization of points' names coding or more complex attribute table creation).
International Nuclear Information System (INIS)
Suen, C.J.; Sullivan, T.M.
1990-01-01
This paper discusses the development of a source term model for low-level waste shallow land burial facilities and separates the problem into four individual compartments. These are water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and the last compartments, we adopted the existing codes, FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines -- BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resultant code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab
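The four-compartment structure of BLT can be caricatured as a chain of first-order transfers between inventories; a deliberately simplified explicit-Euler sketch (the lumped, zero-dimensional treatment and the rate constants are illustrative assumptions, whereas the real code solves spatial finite-element problems):

```python
def blt_step(state, dt, breach_rate, leach_rate, transport_rate):
    """One explicit Euler step of a lumped breach -> leach -> transport chain.
    state = (intact, exposed, solute, released) inventories."""
    intact, exposed, solute, released = state
    d_breach = breach_rate * intact * dt        # containers failing
    d_leach = leach_rate * exposed * dt         # waste form dissolving
    d_transport = transport_rate * solute * dt  # solute migrating away
    return (intact - d_breach,
            exposed + d_breach - d_leach,
            solute + d_leach - d_transport,
            released + d_transport)
```

Total inventory is conserved across the chain, which is the natural verification test for such a compartment scheme.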
International Nuclear Information System (INIS)
Gaufridy de Dortan, F. de
2006-01-01
Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. But in some cases, configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow for a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on HULLAC precise energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files even on personal computers in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)
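The collisional-radiative population balance can be illustrated on a two-level system, where steady state equates electron-impact excitation against collisional de-excitation plus spontaneous emission; a textbook sketch, far simpler than the HULLAC-based calculation described above:

```python
def two_level_population_ratio(n_e, C12, C21, A21):
    """Steady-state n2/n1 for a two-level collisional-radiative balance:
    n1 * n_e * C12 = n2 * (n_e * C21 + A21).
    n_e: electron density, C12/C21: excitation/de-excitation rate
    coefficients, A21: spontaneous emission rate."""
    return n_e * C12 / (n_e * C21 + A21)
```

In the high-density limit the ratio tends to C12/C21 (the LTE-like collisional limit), while at low density it is suppressed by radiative decay.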
International Nuclear Information System (INIS)
Khelifi, R.; Idiri, Z.; Bode, P.
2002-01-01
The CITATION code, based on neutron diffusion theory, was used for flux calculations inside voluminous samples in prompt gamma activation analysis with an isotopic neutron source (Am-Be). The code uses specific parameters related to the source energy spectrum and the irradiation system materials (shielding, reflector). The flux distribution (thermal and fast) was calculated in three-dimensional geometry for the system: air, polyethylene and a cuboidal water sample (50x50x50 cm). The thermal flux was calculated at a series of points inside the sample. The results agreed reasonably well with observed values. The maximum thermal flux was observed at a depth of 3.2 cm, while CITATION gave 3.7 cm. Beyond a depth of 7.2 cm, the thermal-to-fast flux ratio increases by up to a factor of two, which allows us to optimise the position of the detection system for in-situ PGAA.
International Nuclear Information System (INIS)
Maddison, G.P.; Reiter, D.
1994-02-01
Predictive simulations of tokamak edge plasmas require the most authentic description of neutral particle recycling sources, not merely the most expedient numerically. Employing a prototypical ITER divertor arrangement under conditions of high recycling, trial calculations with the 'B2' steady-state edge plasma transport code, plus varying approximations of recycling, reveal marked sensitivity of both the results and their convergence behaviour to the details of the sources incorporated. Comprehensive EIRENE Monte Carlo resolution of recycling is implemented by full and so-called 'shot' intermediate cycles between the plasma fluid and statistical neutral particle models. As is generally the case for coupled differencing and stochastic procedures, though, overall convergence properties become more difficult to assess. A pragmatic criterion for the 'B2'/EIRENE code system is proposed to determine its success, proceeding from a stricter condition previously identified for one particular analytic approximation of recycling in 'B2'. Certain procedures are also inferred to potentially improve convergence further. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Jeong, J. J.; Chung, B. D.; Lee, W.J
2005-02-01
The subchannel analysis capability of the MARS 3D module has been improved. In particular, the turbulent mixing and void drift models for flow mixing phenomena in rod bundles have been assessed using well-known rod bundle test data. The subchannel analysis feature was then combined with the existing coupled 'system thermal-hydraulics (T/H) and 3D reactor kinetics' calculation capability of MARS. Together, these features provide a coupled 'system T/H, 3D reactor kinetics, and hot channel' analysis capability and thus allow realistic simulations of hot channel behavior as well as global system T/H behavior. In this report, the MARS code features for the coupled analysis capability are described first, together with the relevant code modifications. A coupled analysis of the Main Steam Line Break (MSLB) is then carried out for demonstration. The results of the coupled calculations are very reasonable and realistic, and show that these methods can be used to reduce the over-conservatism in conventional safety analysis.
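The turbulent mixing model assessed above is conventionally closed as an exchange crossflow proportional to a mixing coefficient; a minimal sketch of the equal-mass-exchange form, the standard closure in subchannel codes (symbol names are illustrative, not MARS's internal variables):

```python
def mixing_enthalpy_exchange(beta, gap, G_i, G_j, h_i, h_j):
    """Turbulent-mixing energy exchange per unit length between adjacent
    subchannels i and j: crossflow w' = beta * s * Gbar (equal-mass-exchange
    closure), energy exchange q'_mix = w' * (h_i - h_j).
    beta: mixing coefficient, gap: gap width s [m],
    G_i/G_j: axial mass fluxes [kg/(m^2 s)], h_i/h_j: enthalpies [J/kg]."""
    w_prime = beta * gap * 0.5 * (G_i + G_j)  # mixing crossflow, kg/(m*s)
    return w_prime * (h_i - h_j)              # W/m
```

The exchange vanishes when the channels have equal enthalpy and drives energy from the hotter to the colder subchannel otherwise.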
Neben, Nicole; Lenarz, Thomas; Schuessler, Mark; Harpel, Theo; Buechner, Andreas
2013-05-01
Results of speech recognition in noise tests using a new research coding strategy designed to introduce the virtual channel effect provided no advantage over MP3000™. Although statistically significantly smaller just noticeable differences (JNDs) were obtained, the findings for pitch ranking proved to have little clinical impact. The aim of this study was to explore whether modifications to MP3000 by including sequential virtual channel stimulation would lead to further improvements in hearing, particularly for speech recognition in background noise and in competing-talker conditions; to compare results for pitch perception and melody recognition; and to informally collect subjective impressions on strategy preference. Nine experienced cochlear implant subjects were recruited for the prospective study. Two variants of the experimental strategy were compared to MP3000. The study design was a single-blinded ABCCBA cross-over trial paradigm with 3 weeks of take-home experience for each user condition. Comparing the results of pitch ranking, a significantly reduced JND was identified. No significant effect of coding strategy on speech understanding in noise or in competing-talker materials was found. Melody recognition skills were the same under all user conditions.
Energy Technology Data Exchange (ETDEWEB)
Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)
2012-08-15
This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 {sup 125}I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions, together with a published {sup 125}I spectrum, were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGyh{sup -1} U{sup -1} ({+-}1.73%) and 0.965 cGyh{sup -1} U{sup -1} ({+-}1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within {+-}4%) than those of MCNP4c2 and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
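The TG-43 formalism combines the quantities reported above in a single dose-rate equation; a point-source sketch (the full protocol uses the line-source geometry function and tabulated g(r) and F(r,θ) values, which are supplied here as plain arguments):

```python
def tg43_dose_rate(S_K, Lambda, r, g_r, F, r0=1.0):
    """AAPM TG-43 dose rate in the point-source approximation:
    D(r) = S_K * Lambda * [G(r)/G(r0)] * g(r) * F,
    with geometry function G(r) = 1/r**2 and reference distance r0 = 1 cm.
    S_K: air-kerma strength [U], Lambda: dose rate constant [cGy/(h*U)]."""
    G = 1.0 / r ** 2
    G0 = 1.0 / r0 ** 2
    return S_K * Lambda * (G / G0) * g_r * F
```

At the reference point (r = 1 cm, g = F = 1) the dose rate per unit air-kerma strength is just the dose rate constant, e.g. the 0.965 cGy/(h U) value derived with MCNP5.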
DEFF Research Database (Denmark)
Berger, Michael Stübert; Soler, José; Yu, Hao
2013-01-01
The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and hardware/software co-simulation tool (TRIAL) and a performance optimisation and customisable source-code generation tool (TUNE). The concept is depicted in automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis...
Strained Si channel NMOSFETs using a stress field with Si1-yCy source and drain stressors
International Nuclear Information System (INIS)
Chang, S.T.; Tasi, H.-S.; Kung, C.Y.
2006-01-01
The strain field in the silicon channel of a metal-oxide-semiconductor transistor with silicon-carbon alloy source and drain stressors was evaluated using the commercial process simulator FLOOPS-ISE™. The physical origin of the strain components in the transistor channel region is explained. The magnitude and distribution of the strain components, and their dependence on device design parameters such as the spacing LG between the silicon-carbon alloy stressors, the carbon mole fraction in the stressors and the stressor depth, were investigated. Reducing the stressor spacing LG, or increasing the carbon mole fraction in the stressors and the stressor depth, increases the magnitude of the vertical compressive stress and the lateral tensile stress in the portion of the N channel region where the inversion charge resides. This is beneficial for improving the electron mobility in n-channel metal-oxide-semiconductor transistors. A simple guiding principle is provided for an optimum combination of the above-mentioned device design parameters in terms of mobility enhancement, drain current enhancement and the tradeoff against junction leakage current degradation.
Study of the source term of radiation of the CDTN GE-PET trace 8 cyclotron with the MCNPX code
Energy Technology Data Exchange (ETDEWEB)
Benavente C, J. A.; Lacerda, M. A. S.; Fonseca, T. C. F.; Da Silva, T. A. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Vega C, H. R., E-mail: jhonnybenavente@gmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)
2015-10-15
Full text: Knowledge of the neutron spectrum in a PET cyclotron is important for optimizing the radiation protection of workers and members of the public. The main objective of this work is to study the radiation source term of the GE PETtrace 8 cyclotron of the Development Center of Nuclear Technology (CDTN/CNEN) using computer simulation by the Monte Carlo method. The MCNPX version 2.7 code was used to calculate the flux of neutrons produced by the interaction of the primary proton beam with the target body and other cyclotron components during 18F production. The source term and the corresponding radiation field were estimated for the bombardment of a H{sub 2}{sup 18}O target with protons of 75 μA current and 16.5 MeV energy. The simulated fluxes were compared with those reported by the accelerator manufacturer (GE Healthcare). Results showed that the fluxes estimated with the MCNPX code were about 70% lower than those reported by the manufacturer. The mean energies of the neutrons were also different from those reported by GE Healthcare. It is recommended to investigate other cross-section data and to use the physical models of the code itself for a complete characterization of the radiation source term. (Author)
Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos
Source code authorship analysis is the field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity, usually based on other undisputed program samples from the same author. There are several cases where the application of such a method can be of major benefit, such as tracing the source of code left in a system after a cyber attack, authorship disputes, and proof of authorship in court. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language at no additional cost. The accuracy rates presented are much better than the best previously reported results for the same data sets.
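The byte-level n-gram approach can be sketched directly: build a simplified profile (the set of most frequent n-grams) per author and score candidates by profile intersection. This is a sketch of the idea; the parameter values (n = 3, profile size 2000) are illustrative, not the paper's tuned settings:

```python
from collections import Counter

def ngram_profile(source: bytes, n: int = 3, top: int = 2000):
    """Simplified byte-level profile: the `top` most frequent n-grams."""
    grams = Counter(source[i:i + n] for i in range(len(source) - n + 1))
    return {g for g, _ in grams.most_common(top)}

def spi(profile_a, profile_b):
    """Simplified Profile Intersection: number of shared n-grams."""
    return len(profile_a & profile_b)
```

Attribution then assigns a disputed program to the known author whose profile yields the largest intersection; being purely byte-based, the scheme needs no per-language parsing.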
Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim
2003-01-01
With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking, and thus combining, information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnosis-related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors assume that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
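A hierarchical classification lends itself naturally to nested XML; a small ElementTree sketch of a chapter/block/category fragment (element and attribute names here are hypothetical illustrations, not the CEN/TC 251 schema):

```python
import xml.etree.ElementTree as ET

def build_icd_fragment():
    """Build a hypothetical XML fragment mirroring the ICD-10 hierarchy:
    chapter -> block -> category, each carrying its code and a rubric."""
    root = ET.Element("classification", name="ICD-10", version="2003")
    chapter = ET.SubElement(root, "class", code="IX", kind="chapter")
    block = ET.SubElement(chapter, "class", code="I10-I15", kind="block")
    category = ET.SubElement(block, "class", code="I10", kind="category")
    rubric = ET.SubElement(category, "rubric", lang="en")
    rubric.text = "Essential (primary) hypertension"
    return ET.tostring(root, encoding="unicode")
```

Because the hierarchy is explicit in the element nesting, standard XML tooling (XPath queries, XSLT, topic maps) can traverse and link the classification without any custom parser.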
Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX
Mochalskyy, S; Minea, T; Lifschitz, AF; Schmitzer, C; Midttun, O; Steyaert, D
2013-01-01
At CERN, a high performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm·mrad N-RMS, which is technically and scientifically very challenging. The optimization of the NI source requires a deep understanding of the underlying physics concerning the production and extraction of the negative ions. The extraction mechanism from the negative ion source is complex, involving a magnetic filter to cool down the electron temperature. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using the Particle-in-Cell Monte Carlo Collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between the plasma, the source walls, and the beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up under construction at CERN. This contrib...
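At the heart of any electrostatic PIC code is the particle push; a one-dimensional leapfrog sketch (ONIX itself is 3D and additionally includes the magnetic filter field and MCC collisions, none of which appear in this toy step):

```python
def leapfrog_push(x, v, E, q_over_m, dt):
    """One electrostatic leapfrog step for a PIC macroparticle (1D sketch):
    update the velocity from the local field, then advance the position.
    x: position [m], v: velocity [m/s], E: local electric field [V/m],
    q_over_m: charge-to-mass ratio [C/kg], dt: time step [s]."""
    v_new = v + q_over_m * E * dt
    x_new = x + v_new * dt
    return x_new, v_new
```

With zero field the particle drifts uniformly, the basic consistency check for the integrator.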
International Nuclear Information System (INIS)
Chaudri, Khurrum Saleem; Su Yali; Chen Ronghua; Tian Wenxi; Su Guanghui; Qiu Suizheng
2012-01-01
Highlights: ► A tool is developed for coupled neutronics/thermal-hydraulic analysis of SCWR. ► For thermal-hydraulic analysis, a sub-channel code SACoS is developed and verified. ► Coupled analyses agree quite well with the reference calculations. ► Different choices of important parameters make a huge difference in design calculations. - Abstract: The Supercritical Water Reactor (SCWR) is one of the promising reactors from the list of fourth-generation nuclear reactors. High thermal efficiency and a low cost of electricity make it an attractive option in an era of growing energy demand. An almost sevenfold density variation of the coolant/moderator along the active height does not allow the use of the constant-density assumption in design calculations, as was done for previous generations of reactors. The advancement of computer technology gives us the superior option of performing coupled analysis. Thermal-hydraulic calculations of supercritical water systems present extra challenges, as few computational tools are available to perform this job. This paper introduces a new sub-channel code called the Sub-channel Analysis Code of SCWR (SACoS) and its application in coupled analyses of the High Performance Light Water Reactor (HPLWR). SACoS can compute the basic thermal-hydraulic parameters needed for design studies of a supercritical water reactor. Multiple heat transfer and pressure drop correlations are incorporated in the code according to the flow regime. It has the additional capability of calculating the thermal-hydraulic parameters of the moderator flowing in the water box and between fuel assemblies under co-current or counter-current flow conditions. Using MCNP4c and SACoS, a coupled system has been developed for SCWR design analyses. The coupled system was verified by performing and comparing HPLWR calculations. The results were found to be in very good agreement. A significant difference between the results was seen when the Doppler feedback effect was included.
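Such neutronics/thermal-hydraulics coupling is typically a fixed-point (Picard) iteration between the two solvers; a scalar sketch in which the two callables stand in for codes like MCNP4c and SACoS (converging on a single power value is an illustrative simplification of the field-to-field exchange):

```python
def coupled_iteration(power0, neutronics, thermal_hydraulics,
                      tol=1e-6, max_iter=100):
    """Picard (fixed-point) iteration between a neutronics solver
    (power as a function of coolant density) and a thermal-hydraulics
    solver (density as a function of power)."""
    power = power0
    for _ in range(max_iter):
        density = thermal_hydraulics(power)   # TH update from current power
        new_power = neutronics(density)       # neutronics update from density
        if abs(new_power - power) < tol:
            return new_power
        power = new_power
    return power
```

With the toy feedbacks below the iteration contracts to the unique fixed point of the composed map.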
Petersen, M.D.; Toppozada, Tousson R.; Cao, T.; Cramer, C.H.; Reichle, M.S.; Bryant, W.A.
2000-01-01
The fault sources in the Project 97 probabilistic seismic hazard maps for the state of California were used to construct maps for defining near-source seismic coefficients, Na and Nv, incorporated in the 1997 Uniform Building Code (ICBO 1997). The near-source factors are based on the distance from a known active fault that is classified as either Type A or Type B. To determine the near-source factor, four pieces of geologic information are required: (1) recognizing a fault and determining whether or not the fault has been active during the Holocene, (2) identifying the location of the fault at or beneath the ground surface, (3) estimating the slip rate of the fault, and (4) estimating the maximum earthquake magnitude for each fault segment. This paper describes the information used to produce the fault classifications and distances.
International Nuclear Information System (INIS)
Hattori, Yasuo; Suto, Hitoshi; Eguchi, Yuzuru; Sano, Tadashi; Shirai, Koji; Ishihara, Shuji
2011-01-01
Spatial and temporal characteristics of turbulence structures in the close vicinity of a heat source, a horizontal upward-facing round plate heated to high temperature, are examined using well-resolved large-eddy simulations. Verification is carried out through comparison with experiments: the predicted statistics, including the PDF of temperature fluctuations, agree well with measurements, indicating that the present simulations are capable of appropriately reproducing turbulence structures near the heat source. The reproduced three-dimensional thermal and fluid fields in the close vicinity of the heat source reveal the development of coherent structures along the surface: stationary, streaky flow patterns appear near the edge, and these patterns randomly shift to cell-like patterns with incursion into the center region, resulting in thermal-plume meandering. Both patterns have very thin structures, but the depth of the streaky structures is considerably smaller than that of the cell-like patterns; this discrepancy causes the layered structures. These structures are the source of peculiar turbulence characteristics, which are quite difficult to predict with RANS-type turbulence models. The understanding of such structures obtained in the present study should help improve the turbulence models used in nuclear engineering. (author)
Limiting precision in differential equation solvers. II Sources of trouble and starting a code
International Nuclear Information System (INIS)
Shampine, L.F.
1978-01-01
The reasons a class of codes for solving ordinary differential equations might want to use an extremely small step size are investigated. For this class, the likelihood of precision difficulties is evaluated and remedies are examined. The investigation suggests a way of automatically selecting an initial step size that should be reliably on scale.
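The idea of an automatically selected, on-scale starting step can be sketched with a common heuristic (a generic illustration, not Shampine's published algorithm; the 1% scale factor and the local-error test are assumptions):

```python
import numpy as np

def initial_step(f, t0, y0, tol=1e-6, order=4):
    """Heuristic starting step size for an ODE solver: choose h so the
    first Euler step moves only a small fraction of the solution scale,
    then shrink it until the crude local-error estimate is below tol."""
    f0 = np.asarray(f(t0, y0), dtype=float)
    scale = np.linalg.norm(y0) + 1e-12      # guard against y0 == 0
    rate = np.linalg.norm(f0) + 1e-12       # guard against f == 0
    h = 0.01 * scale / rate                 # step that moves ~1% of scale
    # shrink while the predicted local error ~ h**(order+1) * rate exceeds tol
    while h ** (order + 1) * rate > tol and h > 1e-14:
        h *= 0.5
    return h

# Example: y' = -50 y, a moderately stiff scalar problem
h0 = initial_step(lambda t, y: -50.0 * y, 0.0, np.array([1.0]))
```

The point of the scale/rate ratio is exactly the "on scale" requirement in the abstract: a step chosen this way cannot be orders of magnitude too large for the initial transient.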
Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code
Taherkhani, Ahmad; Malmi, Lauri
2013-01-01
In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concepts of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…
SPIDERMAN: an open-source code to model phase curves and secondary eclipses
Louden, Tom; Kreidberg, Laura
2018-03-01
We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly for the area of the sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimised to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the dataset. As a test case we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two-dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model we find that the best-fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.
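The geometric kernel of such an occultation calculation is the exact area of intersection of two discs. A minimal sketch using the textbook lens-area formula (this is not SPIDERMAN's actual implementation, which partitions the planet disc into brightness sections):

```python
import math

def overlap_area(r1, r2, d):
    """Exact area of intersection of two discs with radii r1, r2 and
    centre separation d -- the textbook lens-area formula behind
    occultation geometry."""
    if d >= r1 + r2:              # discs do not touch
        return 0.0
    if d <= abs(r1 - r2):         # smaller disc fully inside the larger
        return math.pi * min(r1, r2) ** 2
    # lens-shaped partial overlap: two circular segments minus a kite
    a1 = r1 ** 2 * math.acos((d ** 2 + r1 ** 2 - r2 ** 2) / (2 * d * r1))
    a2 = r2 ** 2 * math.acos((d ** 2 + r2 ** 2 - r1 ** 2) / (2 * d * r2))
    kite = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                           * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - kite

# Secondary eclipse: a planet of radius 0.1 stellar radii directly behind
# the star has its entire disc (area pi * 0.1**2) occulted
occulted = overlap_area(1.0, 0.1, 0.0)
```

Sweeping `d` along the orbit then traces the eclipse light curve for a uniform disc; a 2-D brightness map replaces the uniform disc with per-section weights.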
R and D Toward a Compact High-Brilliance X-Ray Source Based on Channeling Radiation
International Nuclear Information System (INIS)
Piot, P.; Brau, C.A.; Gabella, W.E.; Choi, B.K.; Jarvis, J.D.; Mendenhall, M.H.; Lewellen, J.W.; Mihalcea, D.
2012-01-01
X-rays are valuable to a large number of fields, including science, medicine, and security. Yet the availability of compact, high-spectral-brilliance X-ray sources is limited. A technique to produce X-rays with spectral brilliance B ∼ 10¹² photons·(mm·mrad)⁻²·(0.1% BW)⁻¹·s⁻¹ is discussed. The method is based on the generation and acceleration of low-emittance field-emitted electron bunches. The bunches are then focused onto a diamond crystal, thereby producing channeling radiation. In this paper, after presenting the overarching concept, we discuss the generation, acceleration, and transport of the low-emittance bunches with parameters consistent with the production of high-brilliance X-rays through channeling radiation. We especially consider the example of the Advanced Superconducting Test Accelerator (ASTA), currently under construction at Fermilab, where a proof-of-principle experiment is in preparation.
Yeh, Pen-Shu (Inventor)
1998-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
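The double-difference idea can be sketched as follows (a minimal illustration of the abstract's description, not the patented device; the example bands are made-up numbers). The round trip shows the transform is lossless, so entropy coding the small double-difference values loses nothing:

```python
import numpy as np

def double_difference(a, b):
    """Cross-delta between two correlated data sets, then an
    adjacent-delta along the result: the 'double-difference' set."""
    cross = np.asarray(b, dtype=np.int64) - np.asarray(a, dtype=np.int64)
    # adjacent-delta: keep the first element, then neighbour differences
    return np.concatenate(([cross[0]], np.diff(cross)))

def undo_double_difference(a, dd):
    """Post-decoding: rebuild the second data set from the first set
    and the double-difference data."""
    return np.asarray(a, dtype=np.int64) + np.cumsum(dd)

band1 = np.array([10, 12, 15, 19])   # e.g. one spectral band
band2 = np.array([11, 14, 18, 23])   # a correlated adjacent band
dd = double_difference(band1, band2)
restored = undo_double_difference(band1, dd)
```

Because the two bands are correlated, `dd` has far lower variance than either band, which is what enhances the downstream compression efficiency.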
Pre-Test Analysis of the MEGAPIE Spallation Source Target Cooling Loop Using the TRAC/AAA Code
International Nuclear Information System (INIS)
Bubelis, Evaldas; Coddington, Paul; Leung, Waihung
2006-01-01
A pilot project is being undertaken at the Paul Scherrer Institute in Switzerland to test the feasibility of installing a Lead-Bismuth Eutectic (LBE) spallation target in the SINQ facility. Efforts are coordinated under the MEGAPIE project, the main objectives of which are to design, build, operate and decommission a 1 MW spallation neutron source. The technology and experience of building and operating a high power spallation target are of general interest in the design of an Accelerator Driven System (ADS) and in this context MEGAPIE is one of the key experiments. The target cooling is one of the important aspects of the target system design that needs to be studied in detail. Calculations were performed previously using the RELAP5/Mod 3.2.2 and ATHLET codes, but in order to verify the previous code results and to provide another capability to model LBE systems, a similar study of the MEGAPIE target cooling system has been conducted with the TRAC/AAA code. In this paper a comparison is presented for the steady-state results obtained using the above codes. Analysis of transients, such as unregulated cooling of the target, loss of heat sink, the main electro-magnetic pump trip of the LBE loop and unprotected proton beam trip, were studied with TRAC/AAA and compared to those obtained earlier using RELAP5/Mod 3.2.2. This work extends the existing validation data-base of TRAC/AAA to heavy liquid metal systems and comprises the first part of the TRAC/AAA code validation study for LBE systems based on data from the MEGAPIE test facility and corresponding inter-code comparisons. (authors)
International Nuclear Information System (INIS)
McGill, B.L.; Roussin, R.W.; Trubey, D.K.; Maskewitz, B.F.
1980-01-01
The Radiation Shielding Information Center (RSIC), established in 1962 to collect, package, analyze, and disseminate information, computer codes, and data in the area of radiation transport related to fission, is now being utilized to support fusion neutronics technology. The major activities include: (1) answering technical inquiries on radiation transport problems, (2) collecting, packaging, testing, and disseminating computing technology and data libraries, and (3) reviewing literature and operating a computer-based information retrieval system containing material pertinent to radiation transport analysis. The computer codes emphasize methods for solving the Boltzmann equation such as the discrete ordinates and Monte Carlo techniques, both of which are widely used in fusion neutronics. The data packages include multigroup coupled neutron-gamma-ray cross sections and kerma coefficients, other nuclear data, and radiation transport benchmark problem results
kspectrum: an open-source code for high-resolution molecular absorption spectra production
International Nuclear Information System (INIS)
Eymet, V.; Coustet, C.; Piaud, B.
2016-01-01
We present kspectrum, a scientific code that produces high-resolution synthetic absorption spectra from public molecular transition parameter databases. The code was originally required by the atmospheric and astrophysics communities, and its evolution is now driven by new scientific projects among the user community. Since it was designed without any optimization specific to a particular application field, its use can also be extended to other domains. kspectrum produces spectral data that can subsequently be used either for high-resolution radiative transfer simulations, or for producing statistical spectral model parameters using additional tools. This is an open project that aims to provide an up-to-date tool that takes advantage of modern computational hardware and recent parallelization libraries. It is currently provided by Méso-Star (http://www.meso-star.com) under the CeCILL license, and benefits from regular updates and improvements. (paper)
International Nuclear Information System (INIS)
Khattab, K.; Omar, H.; Ghazi, N.
2009-01-01
A 3-D (R, θ, Z) neutronic model of the Miniature Neutron Source Reactor (MNSR) was developed earlier to conduct the reactor neutronic analysis. The group constants for all the reactor components were generated using the WIMSD4 code. The reactor excess reactivity and the four-group neutron flux distributions were calculated using the CITATION code. This model is used in this paper to calculate the pointwise four-energy-group neutron flux distributions in the MNSR versus the radius, angle, and reactor axial directions. Good agreement is noticed between the measured and calculated thermal neutron flux in the inner and outer irradiation sites, with relative differences of less than 7% and 5%, respectively. (author)
van den Boer, Yvon
2014-01-01
In the Netherlands, over a million businesses regularly have to deal with complex matters imposed by the government (e.g., managing tax problems). To solve their problems, businesses have various potential sources to consult (e.g., Tax Office, advisor, friends/family). The myriad sources can be
Directory of Open Access Journals (Sweden)
Sedigheh Sina
2011-06-01
Full Text Available Introduction: Brachytherapy is a type of radiotherapy in which radioactive sources are used in proximity to tumors, normally for treatment of malignancies in the head, prostate, and cervix. Materials and Methods: The Cs-137 Selectron source is a low-dose-rate (LDR) brachytherapy source used in a remote afterloading system for treatment of different cancers. This system uses active and inactive spherical sources of 2.5 mm diameter, which can be arranged in different configurations inside the applicator to obtain different dose distributions. In this study, the dose distribution at different distances from the source was first obtained around a single pellet inside the applicator in a water phantom using the MCNP4C Monte Carlo code. The simulations were then repeated for six active pellets in the applicator and for six point sources. Results: The anisotropy of the dose distribution due to the presence of the applicator was obtained by dividing the dose at each distance and angle by the dose at the same distance and an angle of 90 degrees. According to the results, the doses decreased towards the applicator tips. For example, for points at distances of 5 and 7 cm from the source and an angle of 165 degrees, the discrepancies reached 5.8% and 5.1%, respectively. On increasing the number of pellets to six, these values reached 30% at an angle of 5 degrees. Discussion and Conclusion: The results indicate that the presence of the applicator causes a significant dose decrease at the tip of the applicator compared with the dose in the transverse plane. However, treatment planning systems assume an isotropic dose distribution around the source, and this causes significant errors in treatment planning, which are not negligible, especially for a large number of sources inside the applicator.
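The anisotropy measure used in the study is simply the dose at each angle normalized by the dose at 90 degrees at the same distance. A minimal sketch with illustrative numbers (made-up values, not the paper's MCNP4C output):

```python
import numpy as np

# Illustrative relative doses D(r = 5 cm, theta) around a pellet inside
# the applicator; 90 degrees is the transverse plane.
angles = np.array([5.0, 45.0, 90.0, 135.0, 165.0])   # degrees
dose = np.array([0.70, 0.95, 1.00, 0.96, 0.94])      # relative dose

# Anisotropy as defined in the study: dose at (r, theta) divided by
# the dose at the same r and theta = 90 degrees.
anisotropy = dose / dose[angles == 90.0]
tip_drop_percent = 100.0 * (1.0 - anisotropy[angles == 165.0][0])
```

A treatment planning system that assumes isotropy would use 1.0 everywhere; the deviation of `anisotropy` from 1.0 near the tips is exactly the error the authors quantify.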
Developing open-source codes for electromagnetic geophysics using industry support
Key, K.
2017-12-01
Funding for open-source software development in academia often takes the form of grants and fellowships awarded by government bodies and foundations where there is no conflict-of-interest between the funding entity and the free dissemination of the open-source software products. Conversely, funding for open-source projects in the geophysics industry presents challenges to conventional business models where proprietary licensing offers value that is not present in open-source software. Such proprietary constraints make it easier to convince companies to fund academic software development under exclusive software distribution agreements. A major challenge for obtaining commercial funding for open-source projects is to offer a value proposition that overcomes the criticism that such funding is a give-away to the competition. This work draws upon a decade of experience developing open-source electromagnetic geophysics software for the oil, gas and minerals exploration industry, and examines various approaches that have been effective for sustaining industry sponsorship.
Calculation of the effective dose from natural radioactivity sources in soil using MCNP code
International Nuclear Information System (INIS)
Krstic, D.; Nikezic, D.
2008-01-01
Full text: The effective dose delivered by photons emitted from natural radioactivity in soil was calculated in this report. Calculations were done for the most common natural radionuclides in soil: the ²³⁸U and ²³²Th series and ⁴⁰K. An ORNL age-dependent phantom and the Monte Carlo transport code MCNP-4B were employed to calculate the energy deposited in all organs of the phantom. The effective dose was calculated according to ICRP74 recommendations. Conversion coefficients of effective dose per air kerma were determined. The results obtained here were compared with those of other authors
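The effective dose in this formalism is the tissue-weighted sum of organ equivalent doses, E = Σ_T w_T H_T. A minimal sketch (the organ doses are illustrative placeholders, not the paper's MCNP-4B results; the weights are a subset of the ICRP 60 tissue weighting factors underlying ICRP74):

```python
# Tissue weighting factors (subset, ICRP 60 values).
tissue_weight = {"gonads": 0.20, "lung": 0.12, "stomach": 0.12,
                 "liver": 0.05, "thyroid": 0.05}
# Illustrative organ equivalent doses [Sv] from the phantom calculation.
organ_dose = {"gonads": 1.0e-7, "lung": 1.2e-7, "stomach": 1.1e-7,
              "liver": 0.9e-7, "thyroid": 1.0e-7}

# Effective dose E = sum over tissues of w_T * H_T
effective_dose = sum(tissue_weight[t] * organ_dose[t] for t in tissue_weight)
```

Dividing such an effective dose by the simulated free-in-air kerma gives the conversion coefficients the abstract mentions.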
In-vessel source term analysis code TRACER version 2.3. User's manual
International Nuclear Information System (INIS)
Toyohara, Daisuke; Ohno, Shuji; Hamada, Hirotsugu; Miyahara, Shinya
2005-01-01
A computer code, TRACER (Transport Phenomena of Radionuclides for Accident Consequence Evaluation of Reactor) version 2.3, has been developed to evaluate the species and quantities of fission products (FPs) released into the cover gas during a fuel pin failure accident in an LMFBR. TRACER version 2.3 includes the new or modified models shown below: a) Booth model: a new model for FP release from fuel. b) Modified model for FP transfer from fuel to bubbles or sodium coolant. c) Modified model for bubble dynamics in coolant. The computational models, input data, and output data of TRACER version 2.3 are described in this user's manual. (author)
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles of both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
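The structural property being designed away, a girth-4 cycle, exists in a binary parity-check matrix exactly when two rows share ones in two or more columns. A minimal check (an illustration of the criterion only; the paper's joint design additionally distinguishes cycle types across the source and relay sub-matrices):

```python
import numpy as np
from itertools import combinations

def has_girth4(H):
    """True iff the binary parity-check matrix H contains a length-4
    cycle, i.e. some pair of rows shares ones in >= 2 columns."""
    H = np.asarray(H)
    return any(int(np.sum(H[i] & H[j])) >= 2
               for i, j in combinations(range(H.shape[0]), 2))

H_bad = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 1]])   # rows share columns 0 and 1: 4-cycle
H_ok = np.array([[1, 1, 0, 0],
                 [1, 0, 1, 0]])    # any two rows share at most one column
```

Running the check on the equivalent parity-check matrix of the concatenated source/relay code is how one verifies that a joint construction is 4-cycle free.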
Scheduling for dual-hop block-fading channels with two source-user pairs sharing one relay
Zafar, Ammar
2013-09-01
In this paper, we maximize the achievable rate region of a dual-hop network with two sources serving two users independently through a single shared relay. We formulate the problem as maximizing the sum of the weighted long term average throughputs of the two users under stability constraints on the long term throughputs of the source-user pairs. In order to solve the problem, we propose a joint user-and-hop scheduling scheme, which schedules the first or second hop opportunistically based on instantaneous channel state information, in order to exploit multiuser diversity and multihop diversity gains. Numerical results show that the proposed joint scheduling scheme enhances the achievable rate region as compared to a scheme that employs multi-user scheduling on the second-hop alone. Copyright © 2013 by the Institute of Electrical and Electronic Engineers, Inc.
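The joint user-and-hop idea can be sketched as a toy simulation: in each fading block, serve whichever hop/user option has the largest weighted instantaneous rate, with the relay forwarding only data it has already buffered (an illustration of the scheduling principle, not the paper's exact optimization; the exponential rate model and equal weights are assumptions):

```python
import random

def joint_schedule(slots=10_000, weights=(0.5, 0.5), seed=1):
    """Opportunistic joint user-and-hop scheduler over i.i.d.
    block-fading rates; returns bits delivered to each user."""
    rng = random.Random(seed)
    buffered = [0.0, 0.0]     # bits queued at the relay for each user
    delivered = [0.0, 0.0]    # bits delivered to each user
    for _ in range(slots):
        r1 = [rng.expovariate(1.0) for _ in range(2)]  # source -> relay
        r2 = [rng.expovariate(1.0) for _ in range(2)]  # relay -> user
        options = []
        for i in range(2):
            options.append((weights[i] * r1[i], "hop1", i, r1[i]))
            if buffered[i] > 0:                 # stability: only buffered data
                rate = min(r2[i], buffered[i])
                options.append((weights[i] * rate, "hop2", i, rate))
        _, hop, i, rate = max(options)          # best weighted option wins
        if hop == "hop1":
            buffered[i] += rate
        else:
            buffered[i] -= rate
            delivered[i] += rate
    return delivered

throughputs = joint_schedule()    # long-run delivered bits per user
```

Scheduling across both hops lets the relay transmit to a user precisely when that user's second-hop channel is strong, which is the source of the multihop diversity gain over second-hop-only scheduling.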
Low power consumption O-band VCSEL sources for upstream channels in PON systems
DEFF Research Database (Denmark)
Vegas Olmos, Juan José; Rodes Lopez, Roberto; Tafur Monroy, Idelfonso
2012-01-01
This paper presents an experimental validation of a low-power optical network unit employing vertical-cavity surface-emitting lasers as upstream sources for passive optical networks with an increased power budget, enabling even larger splitting ratios.
International Nuclear Information System (INIS)
Kalmykov, S Y; Shadwick, B A; Davoine, X; Ghebregziabher, I; Lehe, R; Lifschitz, A F
2016-01-01
Propagating a relativistically intense, negatively chirped laser pulse (bandwidth >150 nm) in a plasma channel makes it possible to generate background-free, comb-like electron beams—sequences of synchronized bunches with a low phase-space volume and controlled energy spacing. The tail of the pulse, confined in the accelerator cavity (an electron density ‘bubble’), experiences periodic focusing, while the head, which is the most intense portion of the pulse, steadily self-guides. Oscillations of the cavity size cause periodic injection of electrons from the ambient plasma, creating an electron energy comb with the number of components, their mean energy, and their energy spacing dependent on the channel radius and pulse length. These customizable electron beams enable the design of a tunable, all-optical source of pulsed, polychromatic γ-rays using the mechanism of inverse Thomson scattering, with up to ∼10⁻⁵ conversion efficiency from the drive pulse in the electron accelerator to the γ-ray beam. Such a source may radiate ∼10⁷ quasi-monochromatic photons per shot into a microsteradian-scale cone. The photon energy is distributed among several distinct bands, each having sub-30% energy spread, with a highest energy of 12.5 MeV. (paper)
PyFLOWGO: An open-source platform for simulation of channelized lava thermo-rheological properties
Chevrel, Magdalena Oryaëlle; Labroquère, Jérémie; Harris, Andrew J. L.; Rowland, Scott K.
2018-02-01
Lava flow advance can be modeled by tracking the evolution of the thermo-rheological properties of a control volume of lava as it cools and crystallizes. An example of such a model was conceived by Harris and Rowland (2001), who developed a 1-D model, FLOWGO, in which the velocity of a control volume flowing down a channel depends on rheological properties computed along the thermal path estimated via a heat-balance box model. We provide here an updated version of FLOWGO written in Python, an open-source, modern, and flexible language. Our software, named PyFLOWGO, allows selection of the heat fluxes and rheological models of the user's choice to simulate the thermo-rheological evolution of the lava control volume. We describe its architecture, which offers more flexibility while reducing the risk of error when changing models, in comparison to the previous FLOWGO version. Three cases are tested using actual data from channel-fed lava flow systems, and results are discussed in terms of model validation and convergence. PyFLOWGO is open-source and packaged in a Python library to be imported and reused in any Python program (https://github.com/pyflowgo/pyflowgo).
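The FLOWGO-style marching loop can be sketched as follows (a heavily simplified illustration of the 1-D control-volume idea; the viscosity and crystallization laws and all constants are made up, and this is not PyFLOWGO's actual API or model set): the control volume loses heat radiatively, crystallizes, its viscosity rises, and its Jeffreys channel velocity drops until the flow stalls.

```python
import math

SIGMA = 5.67e-8   # Stefan-Boltzmann constant [W m-2 K-4]

def run_flowgo(T=1400.0, depth=2.0, width=5.0, slope=math.radians(5),
               rho=2600.0, cp=1000.0, eps=0.95, dx=10.0):
    """March a lava control volume down-channel until it stalls;
    returns the distance reached [m]."""
    g = 9.81
    x, phi = 0.0, 0.0                 # distance [m], crystal fraction
    for _ in range(100_000):          # hard cap on marching steps
        # illustrative viscosity: rises with cooling and crystal content
        eta = 100.0 * math.exp(0.04 * (1400.0 - T)) / (1.0 - phi) ** 2.5
        v = rho * g * math.sin(slope) * depth ** 2 / (3.0 * eta)  # Jeffreys
        if v < 0.01:                  # control volume has stalled
            break
        q_rad = eps * SIGMA * T ** 4 * width * dx   # radiative loss [W]
        mass_flux = rho * depth * width * v         # [kg/s]
        T -= q_rad / (mass_flux * cp)               # heat-balance cooling
        phi = min(0.5, phi + 0.001)                 # crude crystallisation
        x += dx
    return x

flow_length = run_flowgo()   # stall distance for the default channel [m]
```

PyFLOWGO's design makes each heat flux and rheological law a swappable component; in this sketch they are hard-coded inline purely to show the feedback loop between cooling, rheology, and velocity.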
International Nuclear Information System (INIS)
Marino, Edgardo J.L.
1999-01-01
Using the input data language of the ICARE2 V2 Mod.3 code, the fuel element and coolant channel assembly of the CNA I type was described. This input deck was used to analyze the system behavior and determine the degradation produced during a hypothetical accidental transient at CNA I. The boundary conditions were determined through a previous calculation with the RELAP5/MOD 3.2 code. The results showed characteristic degradation phenomena. The temperature of the bundle components increases rapidly after 6.11 h in the first case and 5.28 h in the second case, due to the energy released by cladding oxidation. This was correlated with the instantaneous hydrogen production and energy contribution. The cumulative hydrogen production was estimated as 0.15 kg in the first case and ∼5 times greater in the second case. Fission product release from the gap due to cladding rupture took place from 6.25 h in the first case and 5.65 h in the second. Relocation started after 6.81 h in the first case and 5.68 h in the second, because the cladding dislocation condition was reached. UO₂ dissolution by molten Zircaloy was observed at different levels in the calculation domain. (author)
Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.
2016-10-01
Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and putting some - if not most - legacy codes far beyond the level of performance expected on the new and future massively parallel supercomputers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists, and applied to a wide range of physics studies: from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also benefits from additional physics modules allowing it to deal with binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project and its HPC capabilities, and illustrates some of the physics problems tackled with SMILEI.
Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.
2017-11-01
A droplet impact on a deep pool can induce macro-scale or micro-scale effects such as a crown splash, a high-speed jet, or the formation of secondary droplets or thin liquid films. The outcome depends on the diameter and velocity of the droplet, the liquid properties, the effects of external forces, and other factors that a set of dimensionless criteria can account for. In the present research, we considered the droplet and the pool to consist of the same viscous incompressible liquid. We took surface tension into account but neglected gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations. We review the possibility of using these codes for simulation of processes in free-surface flows that may take place after a droplet impact on the pool. Both codes simulated several modes of droplet impact. We estimated the effect of the liquid properties in terms of the Reynolds number and Weber number. Numerical simulation enabled us to find the boundaries between different modes of droplet impact on a deep pool and to plot the corresponding mode maps. The ratio of the liquid density to that of the surrounding gas induces several changes in the mode maps: increasing this density ratio suppresses the crown splash.
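The dimensionless criteria in question are the Reynolds and Weber numbers built from the droplet diameter and impact speed. A minimal sketch (the splash threshold used here is a rough illustrative cut, not the mode boundary computed in the paper):

```python
def impact_numbers(d, v, rho=998.0, mu=1.0e-3, sigma=0.0728):
    """Reynolds and Weber numbers for a droplet of diameter d [m]
    hitting a pool at speed v [m/s]; defaults are water at ~20 C."""
    reynolds = rho * v * d / mu          # inertia vs. viscosity
    weber = rho * v ** 2 * d / sigma     # inertia vs. surface tension
    return reynolds, weber

re_num, we_num = impact_numbers(d=2e-3, v=3.0)   # 2 mm droplet at 3 m/s
# A rough illustrative cut, not the paper's computed mode boundary:
regime = "crown splash" if we_num > 1000 else "coalescence/jetting"
```

Plotting simulated outcomes on the (Re, We) plane is exactly how the mode maps described in the abstract are constructed.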
Bug-Fixing and Code-Writing: The Private Provision of Open Source Software
DEFF Research Database (Denmark)
Bitzer, Jürgen; Schröder, Philipp
2002-01-01
Open source software (OSS) is a public good. A self-interested individual would consider providing such software, if the benefits he gained from having it justified the cost of programming. Nevertheless each agent is tempted to free ride and wait for others to develop the software instead...
SETMDC: Preprocessor for CHECKR, FIZCON, INTER, etc. ENDF Utility source codes
International Nuclear Information System (INIS)
Dunford, Charles L.
2002-01-01
Description of program or function: SETMDC-6.13 is a utility program that converts the source decks of the following set of programs to different computers: CHECKR-6.13; FIZCON-6.13; GETMAT-6.13; INTER-6.13; LISTEF-6; PLOTEF-6; PSYCHE-6; STANEF-6.13
ON CODE REFACTORING OF THE DIALOG SUBSYSTEM OF CDSS PLATFORM FOR THE OPEN-SOURCE MIS OPENMRS
Directory of Open Access Journals (Sweden)
A. V. Semenets
2016-08-01
The developer tools and software API of the open-source MIS OpenMRS are reviewed. The results of refactoring the code of the dialog subsystem of the CDSS platform, implemented as a module for the open-source MIS OpenMRS, are presented. The information model of the database of the CDSS dialog subsystem was updated in accordance with MIS OpenMRS requirements. The Model-View-Controller (MVC) based approach to the CDSS dialog subsystem architecture was re-implemented in Java using the Spring and Hibernate frameworks. An MIS OpenMRS Encounter portlet form was developed as an extension for CDSS dialog subsystem integration. The administrative module of the CDSS platform was recreated. The data exchange formats and methods for interaction between the OpenMRS CDSS dialog subsystem module and the DecisionTree GAE service were re-implemented using AJAX via the jQuery library
The action of physical power sources on membranes containing ionic channels
International Nuclear Information System (INIS)
Qasimov, X.M.; Qurbanov, O.Q.
2002-01-01
Biological membranes are a primary target of different kinds of irradiation, such as ionizing and ultraviolet (UV) radiation, which result in damage to the membrane structure and full loss of its biological functions. It should be noted that the molecular mechanism of the action of radioactive and UV irradiation on the transport processes occurring in biological membranes, in particular on the native Na⁺, K⁺, and Ca⁺⁺ channels of muscle membranes, remains obscure. It is supposed that the function of transport systems can change under the action of ionizing radiation through a direct action on the lipid matrix of membranes, inducing peroxide oxidation of lipids in them. Bilayer lipid membranes incorporating modifying channel-forming compounds of known chemical structure were used as a test system. The conductance of the lipid membranes can increase or decrease depending on the dose of irradiation acting on the membranes with the modifying agent. It was shown that, in the presence of the cation carrier valinomycin, the membrane conductance is inactivated under ionizing irradiation. The observed inactivation can be connected to chemical transformation by free radicals resulting from the radiolysis of water. After a fixed membrane conductance was reached, the membranes were irradiated with ionizing radiation and UV irradiation. The action of ionizing radiation at 40 kV for 1 min on membranes with an area of 0.5 cm², containing cation-selective ion channels formed in lipid bilayers in the presence of levorin, was studied. The membranes were formed in a solution of 10 mM KCl at pH 7.0. The concentration of levorin in the solution was 0.5 μg/ml. Before the irradiation, the conductance of the membranes with the modifying agent was 10⁻³-10⁻² ohm⁻¹·cm⁻². To register the integral conductance of the membranes, a potential of 100 mV was applied across the membrane. It was discovered that under the
Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.
2018-05-01
A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each user, coupled with the decreasing probability of predicting the source of a qubit transmitted in the channel, offers an effective security mechanism against any form of channel attack on the OCDMA-based QKD network. Flexibility in design and ease of modifying the number of users are further advantages of the code in contrast to the Optical Orthogonal Codes (OOC) implemented earlier for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.
Stephens, Keri K.; Barrett, Ashley K.; Mahometa, Michael J.
2013-01-01
This study relies on information theory, social presence, and source credibility to uncover what best helps people grasp the urgency of an emergency. We surveyed a random sample of 1,318 organizational members who received multiple notifications about a large-scale emergency. We found that people who received 3 redundant messages coming through at…
An alternative technique for simulating volumetric cylindrical sources in the Morse code utilization
International Nuclear Information System (INIS)
Vieira, W.J.; Mendonca, A.G.
1985-01-01
In the solution of deep-penetration problems using the Monte Carlo method, calculation techniques and strategies are used in order to increase the particle population in the regions of interest. A common procedure is the coupling of bidimensional calculations, with (r,z) discrete ordinates transformed into source data, and tridimensional Monte Carlo calculations. An alternative technique for this procedure is presented. This alternative proved effective when applied to a sample problem. (F.E.) [pt
Saramekala, G. K.; Santra, Abirmoya; Dubey, Sarvesh; Jit, Satyabrata; Tiwari, Pramod Kumar
2013-08-01
In this paper, an analytical short-channel threshold voltage model is presented for a dual-metal-gate (DMG) fully depleted recessed source/drain (Re-S/D) SOI MOSFET. For the first time, the advantages of the recessed source/drain and of the dual-metal-gate structure are incorporated simultaneously in a fully depleted SOI MOSFET. Analytical surface potential models at the Si-channel/SiO2 interface and the Si-channel/buried-oxide (BOX) interface have been developed by solving the 2-D Poisson's equation in the channel region with appropriate boundary conditions, assuming a parabolic potential profile in the transverse direction of the channel. A threshold voltage model is then derived from the minimum surface potential in the channel. The developed model is analyzed extensively for a variety of device parameters, such as the oxide and silicon channel thicknesses, the thickness of the source/drain extension in the BOX, and the control-to-screen gate length ratio. The validity of the present 2-D analytical model is verified against ATLAS™, a 2-D device simulator from SILVACO Inc.
2016-02-15
series of sunny days interrupted only rarely by rain, a pattern now all too familiar to residents. Analogously, a one-dimensional spin system in a ... computation of Cq(L) that is independent of the diverging embedding dimension. Another source of difficulty is the exponentially increasing number of words ... alternatives. For example, more general quantum hidden Markov models (QHMMs) may yield a greater advantage [3]. Proving minimality among QHMMs is of great
Advanced Neutron Source Dynamic Model (ANSDM) code description and user guide
International Nuclear Information System (INIS)
March-Leuba, J.
1995-08-01
A mathematical model has been designed to simulate the dynamic behavior of the Advanced Neutron Source (ANS) reactor. Its main objective is to model important characteristics of the ANS systems as they are designed, updated, and employed; its primary design goal is to aid the development of safety and control features. During the simulations, the model has also proved useful for making design decisions for thermal-hydraulic systems. Model components, empirical correlations, and model parameters are discussed, and sample procedures are given. Modifications are cited, and significant development and application efforts are noted, focusing on the examination of the instrumentation required during and after accidents to ensure adequate monitoring during transient conditions.
Basic design of the HANARO cold neutron source using MCNP code
International Nuclear Information System (INIS)
Yu, Yeong Jin; Lee, Kye Hong; Kim, Young Jin; Hwang, Dong Gil
2005-01-01
The design of the Cold Neutron Source (CNS) for the HANARO research reactor is in progress. The CNS produces neutrons in the low energy range below 5 meV using liquid hydrogen at around 21.6 K as the moderator. The primary goals for the CNS design are to maximize the cold neutron flux at wavelengths of around 2-12 Å and to minimize the nuclear heat load. In this paper, the basic design of the HANARO CNS is described.
Locating a compact odor source using a four-channel insect electroantennogram sensor
Energy Technology Data Exchange (ETDEWEB)
Myrick, A J; Baker, T C [Chemical Ecology Laboratory, Department of Entomology, Pennsylvania State University, University Park, PA 16802 (United States)
2011-03-15
Here we demonstrate the feasibility of using an array of live insects to detect concentrated packets of odor and infer the location of an odor source (∼15 m away) using a backward Lagrangian dispersion model based on the Langevin equation. Bayesian inference allows uncertainty to be quantified, which is useful for robotic planning. The electroantennogram (EAG) is the biopotential developed between the tissue at the tip of an insect antenna and its base, which is due to the massed response of the olfactory receptor neurons to an odor stimulus. The EAG signal can carry tens of bits per second of information with a rise time as short as 12 ms (K A Justice 2005 J. Neurophysiol. 93 2233-9). Here, instrumentation including a GPS with a digital compass and an ultrasonic 2D anemometer has been integrated with an EAG odor detection scheme, allowing the location of an odor source to be estimated by collecting data at several downwind locations. Bayesian inference in conjunction with a Lagrangian dispersion model, taking into account detection errors, has been implemented, resulting in an estimate of the odor source location within 0.2 m of the actual location.
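The grid-based Bayesian estimation described above can be sketched as follows. This is an illustrative stand-in, not the paper's method: the exponential hit-probability model, the grid size, and all numeric values are assumptions replacing the backward Lagrangian dispersion model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate source locations on a 20 m x 20 m grid (1 m spacing).
grid = np.array([(x, y) for x in range(20) for y in range(20)], dtype=float)
true_source = np.array([12.0, 7.0])

def hit_prob(sensor, sources, scale=4.0):
    # Assumed detection model: the probability of sensing an odor packet
    # decays exponentially with distance to the source.
    d = np.linalg.norm(sources - sensor, axis=-1)
    return np.exp(-d / scale)

log_post = np.zeros(len(grid))   # log-posterior over the grid (uniform prior)
for _ in range(200):             # binary detections at 200 downwind positions
    sensor = rng.uniform(0.0, 20.0, size=2)
    detected = rng.random() < hit_prob(sensor, true_source)
    p = np.clip(hit_prob(sensor, grid), 1e-9, 1.0 - 1e-9)
    log_post += np.log(p) if detected else np.log1p(-p)

estimate = grid[np.argmax(log_post)]
print(estimate)
```

Pooling measurements from several positions concentrates the posterior near the true location, which is the same mechanism that lets the paper's full dispersion model localize the source to within 0.2 m.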
Zhang, Melvyn W B; Ho, Roger C M
2017-01-01
Dementia is known to be an illness that causes marked disability among elderly individuals. Patients living with dementia may also experience non-cognitive symptoms, including hallucinations, delusional beliefs, emotional lability, sexualized behaviours and aggression. According to the National Institute for Health and Care Excellence (NICE) guidelines, non-pharmacological techniques are typically the first-line option before adjuvant pharmacological options are considered. Reminiscence and music therapy are thus viable options. Lazar et al. [3] previously performed a systematic review of the use of technology to deliver reminiscence-based therapy to individuals living with dementia and highlighted that technology does have benefits in the delivery of reminiscence therapy. To date, however, there has been a paucity of M-health innovations in this area, and most current innovations are not personalized for each person living with dementia. Prior research has highlighted the utility of open-source repositories in bioinformatics research. The authors hope to explain how they tapped open-source repositories in the development of a personalized M-health reminiscence-therapy innovation for patients living with dementia. The availability of open-source code repositories has changed the way healthcare professionals and developers build smartphone applications today. Conventionally, a long iterative process is needed to develop a native application, mainly because of the need for native programming and coding, especially if the application requires interactive or personalizable features. Such repositories enable rapid and cost-effective application development; moreover, developers are able to innovate further, as less time is spent in the iterative process.
Self characterization of a coded aperture array for neutron source imaging
Energy Technology Data Exchange (ETDEWEB)
Volegov, P. L., E-mail: volegov@lanl.gov; Danly, C. R.; Guler, N.; Merrill, F. E.; Wilde, C. H. [Los Alamos National Laboratory, Los Alamos, New Mexico 87544 (United States); Fittinghoff, D. N. [Livermore National Laboratory, Livermore, California 94550 (United States)
2014-12-15
The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.
Bae, Tae-Eon; Wakabayashi, Yuki; Nakane, Ryosho; Takenaka, Mitsuru; Takagi, Shinichi
2018-04-01
Improvement in the performance of Ge-source/Si-channel heterojunction tunneling FETs (TFETs) with a high on-current/off-current (I on/I off) ratio and steep subthreshold swing (SS) is demonstrated. In this paper, we experimentally examine the effects of the annealing gas ambient [N2 and forming gas (4% H2/N2)] and of the doping concentration in the drain regions on the electrical characteristics of Ge/Si heterojunction TFETs. A minimum SS (SSmin) of 70.9 mV/dec and a large I on/I off ratio of 1.4 × 107 are realized by postmetallization annealing in forming gas. Steep SSmin and averaged SS (SSavr) values of 64.2 and 78.4 mV/dec, respectively, are also obtained at a low drain doping concentration. This improvement is attributable to the reduction in interface state density (D it) in the channel region and to the low leakage current in the drain region.
R and D toward a compact high-brilliance X-ray source based on channeling radiation
Energy Technology Data Exchange (ETDEWEB)
Piot, P.; Brau, C. A.; Gabella, W. E.; Choi, B. K.; Jarvis, J. D.; Lewellen, J. W.; Mendenhall, M. H.; Mihalcea, D. [Northern Illinois Center for Accelerator and Detector Development and Department of Physics, Northern Illinois University, DeKalb, IL 60115 (United States) and Accelerator Physics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235 (United States); Dept. of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235 (United States) and Vanderbilt Institute of Nanoscale Science and Engineering, Vanderbilt University, Nashville, TN 37235 (United States); Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235 (United States); Physics Department and Combat Systems, Naval Postgraduate School, Monterey, CA 93943 (United States); Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235 (United States); Northern Illinois Center for Accelerator and Detector Development and Department of Physics, Northern Illinois University, DeKalb, IL 60115 (United States)
2012-12-21
X-rays have been valuable to a large number of fields, including science, medicine, and security. Yet the availability of compact high-spectral-brilliance X-ray sources is limited. A technique to produce X-rays with spectral brilliance B ≈ 10¹² photons·(mm·mrad)⁻²·(0.1% BW)⁻¹·s⁻¹ is discussed. The method is based on the generation and acceleration of low-emittance field-emitted electron bunches. The bunches are then focused on a diamond crystal, thereby producing channeling radiation. In this paper, after presenting the overarching concept, we discuss the generation, acceleration and transport of the low-emittance bunches with parameters consistent with the production of high-brilliance X-rays through channeling radiation. We especially consider the example of the Advanced Superconducting Test Accelerator (ASTA), currently under construction at Fermilab, where a proof-of-principle experiment is in preparation.
Indian Academy of Sciences (India)
Shannon limit of the channel. Among the earliest discovered codes that approach the. Shannon limit were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding. 2. Linear Codes.
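The role the parity-check matrix plays can be illustrated with a toy example. The matrix below is illustrative only: a real LDPC matrix is far larger, with a sparse ("low-density") pattern of ones.

```python
import numpy as np

# Toy parity-check matrix H for a length-6 binary linear code.
# A word c is a codeword iff every parity check holds, i.e. H c = 0 (mod 2).
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
], dtype=int)

def is_codeword(c):
    """True if the syndrome H c mod 2 is the all-zero vector."""
    return not (H.dot(c) % 2).any()

print(is_codeword(np.array([1, 0, 0, 1, 0, 1])))  # True: every check is satisfied
print(is_codeword(np.array([1, 0, 0, 0, 0, 0])))  # False: checks 1 and 3 fail
```

In decoding, the unsatisfied checks (the nonzero syndrome entries) indicate which received bits are likely in error; iterative LDPC decoders exploit exactly this structure.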
Energy Technology Data Exchange (ETDEWEB)
Florio, L.A.; Harnoy, A. [Department of Mechanical Engineering, New Jersey Institute of Technology, University Heights, Newark, NJ 07102 (United States)
2007-09-15
A numerical investigation was conducted into an alternative method of natural convection enhancement by the transverse oscillations of a thin short plate, strategically positioned in close proximity to a rectangular heat source. The heat source is attached to a mounting board in a vertical channel. Two-dimensional laminar flow finite element studies were carried out with the oscillation parameters, the oscillating plate-heat source mean clearance spacing, and the oscillating plate position varied. Significant cooling was found for displacement amplitudes of at least one-third of the mean clearance together with frequencies (Re/√Gr) of over 2π, with the displacement being more critical to the cooling level. For the parameters investigated, up to a 52% increase in the local heat transfer coefficient relative to standard natural convection was obtained. The results indicate that this method can serve as a feasible, simpler, more energy- and space-efficient alternative to common methods of cooling for low-power dissipating devices operating at conditions just beyond the reach of pure natural convection. (author)
Delaunay Tetrahedralization of the Heart Based on Integration of Open Source Codes
International Nuclear Information System (INIS)
Pavarino, E; Neves, L A; Machado, J M; Momente, J C; Zafalon, G F D; Pinto, A R; Valêncio, C R; Godoy, M F de; Shiyou, Y; Nascimento, M Z do
2014-01-01
The Finite Element Method (FEM) is a numerical solution technique applied in many areas, such as simulations used in studies to improve cardiac ablation procedures. For this purpose, the meshes should have the same size and histological features as the structures of interest. Some methods and tools used to generate tetrahedral meshes are limited mainly by their conditions of use. In this paper, the integration of open-source software is presented as an alternative for solid modeling and automatic mesh generation. To demonstrate its efficiency, cardiac structures were considered as a first application context: atriums, ventricles, valves, arteries and pericardium. The proposed method is feasible for obtaining refined meshes in an acceptable time and with the quality required for simulations using the FEM.
Entropy of a bit-shift channel
Baggen, Stan; Balakirsky, Vladimir; Denteneer, Dee; Egner, Sebastian; Hollmann, Henk; Tolhuizen, Ludo; Verbitskiy, Evgeny
2006-01-01
We consider a simple transformation (coding) of an iid source called a bit-shift channel. This simple transformation occurs naturally in magnetic or optical data storage. The resulting process is not Markov of any order. We discuss methods of computing the entropy of the transformed process, and
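The block-entropy approach to estimating the entropy of such a transformed process can be sketched as follows. The sliding OR transform below is an assumed toy stand-in, not the bit-shift channel itself; like the bit-shift channel, its output is a transformation of an iid source that is generally not Markov of any finite order.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
x = (rng.random(100_000) < 0.3).astype(int)   # iid Bernoulli(0.3) source
y = x[1:] | x[:-1]                            # sliding-window transform (toy stand-in)

def block_entropy(seq, n):
    # Empirical Shannon entropy (bits) of length-n blocks.
    counts = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

# Conditional block entropies H(n) - H(n-1) decrease toward the entropy rate.
rates = [block_entropy(y, n) - block_entropy(y, n - 1) for n in (2, 3, 4)]
print(rates)
```

The successive differences give a decreasing sequence of upper bounds on the entropy rate of the transformed process, which is the quantity whose computation the abstract discusses.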
The Journey of a Source Line: How your Code is Translated into a Controlled Flow of Electrons
CERN. Geneva
2018-01-01
In this series we help you understand the bits and pieces that make your code command the underlying hardware. A multitude of layers translate and optimize source code, written in compiled and interpreted programming languages such as C++, Python or Java, to machine language. We explain the role and behavior of the layers in question in a typical usage scenario. While our main focus is on compilers and interpreters, we also talk about other facilities - such as the operating system, instruction sets and instruction decoders. Biography: Andrzej Nowak runs TIK Services, a technology and innovation consultancy based in Geneva, Switzerland. In the recent past, he co-founded and sold an award-winning Fintech start-up focused on peer-to-peer lending. Earlier, Andrzej worked at Intel and in the CERN openlab. At openlab, he managed a lab collaborating with Intel and was part of the Chief Technology Office, which set up next-generation technology projects for CERN and the openlab partners.
International Nuclear Information System (INIS)
Harben, P.E.; Boro, C.; Dorman, L.; Pulli, J.
2000-05-01
The hydroacoustic nuclear explosion monitoring regime, like its counterpart in seismic monitoring, requires ground truth calibration. Model predictions of travel times, blockages, reflections, diffractions, and waveform envelopes need to be verified with ground truth experiments, particularly in the high latitudes where models often fail. Although pressure-detonated explosives are a simple, reliable, and flexible way to generate an impulsive hydroacoustic calibration source at a desired depth, safety procedures, specialized training, and local regulations often preclude their use. This leaves few alternatives, since airguns and other marine seismic sources are designed for use only at shallow depths and hence do not couple effectively into the SOFAR channel, a necessary requirement for long-range propagation. Imploding spheres could be an effective source at mid-ocean depths and below, but development of a method to reliably break such spheres has been elusive. We designed and tested a prototype system to initiate catastrophic glass sphere failure at a prescribed depth. The system firmly holds a glass sphere in contact with a piston-ram assembly. The end cap on the cylinder confining the piston and opposing the ram has a rupture disk sealed to it. The rupture disk is calibrated to fail within 5% of its rated failure pressure, 1000 psi in our tests. Failure of the rupture disk results in a sudden inrush of high-pressure water into the air-filled piston chamber, driving the piston and attached ram towards the glass sphere. The spherecracker was first tested on Benthos Corp. flotation spheres. The spherecracker mechanism successfully punched a hole in the Benthos sphere at the nominal pressure of 1000 psi, or at about 700 meters depth, in each of four tests. Despite the violent inrush of high-pressure water, the spheres did not otherwise fail. We concluded that the Benthos spheres were too thick-walled to be used as an imploding source at nominal SOFAR channel
International Nuclear Information System (INIS)
Kress, T.S.
1985-04-01
The determination of severe accident source terms must, by necessity it seems, rely heavily on the use of complex computer codes. Source term acceptability therefore rests on the assessed validity of such codes. Consequently, one element of NRC's recent effort to reassess LWR severe accident source terms is a review of the status of validation of the computer codes used in the reassessment. The results of this review are the subject of this document. The separate review documents compiled in this report were used as a resource, along with the results of the BMI-2104 study by BCL and the QUEST study by SNL, to arrive at a more-or-less independent appraisal of the status of source term modeling at this time.
Zedini, Emna; Chelli, Ali; Alouini, Mohamed-Slim
2014-01-01
In this paper, we investigate the performance of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) and with code combining (CC) from an information-theoretic perspective over a point-to-point free-space optical (FSO) system. First, we introduce new closed-form expressions for the probability density function, the cumulative distribution function, the moment generating function, and the moments of an FSO link modeled by the Gamma fading channel subject to pointing errors and using intensity modulation with direct detection technique at the receiver. Based on these formulas, we derive exact results for the average bit error rate and the capacity in terms of Meijer's G functions. Moreover, we present asymptotic expressions by utilizing the Meijer's G function expansion and using the moments method, too, for the ergodic capacity approximations. Then, we provide novel analytical expressions for the outage probability, the average number of transmissions, and the average transmission rate for HARQ with IR, assuming a maximum number of rounds for the HARQ protocol. Besides, we offer asymptotic expressions for these results in terms of simple elementary functions. Additionally, we compare the performance of HARQ with IR and HARQ with CC. Our analysis demonstrates that HARQ with IR outperforms HARQ with CC.
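The information-theoretic distinction between the two HARQ flavors can be checked numerically. The Monte Carlo sketch below uses assumed i.i.d. Rayleigh-fading rounds with Gaussian signaling, a stand-in for the paper's Gamma-fading FSO channel with pointing errors; rate and SNR values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
R = 2.0                   # target rate, bits per channel use
snr = 10 ** (3.0 / 10)    # 3 dB average SNR
trials, max_rounds = 20_000, 3

g = rng.exponential(1.0, size=(trials, max_rounds))  # per-round power gains

# IR: each round sends new parity, so mutual information accumulates.
i_ir = np.log2(1 + snr * g).sum(axis=1)
# CC (Chase combining): each round repeats the codeword, so SNRs add up.
i_cc = np.log2(1 + snr * g.sum(axis=1))

p_out_ir = np.mean(i_ir < R)   # outage after max_rounds rounds
p_out_cc = np.mean(i_cc < R)
print(p_out_ir <= p_out_cc)    # True: IR never accumulates less information
```

The comparison holds trial by trial, since log2(1+a) + log2(1+b) ≥ log2(1+a+b), mirroring the paper's conclusion that HARQ with IR outperforms HARQ with CC.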
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized variable-block-size transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder used for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method that allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
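The threshold-driven coder selection at the heart of MBC can be sketched as follows. This is a minimal stand-in: zonal truncation of DCT coefficients plays the role of the paper's vector-quantized DCT coders, and the candidate rates and the distortion threshold are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def code_block(block, keep):
    # "Coder": keep only the lowest-frequency keep x keep DCT coefficients.
    C = dct_matrix(block.shape[0])
    coef = C @ block @ C.T
    mask = np.zeros_like(coef)
    mask[:keep, :keep] = 1
    return C.T @ (coef * mask) @ C

def mbc_encode(block, threshold=5.0):
    # Try the cheapest coder first; escalate when distortion exceeds the threshold.
    for keep in (2, 4, block.shape[0]):
        rec = code_block(block, keep)
        if np.sqrt(np.mean((block - rec) ** 2)) <= threshold:
            return keep, rec
    return block.shape[0], rec

keep, rec = mbc_encode(np.full((8, 8), 7.0))  # a flat block needs only the DC zone
print(keep)
```

Smooth regions are thus coded at low rate while detailed regions trigger the higher-rate coder, which is the mechanism that gives MBC its variable rate.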
Asp, Nils Edvin; Gomes, Vando José Costa; Ogston, Andrea; Borges, José Carlos Corrêa; Nittrouer, Charles Albert
2016-02-01
The tide-dominated eastern sector of the Brazilian Amazonian coast includes large mangrove areas and several estuaries, including the estuary associated with the Urumajó River. There, the dynamics of suspended sediments and the delivery mechanisms for mud to the tidal flats and mangroves are complex and were investigated in this study. Four longitudinal measuring campaigns were carried out, encompassing spring/neap tides and dry/rainy seasons. During spring tides, water levels were measured simultaneously at 5 points along the estuary. Currents, salinity, and suspended sediment concentrations (SSCs) were measured over the tidal cycle in a cross section at the middle sector of the estuary. Results show a marked turbidity maximum zone (TMZ) during the rainy season, with a 4-km upstream displacement from neap to spring tide. During the dry season, the TMZ was conspicuous only during neap tide, was displaced about 5 km upstream, and was substantially less apparent than during the rainy season. The results show that mud is being concentrated in the channel associated with the TMZ, especially during the rainy season. At this time, a substantial amount of mud is washed out of the mangroves into the estuarine channel, and the hydrodynamic/salinity conditions for TMZ formation are optimal. As expected, transport to the mangrove flats is most effective during spring tide and substantially reduced at neap tide, when the mangroves are not flooded. During the dry season, mud is resuspended from the bed in the TMZ sector and is a source of sediment delivered to the tidal flats and mangroves. The seasonal variation of the sediments on the seabed is in agreement with the variation of suspended sediments.
Study of the interference of plumes released from two near-ground point sources in an open channel
International Nuclear Information System (INIS)
Oskouie, Shahin N.; Wang, Bing-Chen; Yee, Eugene
2015-01-01
Highlights: • DNS study of turbulent dispersion and mixing of passive scalars. • Interference of two passive plumes in a boundary layer flow. • Cross correlation, co-spectra and coherency spectra of two plumes. - Abstract: The dispersion and mixing of passive scalars released from two near-ground point sources into an open-channel flow are studied using direct numerical simulation. A comparative study based on eight test cases has been conducted to investigate the effects of Reynolds number and source separation distance on the dispersion and interference of the two plumes. In order to determine the nonlinear relationship between the variance of concentration fluctuations of the total plume and those produced by each of the two plumes, the covariance of the two concentration fields is studied in both physical and spectral spaces. The results show that at the source height, the streamwise evolution of the cross correlation between the fluctuating components of the two concentration fields can be classified into four stages, which feature zero, destructive and constructive interferences and a complete mixing state. The characteristics of these four stages of plume mixing are further confirmed through an analysis of the pre-multiplied co-spectra and coherency spectra. From the coherency spectrum, it is observed that there exists a range of ‘leading scales’, which are several times larger than the Kolmogorov scale but are smaller than or comparable to the scale of the most energetic eddies of turbulence. At the leading scales, the mixing between the two interfering plumes is the fastest and the coherency spectrum associated with these scales can quickly approach its asymptotic value of unity.
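The cross-correlation diagnostic used above to classify the interference stages can be illustrated on synthetic signals. The two concentration series below are assumptions, constructed so that a shared large-eddy (meandering) motion displaces the plumes in opposition, mimicking the destructive-interference stage.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
shared = rng.normal(size=n)     # large-scale meandering seen by both plumes
c1 = 1.0 + 0.8 * shared + 0.3 * rng.normal(size=n)
c2 = 1.0 - 0.8 * shared + 0.3 * rng.normal(size=n)  # displaced in opposition

def cross_corr(a, b):
    # rho = <a' b'> / (sigma_a sigma_b), on the fluctuating components.
    af, bf = a - a.mean(), b - b.mean()
    return float(np.mean(af * bf) / (af.std() * bf.std()))

rho = cross_corr(c1, c2)
print(rho)   # about -0.88: destructive-interference regime
```

A strongly negative rho means the total-plume concentration variance is smaller than the sum of the individual variances, which is exactly the nonlinearity the covariance analysis in the paper quantifies.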
International Nuclear Information System (INIS)
Appiah-Ofori, F. F.
2014-07-01
The effects of gamma radiation heating and irradiation damage in the reactor vessel of the Ghana Research Reactor 1 (GHARR-1), a Miniature Neutron Source Reactor (MNSR), were assessed using an implicit control-volume finite-difference numerical computation and validated with the SRIM/TRIM code. It was assumed that 5.0 MeV gamma rays from the reactor core generate heat that is absorbed completely by the interior surface of the MNSR vessel, affecting its performance through induced displacement damage. This displacement damage results from the creation of lattice defects, which impair the vessel through the formation of point-defect clusters such as vacancies and interstitials; these can evolve into dislocation loops and networks, voids and bubbles, causing changes through the thickness of the vessel. The microscopic defects produced in the vessel by gamma radiation are referred to as radiation damage, while the resulting modifications of the macroscopic properties of the vessel are known as radiation effects. These radiation damage effects are of major concern for materials used in nuclear energy production. The overall objective of this study was to assess the effects of gamma radiation heating and damage in the GHARR-1 MNSR vessel with a well-developed mathematical model and with analytical and numerical solutions simulating the radiation damage in the vessel. The SRIM/TRIM code was used as a computational tool to determine the displacements per atom (dpa) associated with radiation damage, while the implicit control-volume finite-difference method was used to determine the temperature profile within the vessel due to gamma radiation heating. The methodology adopted in assessing gamma radiation heating in the vessel involved the development of the one-dimensional steady-state Fourier heat conduction equation with volumetric heat generation, solved both analytically and by the implicit control-volume finite-difference method, to determine the maximum temperature and
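The one-dimensional steady-state conduction problem described above can be sketched numerically. The parameter values below are illustrative, not GHARR-1 data: a slab with uniform volumetric heat generation q and both faces held at T0 solves -k d²T/dx² = q, whose analytic solution T(x) = T0 + q·x·(L-x)/(2k) lets us check the finite-difference result.

```python
import numpy as np

# Illustrative parameters (assumed, not GHARR-1 data).
k, q, L, T0, n = 40.0, 1e6, 0.05, 300.0, 101   # W/m/K, W/m^3, m, K, nodes
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Central-difference discretization: T[i-1] - 2 T[i] + T[i+1] = -q h^2 / k.
A = np.zeros((n, n))
b = np.full(n, -q * h * h / k)
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundary rows
b[0] = b[-1] = T0

T = np.linalg.solve(A, b)
T_exact = T0 + q * x * (L - x) / (2 * k)
print(np.max(np.abs(T - T_exact)))  # the quadratic solution is resolved exactly
```

The peak temperature rise q·L²/(8k) at mid-thickness is the quantity of interest when assessing gamma heating of the vessel wall.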
Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin
2017-02-01
We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳ 0.01 cm⁻¹ resolution in the opacity function (≲ 10³ points per wavenumber bin) may result in errors of ≳ 1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct “null-hypothesis” models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the null-hypothesis models consistently underpredict the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are somewhat insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).
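The diffusivity-factor finding above can be illustrated numerically. In pure absorption, the angle-integrated flux transmission through a layer of optical depth τ is 2E₃(τ) (E₃ being the third exponential integral), which a diffusivity factor D replaces with a single slant beam, exp(-Dτ). The sketch below is a standard textbook comparison, not HELIOS's actual implementation of the approximation.

```python
import numpy as np
from scipy.special import expn

def transmission_exact(tau):
    """Angle-integrated flux transmission through a purely absorbing
    plane-parallel layer of optical depth tau: 2 * E_3(tau)."""
    return 2.0 * expn(3, tau)

def transmission_diffusivity(tau, D=2.0):
    """Diffusivity-factor shortcut: replace the angular integral by a
    single beam at an effective slant path, exp(-D * tau)."""
    return np.exp(-D * tau)
```

For optically thin layers both expressions agree to first order in τ (each behaves as 1 - 2τ for D = 2), which is one way to motivate the choice D ≈ 2 in the pure-absorption limit.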
A new open-source code for spherically symmetric stellar collapse to neutron stars and black holes
International Nuclear Information System (INIS)
O'Connor, Evan; Ott, Christian D
2010-01-01
We present the new open-source spherically symmetric general-relativistic (GR) hydrodynamics code GR1D. It is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. We discretize the GRHD equations with a finite-volume scheme, employing piecewise-parabolic reconstruction and an approximate Riemann solver. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D. An approximate deleptonization scheme for the collapse phase and a neutrino-leakage/heating scheme for the postbounce epoch are included and described. We also derive the equations for effective rotation in 1D and implement them in GR1D. We present an array of standard test calculations and also show how simple analytic equations of state in combination with presupernova models from stellar evolutionary calculations can be used to study qualitative aspects of black hole formation in failing rotating core-collapse supernovae. In addition, we present a simulation with microphysical equations of state and neutrino leakage/heating of a failing core-collapse supernova and black hole formation in a presupernova model of a 40 M☉ zero-age main-sequence star. We find good agreement on the time of black hole formation (within 20%) and last stable protoneutron star mass (within 10%) with predictions from simulations with full Boltzmann neutrino radiation hydrodynamics.
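The finite-volume structure with an approximate Riemann solver that GR1D uses can be illustrated on a much simpler equation. The sketch below applies a Rusanov (local Lax-Friedrichs) flux to the inviscid Burgers equation on a periodic domain; it shares the conservative update structure of GR1D's scheme but none of its relativistic physics or piecewise-parabolic reconstruction.

```python
import numpy as np

def rusanov_step(u, dx, dt):
    """One finite-volume update of Burgers' equation u_t + (u^2/2)_x = 0
    using the Rusanov approximate Riemann solver on a periodic grid.
    A toy stand-in for GR1D's GRHD scheme: same conservative structure,
    much simpler physics."""
    f = 0.5 * u**2
    ur = np.roll(u, -1)                          # right neighbor
    fr = np.roll(f, -1)
    a = np.maximum(np.abs(u), np.abs(ur))        # local max wave speed
    flux = 0.5 * (f + fr) - 0.5 * a * (ur - u)   # flux at right face of cell i
    return u - dt / dx * (flux - np.roll(flux, 1))
```

Because each interface flux is added to one cell and subtracted from its neighbor, the update conserves the cell-averaged quantity to machine precision, which is the defining property of a finite-volume scheme.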
Balanced distributed coding of omnidirectional images
Thirumalai, Vijayaraghavan; Tosic, Ivana; Frossard, Pascal
2008-01-01
This paper presents a distributed coding scheme for the representation of 3D scenes captured by stereo omni-directional cameras. We consider a scenario where images captured from two different viewpoints are encoded independently, with a balanced rate distribution among the different cameras. The distributed coding is built on multiresolution representation and partitioning of the visual information in each camera. The encoder transmits one partition after entropy coding, as well as the syndrome bits resulting from the channel encoding of the other partition. The decoder exploits the intra-view correlation and attempts to reconstruct the source image by combination of the entropy-coded partition and the syndrome information. At the same time, it exploits the inter-view correlation using motion estimation between images from different cameras. Experiments demonstrate that the distributed coding solution performs better than a scheme where images are handled independently, and that the coding rate stays balanced between encoders.
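The syndrome mechanism at the heart of the scheme above can be shown with a toy code. The encoder transmits only the syndrome of one partition; the decoder combines it with correlated side information. The sketch below uses a (7,4) Hamming code and assumes the side information differs from the source in at most one position, a drastic simplification of the paper's inter-view correlation model.

```python
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])  # parity-check matrix of the (7,4) Hamming code

def syndrome(x):
    """Syndrome bits the encoder would transmit for source vector x."""
    return H @ x % 2

def decode_with_side_info(s, y):
    """Recover x from its syndrome s and a correlated side-information
    vector y differing from x in at most one position (toy stand-in for
    the inter-view correlation exploited by the paper's decoder)."""
    d = (s - syndrome(y)) % 2        # syndrome of the error pattern x XOR y
    if not d.any():
        return y.copy()
    # Columns of H spell the position in binary (values 1..7).
    pos = int(''.join(map(str, d[::-1])), 2) - 1
    x_hat = y.copy()
    x_hat[pos] ^= 1
    return x_hat
```

The rate saving is that 3 syndrome bits replace the 7 source bits, with the remaining information supplied by the decoder's side information.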
Implementation of inter-unit analysis for C and C++ languages in a source-based static code analyzer
Directory of Open Access Journals (Sweden)
A. V. Sidorin
2015-01-01
Full Text Available The proliferation of automated testing capabilities creates a need for thorough testing of large software systems, including their inter-component interfaces. The objective of this research is to build a method for inter-procedural, inter-unit analysis that allows us to analyse large and complex software systems, including multi-architecture projects (such as Android OS), and to support projects with complex build systems. Since the selected Clang Static Analyzer takes source code directly as input, a special technique is needed to enable inter-unit analysis in such an analyzer. The problem is peculiar to C and C++, whose language features assume and encourage separate compilation of project files. We describe the build-and-analysis system implemented around Clang Static Analyzer to enable inter-unit analysis and consider the problems related to supporting complex projects. We also consider the task of merging the abstract syntax trees of translation units and its related problems, such as handling conflicting definitions and supporting complex build systems and multi-architecture projects, with examples. We consider both issues rooted in language design and human mistakes (which may be intentional). We describe some heuristics used in this work to make the merging process faster. The developed system was tested on Android OS as input to show that it is applicable even to such complicated projects. The system does not depend on the inter-procedural analysis method and allows its algorithm to be changed arbitrarily.
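The conflicting-definitions problem mentioned above can be sketched with a toy index merger. This is a hypothetical structure for illustration only, not the Clang Static Analyzer's actual representation: each translation unit maps a symbol name to its definition, and ODR-style conflicts (same name, different body) are recorded rather than silently overwritten.

```python
def merge_units(units):
    """Toy sketch of cross-translation-unit index merging (hypothetical
    data layout, not the Clang Static Analyzer's format). `units` maps a
    unit name to {symbol: definition}. The first definition of each
    symbol wins; later, differing definitions are reported as conflicts."""
    merged, conflicts = {}, {}
    for unit_name, defs in units.items():
        for symbol, body in defs.items():
            if symbol in merged and merged[symbol][1] != body:
                # Same symbol, different body: an ODR-style conflict.
                conflicts.setdefault(symbol, []).append(unit_name)
            else:
                merged.setdefault(symbol, (unit_name, body))
    return merged, conflicts
```

Real cross-unit analysis must also handle identical redefinitions (legitimate inline functions), mangled names, and per-architecture variants, which is where most of the engineering effort described in the paper goes.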
International Nuclear Information System (INIS)
Caribe, Paulo Rauli Rafeson Vasconcelos; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil
2013-01-01
The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modeling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices on simulations computing absorbed fractions of energy (AFEs). Validation of the coupling was performed for internal photon sources with energies between 10 keV and 1 MeV, using spherical geometries described with GEANT4 solids and three-dimensional models with different numbers of vertices and triangular or quadrilateral faces built in the Blender program. No significant differences were found between the AFEs for objects described by mesh models and those described by GEANT4 solid volumes. Provided the shape and volume are maintained, decreasing the number of vertices used to describe an object does not significantly affect the dosimetric data, but it significantly decreases the time required for the dosimetric calculations, especially for energies below 100 keV
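The quantity being validated above, an absorbed fraction of energy, can be estimated with a deliberately minimal Monte Carlo model: a point photon source at the centre of a sphere, where every interaction deposits all energy locally (no scattering). This is far simpler than the GEANT4 transport in the paper and serves only to show the estimator's structure; the attenuation coefficient and radius below are arbitrary.

```python
import math
import random

def absorbed_fraction_mc(mu, R, n=200_000, seed=1):
    """Toy Monte Carlo estimate of the absorbed fraction for a point
    photon source at the centre of a sphere of radius R, with linear
    attenuation coefficient mu and no scattering: a photon is counted
    as absorbed if its sampled free path ends inside the sphere.
    The analytic answer in this model is 1 - exp(-mu * R)."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n):
        path = -math.log(1.0 - rng.random()) / mu  # exponential free path
        if path < R:
            absorbed += 1
    return absorbed / n
```

Checking such an estimator against its closed-form answer mirrors, in miniature, the paper's validation of mesh geometries against GEANT4's analytic solids.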
International Nuclear Information System (INIS)
Gara, P.; Martin, E.
1983-01-01
The CANAL code presented here optimizes a realistic iron-free extraction channel that must provide a given transverse magnetic-field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts, and move within a restricted transverse area; terminal connectors may be added, and images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr
Vector Network Coding Algorithms
Ebrahimi, Javad; Fragouli, Christina
2010-01-01
We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a role similar to that of coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
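The node operation described above can be sketched directly: each incoming length-L packet is multiplied by an L x L coding matrix over GF(2) and the results are summed. This is a generic illustration of the vector-coding operation, not the paper's coefficient-selection algorithm.

```python
import numpy as np

def combine(packets, matrices):
    """Vector network coding at an intermediate node: each incoming
    length-L packet (a GF(2) vector) is multiplied by its L x L coding
    matrix and the results are summed mod 2. The matrices play the role
    that scalar coefficients play in scalar network coding."""
    out = np.zeros(len(packets[0]), dtype=int)
    for p, M in zip(packets, matrices):
        out = (out + M @ p) % 2
    return out
```

With identity coding matrices the node degenerates to a plain XOR of its inputs, i.e. scalar coding with all coefficients equal to 1; non-trivial matrices give the larger design space that vector coding exploits.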
Directory of Open Access Journals (Sweden)
Hyoung Tae Kim
2016-01-01
Full Text Available The moderator system of CANDU, a prototype PHWR (pressurized heavy-water reactor), has been modeled multidimensionally for computation based on CFD (computational fluid dynamics) techniques. Three CFD codes are tested on modeled hydrothermal systems of heavy-water reactors: the commercial codes COMSOL Multiphysics and ANSYS-CFX, and the open-source code OpenFOAM, applied to various simplified and practical problems. All the implemented codes are tested on a benchmark problem, the STERN laboratory experiment, with precise modeling of the tubes; they are compared with each other, with the measured data, and with a porous model based on an experimental pressure-drop correlation. The effect of the turbulence model on these low-Reynolds-number flows is also discussed. As a result, the codes are shown to be successful for the analysis of three-dimensional numerical models of the calandria system of CANDU reactors.
Directory of Open Access Journals (Sweden)
Liangliang Wei
2018-02-01
Full Text Available To effectively de-noise the Gaussian white noise and periodic narrow-band interference in the background of partial discharge ultra-high-frequency (PD UHF) signals in field tests, a novel de-noising method based on a single-channel blind source separation algorithm is proposed. Compared with traditional methods, the proposed method de-noises the interference more effectively, and the distortion of the de-noised PD signal is smaller. First, the PD UHF signal is time-frequency analyzed by the S-transform to obtain the number of source signals. Then, the single-channel detected PD signal is converted into multi-channel signals by singular value decomposition (SVD), and the background noise is separated from the multi-channel PD UHF signals by the joint approximate diagonalization of eigen-matrices method. Finally, the source PD signal is estimated and recovered by l1-norm minimization. The proposed de-noising method was applied to simulated signals and to signals detected in field tests, and the de-noising performance of the different methods was compared. The simulation and field-test results demonstrate the effectiveness and correctness of the proposed method.
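The single-channel-to-multi-channel conversion via SVD can be sketched with a trajectory-matrix embedding. The code below is one common construction (Hankel embedding plus rank-k SVD reconstruction, as in singular spectrum analysis); it is a stand-in for the paper's SVD step, not its exact algorithm, and omits the S-transform, JADE separation, and l1-norm recovery stages.

```python
import numpy as np

def hankelize(x, m):
    """Embed a single-channel signal into an m-row trajectory (Hankel)
    matrix, turning one channel into pseudo-multichannel data suitable
    for SVD-based separation."""
    n = len(x) - m + 1
    return np.array([x[i:i + n] for i in range(m)])

def dominant_component(x, m, k=1):
    """Reconstruct the signal from the k largest singular components,
    averaging the anti-diagonals of the rank-k trajectory matrix."""
    X = hankelize(x, m)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k approximation
    out = np.zeros(len(x))
    counts = np.zeros(len(x))
    for i in range(m):                         # average anti-diagonals
        out[i:i + Xk.shape[1]] += Xk[i]
        counts[i:i + Xk.shape[1]] += 1
    return out / counts
```

A noiseless sinusoid yields a rank-2 trajectory matrix, so keeping the two dominant singular components of a noisy sinusoid suppresses much of the broadband noise, which is the intuition behind using SVD as a pre-separation step.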