WorldWideScience

Sample records for adaptive modulation coding

  1. Adaptive Modulation and Coding for LTE Wireless Communication

    Science.gov (United States)

    Hadi, S. S.; Tiong, T. C.

    2015-04-01

Long Term Evolution (LTE) is the upgrade path for carriers with both GSM/UMTS and CDMA2000 networks, and it aims to become the first global mobile phone standard despite the barrier posed by the different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase network capacity and downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for a 4×4 MIMO antenna configuration are studied. With channel state information fed back from the mobile receiver to the base-station transmitter, adaptive modulation and coding can adapt to the condition of the mobile wireless channel, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose the modulation type and the forward error correction (FEC) coding rate.
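The AMC principle this record describes reduces to a threshold lookup: the receiver's channel-quality report selects the most spectrally efficient modulation-and-coding pair whose SNR requirement is met. A minimal Python sketch; the thresholds and MCS entries are illustrative placeholders, not LTE-specification values:

```python
# Illustrative MCS table: (min SNR in dB, modulation, bits/symbol, code rate).
# Values are placeholders for the idea, not 3GPP-specified thresholds.
MCS_TABLE = [
    (1.0,  "QPSK",  2, 1/3),
    (5.0,  "QPSK",  2, 1/2),
    (9.0,  "16QAM", 4, 1/2),
    (13.0, "16QAM", 4, 3/4),
    (17.0, "64QAM", 6, 3/4),
]

def select_mcs(snr_db):
    """Return the most spectrally efficient MCS supported at this SNR."""
    chosen = None
    for min_snr, mod, bits, rate in MCS_TABLE:
        if snr_db >= min_snr:
            chosen = (mod, bits, rate)
    return chosen  # None means the SNR is too low for any scheme

def spectral_efficiency(snr_db):
    """Bits per symbol actually delivered (modulation order times code rate)."""
    mcs = select_mcs(snr_db)
    return 0.0 if mcs is None else mcs[1] * mcs[2]
```

At 10 dB this sketch picks rate-1/2 16QAM; below 1 dB it declines to transmit, mirroring how AMC trades robustness for throughput.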

  2. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    Science.gov (United States)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coding and modulation with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a 100 km span of standard single-mode fiber. The receiver improves decoding accuracy by passing soft information between layers, which enhances the performance. Simulations of an intensity-modulation direct-detection optical communication system are carried out in MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 at an optical signal-to-noise ratio of 20.7 dB. It also reduces the number of decoders by 72% and realizes adaptation over 22 rates without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.

  3. Satellite Media Broadcasting with Adaptive Coding and Modulation

    Directory of Open Access Journals (Sweden)

    Georgios Gardikis

    2009-01-01

Full Text Available Adaptive Coding and Modulation (ACM) is a feature incorporated into the DVB-S2 satellite specification, allowing real-time adaptation of the transmission parameters according to the link conditions. Although ACM was originally designed for optimizing unicast services, this article discusses extending its use to broadcast streams as well. For this purpose, a general cross-layer adaptation approach is proposed, along with its realization as a fully functional experimental network, and test results are presented. Finally, two case studies are analysed, assessing the gain derived from ACM in a real large-scale deployment involving the provision of HD services to two different geographical areas.

  4. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Burr Alister

    2009-01-01

This paper discusses the application of adaptive modulation and adaptive-rate turbo coding to orthogonal frequency-division multiplexing (OFDM) to increase throughput on time- and frequency-selective channels. The adaptive turbo-coding scheme is based on a subband adaptive method and compares two adaptive systems: a conventional approach in which a separate turbo code is used for each subband, and a single-turbo-code adaptive system that uses one turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed, and the turbo code rates considered are 1/2 and 1/3. The performance of both systems with high (10^-2) and low (10^-4) BER targets is compared. Simulation results for throughput and BER show that the single-turbo-code adaptive system provides a significant improvement.

  5. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Lei Ye

    2009-01-01

This paper discusses the application of adaptive modulation and adaptive-rate turbo coding to orthogonal frequency-division multiplexing (OFDM) to increase throughput on time- and frequency-selective channels. The adaptive turbo-coding scheme is based on a subband adaptive method and compares two adaptive systems: a conventional approach in which a separate turbo code is used for each subband, and a single-turbo-code adaptive system that uses one turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed, and the turbo code rates considered are 1/2 and 1/3. The performance of both systems with high (10^-2) and low (10^-4) BER targets is compared. Simulation results for throughput and BER show that the single-turbo-code adaptive system provides a significant improvement.
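The subband adaptation described in this record reduces to a per-subband threshold test: each subband's SNR selects the densest constellation it can support at the BER target. A hedged Python sketch; the threshold values are illustrative placeholders, not the paper's optimized switching levels (which would be tighter for the 10^-4 target than for 10^-2):

```python
# Illustrative switching thresholds (dB) for each constellation; a real
# system derives these from the coded BER target.
THRESHOLDS_DB = {"BPSK": 4.0, "QPSK": 7.0, "8AMPM": 11.0,
                 "16QAM": 14.0, "64QAM": 20.0}
BITS = {"BPSK": 1, "QPSK": 2, "8AMPM": 3, "16QAM": 4, "64QAM": 6}

def assign_subbands(subband_snrs_db):
    """Map each subband SNR to the densest constellation it supports."""
    plan = []
    for snr in subband_snrs_db:
        best = None
        for mod, thr in THRESHOLDS_DB.items():
            if snr >= thr and (best is None or BITS[mod] > BITS[best]):
                best = mod
        plan.append(best)          # None = subband left unused
    return plan

plan = assign_subbands([3.0, 8.5, 15.2, 25.0])
```

Weak subbands carry nothing or a sparse constellation; strong ones carry 64QAM, which is the mechanism behind the throughput gains reported above.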

  6. Quadrature amplitude modulation from basics to adaptive trellis-coded turbo-equalised and space-time coded OFDM CDMA and MC-CDMA systems

    CERN Document Server

    Hanzo, Lajos

    2004-01-01

    "Now fully revised and updated, with more than 300 pages of new material, this new edition presents the wide range of recent developments in the field and places particular emphasis on the family of coded modulation aided OFDM and CDMA schemes. In addition, it also includes a fully revised chapter on adaptive modulation and a new chapter characterizing the design trade-offs of adaptive modulation and space-time coding." "In summary, this volume amalgamates a comprehensive textbook with a deep research monograph on the topic of QAM, ensuring it has a wide-ranging appeal for both senior undergraduate and postgraduate students as well as practicing engineers and researchers."--Jacket.

  7. Intrinsic gain modulation and adaptive neural coding.

    Directory of Open Access Journals (Sweden)

    Sungho Hong

    2008-07-01

In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and the gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate versus current (f-I) curve changes with the variance of the background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive relationships connecting the change of the gain with respect to both mean and variance to the receptive fields obtained from reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.

  8. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach: in the first stage, an original protograph is lifted to a slightly larger intermediate protograph; the intermediate protograph is then lifted via a circulant matrix to the desired codeword length to form a protograph-based low-density parity-check code.
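The two-stage lifting can be sketched directly: each stage replaces the nonzero entries of a matrix with permutation blocks. For simplicity, this sketch uses random circulants at both stages (the first stage may in practice use general permutations), and the base protograph and lift sizes are illustrative:

```python
import numpy as np

def circulant(z, shift):
    """z-by-z circulant permutation: the identity cyclically shifted."""
    return np.roll(np.eye(z, dtype=int), shift, axis=1)

def lift(base, z, rng):
    """Replace each 1 in `base` with a random circulant, each 0 with zeros."""
    rows, cols = base.shape
    H = np.zeros((rows * z, cols * z), dtype=int)
    for i in range(rows):
        for j in range(cols):
            if base[i, j]:
                H[i*z:(i+1)*z, j*z:(j+1)*z] = circulant(z, rng.integers(z))
    return H

rng = np.random.default_rng(0)
proto = np.array([[1, 1, 1, 0],
                  [0, 1, 1, 1]])      # toy 2x4 protograph
H_mid = lift(proto, 3, rng)           # stage 1: 6 x 12 intermediate graph
H = lift(H_mid, 5, rng)               # stage 2: 30 x 60 parity-check matrix
```

Note how the row weight of the base protograph (3 here) is preserved through both lifts, which is what lets a small, well-designed protograph fix the degree structure of a much longer code.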

  9. Adaptive Combined Source and Channel Decoding with Modulation ...

    African Journals Online (AJOL)

In this paper, an adaptive system employing combined source and channel decoding with modulation is proposed for slow Rayleigh fading channels. A Huffman code is used as the source code, and a convolutional code is used for error control. The adaptive scheme employs a family of convolutional codes of different rates ...

  10. Adaptive Coding and Modulation Experiment With NASA's Space Communication and Navigation Testbed

    Science.gov (United States)

    Downey, Joseph; Mortensen, Dale; Evans, Michael; Briones, Janette; Tollis, Nicholas

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed is an advanced integrated communication payload on the International Space Station. This paper presents results from an adaptive coding and modulation (ACM) experiment over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options, and uses the Space Data Link Protocol (Consultative Committee for Space Data Systems (CCSDS) standard) for the uplink and downlink data framing. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Several approaches for improving the ACM system are presented, including predictive and learning techniques to accommodate signal fades. Performance of the system is evaluated as a function of end-to-end system latency (round-trip delay), and compared to the capacity of the link. Finally, improvements over standard NASA waveforms are presented.
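The latency problem the experiment highlights, decisions made on stale channel reports, can be illustrated with a toy controller. The sketch below assumes a fixed feedback delay and a back-off margin; the MODCOD names follow DVB-S2, but the thresholds, margin, and fade trace are illustrative:

```python
from collections import deque

# Illustrative decoding thresholds (dB) for a few DVB-S2-style MODCODs.
THRESHOLDS = [(2.0, "QPSK 1/4"), (6.0, "QPSK 3/4"),
              (10.0, "8PSK 2/3"), (14.0, "16APSK 3/4")]

def run_acm(snr_trace, delay=2, margin_db=1.0):
    """Choose an MCS each interval from an SNR report `delay` intervals old."""
    reports = deque([snr_trace[0]] * delay)   # feedback pipeline of stale reports
    decisions = []
    for snr in snr_trace:
        reported = reports.popleft()          # report from `delay` steps ago
        reports.append(snr)
        effective = reported - margin_db      # back-off margin against fades
        mcs = "no tx"
        for thr, name in THRESHOLDS:
            if effective >= thr:
                mcs = name
        decisions.append(mcs)
    return decisions

decisions = run_acm([12, 12, 11, 5, 4, 4, 11, 12])  # a fade mid-pass
```

During the fade the controller keeps a too-fast MODCOD for `delay` intervals, and after recovery it stays conservative for the same lag; this is the behavior the predictive and learning techniques mentioned above aim to reduce.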

  11. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    Science.gov (United States)

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose a rate-adaptive, polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency, thanks to symbol-level instead of bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper presents, compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  12. UEP Concepts in Modulation and Coding

    Directory of Open Access Journals (Sweden)

    Werner Henkel

    2010-01-01

The first unequal error protection (UEP) proposals date back to the 1960s (Masnick and Wolf, 1967), but with the introduction of scalable video, UEP has developed into a key concept for the transport of multimedia data. The paper presents an overview of some new approaches realizing UEP properties in physical transport, especially multicarrier modulation, or with LDPC and Turbo codes. For multicarrier modulation, UEP bit-loading together with hierarchical modulation is described, allowing for an arbitrary number of classes, arbitrary SNR margins between the classes, and an arbitrary number of bits per class. In Turbo coding, pruning, as a counterpart of puncturing, is presented for flexible bit-rate adaptation, including tables with optimized pruning patterns. Bit- and/or check-irregular LDPC codes may be designed to provide UEP to their code bits. However, irregular degree distributions alone do not ensure UEP, and other necessary properties of the parity-check matrix for providing UEP are also pointed out. Pruning is also the means for constructing variable-rate LDPC codes for UEP, especially for controlling the check-node profile.
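The UEP bit-loading idea, granting more SNR margin to the better-protected class, can be sketched with the standard gap approximation: subcarrier i carries floor(log2(1 + SNR_i/gap)) bits, with a larger gap for the high-protection class. The gap values and SNRs below are illustrative:

```python
import math

def bits_per_carrier(snr_db, gap_db):
    """Gap-approximation bit load for one subcarrier."""
    snr = 10 ** (snr_db / 10)
    gap = 10 ** (gap_db / 10)
    return int(math.log2(1 + snr / gap))

def load(subcarrier_snrs_db, class_of_carrier, gaps_db):
    """gaps_db[c] is the SNR margin (dB) required by protection class c."""
    return [bits_per_carrier(s, gaps_db[class_of_carrier[i]])
            for i, s in enumerate(subcarrier_snrs_db)]

# Two classes: 0 = high protection (6 dB margin), 1 = low protection (3 dB).
bits = load([20.0, 20.0, 9.0], [0, 1, 0], {0: 6.0, 1: 3.0})
```

At the same 20 dB SNR, the high-protection carrier loads fewer bits than the low-protection one; that surplus margin is exactly the unequal protection.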

  13. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  14. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dongyul Lee

    2014-01-01

The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
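The allocation problem described above can be shown at toy scale as a brute-force stand-in for the ILP: enumerate MCS assignments per SVC layer, keep only those that fit the frame's airtime, and maximize the summed user throughput. All rates, user counts, and layer sizes are illustrative; a real instance would use an ILP solver:

```python
import itertools

MCS_RATES = [1.0, 2.0, 4.0]     # bits/s/Hz for MCS 0 (robust) .. MCS 2 (fast)
USERS_PER_MCS = [10, 6, 3]      # users whose channel supports MCS i or better
LAYER_BITS = [0.8, 0.6, 0.6]    # base layer + two enhancement layers

def best_assignment():
    """Exhaustively search the MCS-per-layer assignment."""
    best = (0.0, None)
    for mcs in itertools.product(range(3), repeat=len(LAYER_BITS)):
        if list(mcs) != sorted(mcs):
            continue            # enhancement layers never more robust than base
        airtime = sum(LAYER_BITS[l] / MCS_RATES[m] for l, m in enumerate(mcs))
        if airtime > 1.0 + 1e-9:
            continue            # the layers must fit within one frame
        # A user receives layer l only if its channel supports that layer's MCS.
        throughput = sum(LAYER_BITS[l] * USERS_PER_MCS[m]
                         for l, m in enumerate(mcs))
        if throughput > best[0]:
            best = (throughput, mcs)
    return best
```

In this toy instance the optimum sends every layer at the middle MCS: a faster MCS would reach fewer users, while the most robust MCS cannot fit all layers into the frame.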

  15. Adaptable recursive binary entropy coding technique

    Science.gov (United States)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that suits the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.

  16. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low-density parity-check accumulate (LDPCA) codes.
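The syndrome mechanism underlying such schemes can be shown at toy scale: a (7,4) Hamming parity-check matrix compresses 7 source bits into a 3-bit syndrome, and the decoder recovers X from the correlated side information Y whenever they differ in at most one position. A minimal sketch (in a real rate-adaptive scheme, a decoding failure would trigger a feedback request for additional syndrome bits):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: the encoder transmits only
# the 3-bit syndrome H @ x, a 7:3 compression of the source block.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(x):
    return H @ x % 2                      # syndrome of the source block

def decode(syndrome, y):
    """Recover x from side information y, assuming x and y differ in <= 1 bit."""
    diff = (H @ y + syndrome) % 2         # equals H @ (x XOR y)
    x_hat = y.copy()
    if diff.any():                        # nonzero: locate the differing bit
        col = int(np.flatnonzero((H == diff[:, None]).all(axis=0))[0])
        x_hat[col] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1                   # side information, one bit off
x_rec = decode(encode(x), y)              # recovers x exactly
```

The highly skewed X-given-Y distribution in the paper corresponds to the "at most one differing bit" regime here, which is what makes very short syndromes sufficient.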

  17. Adaptive variable-length coding for efficient compression of spacecraft television data.

    Science.gov (United States)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

An adaptive variable-length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample-to-sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the need to store any code words. Performance improvements of 0.5 bit/pixel can be achieved simply by utilizing previous-line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code-modulation system.
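The per-block code selection can be sketched as a cost comparison: for each 21-pixel block of residuals, compute the encoded length under each candidate code and pay a small identifier overhead for the winner. The three candidate codes here are simplified stand-ins for the actual concatenated codes:

```python
def unary_cost(block):          # cheap for small residuals
    return sum(v + 1 for v in block)

def fixed4_cost(block):         # 4 bits/sample, usable only if all values fit
    return 4 * len(block) if max(block) < 16 else float("inf")

def raw8_cost(block):           # fallback for incompressible data
    return 8 * len(block)

CODES = [("unary", unary_cost), ("fixed4", fixed4_cost), ("raw8", raw8_cost)]

def encode_cost(samples, block_size=21):
    """Total encoded size in bits, choosing the cheapest code per block."""
    total, choices = 0, []
    for i in range(0, len(samples), block_size):
        block = samples[i:i + block_size]
        name, cost = min(((n, c(block)) for n, c in CODES), key=lambda t: t[1])
        total += cost + 2       # 2-bit code identifier per block
        choices.append(name)
    return total, choices
```

A block of near-zero residuals selects the unary code, while a noisy block falls back to raw storage, so no block ever expands by more than the identifier overhead.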

  18. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    Science.gov (United States)

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization-multiplexed 16-quadrature-amplitude-modulation transmission over a 100 km fiber link, enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back to the transmitter side using control-plane logic and messaging for code adaptation, where the binary data are adaptively encoded with three types of large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5%, respectively, are obtained by adaptive LDPC coding.
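The control-plane logic amounts to a threshold map from the monitored OSNR to the highest-rate code that still decodes reliably. A sketch keeping the paper's three code rates; the OSNR thresholds are illustrative placeholders, not the measured decoding limits:

```python
# (min OSNR in dB, LDPC code rate), sorted from highest rate downward.
# Threshold values are assumed for illustration.
LDPC_OPTIONS = [
    (18.0, 0.8),
    (16.0, 0.75),
    (14.0, 0.7),
]

def pick_code_rate(osnr_db):
    """Return the highest code rate whose OSNR threshold is cleared."""
    for min_osnr, rate in LDPC_OPTIONS:
        if osnr_db >= min_osnr:
            return rate
    return None   # below even the most robust code's threshold
```

As the monitored OSNR degrades, the link gives up net capacity (rate 0.8 down to 0.7) in exchange for coding margin, which is the adaptation the 2273 ms latency figure bounds.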

  19. On the feedback error compensation for adaptive modulation and coding scheme

    KAUST Repository

    Choi, Seyeong; Yang, Hong-Chuan; Alouini, Mohamed-Slim

    2011-01-01

In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme, which was previously studied under the assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate.

  20. Analysis of ASTEC code adaptability to severe accident simulation for CANDU type reactors

    International Nuclear Information System (INIS)

    Constantin, Marin; Rizoiu, Andrei

    2008-01-01

In order to prepare the adaptation of the ASTEC code to CANDU NPP severe accident analysis, two kinds of activities were performed: analyses of the ASTEC modules from the point of view of models and options, followed by exploratory CANDU calculations for the appropriate modules/models; and preparation of the specifications for the ASTEC adaptation to CANDU NPPs. The paper is structured in three parts: a comparison of the PWR and CANDU concepts (from the point of view of severe accident phenomena); exploratory calculations with some ASTEC modules (SOPHAEROS, CPA, IODE, CESAR, DIVA) for problems specific to CANDU-type reactors; and an analysis of development needs (algorithms, methods, modules). (authors)

  1. On the feedback error compensation for adaptive modulation and coding scheme

    KAUST Repository

    Choi, Seyeong

    2011-11-25

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error, in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate. The benefit of feedback error compensation with adaptive combining is also quantified. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. Copyright (c) 2011 John Wiley & Sons, Ltd.

  2. Module type plant system dynamics analysis code (MSG-COPD). Code manual

    International Nuclear Information System (INIS)

    Sakai, Takaaki

    2002-11-01

MSG-COPD is a module-type plant system dynamics analysis code that includes a multi-dimensional thermal-hydraulics calculation module for analyzing pool-type fast breeder reactors. This code manual explains each module and the methods for preparing the input data. (author)

  3. Coded Modulation in C and MATLAB

    Science.gov (United States)

    Hamkins, Jon; Andrews, Kenneth S.

    2011-01-01

    This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.

  4. Uplink capacity of multi-class IEEE 802.16j relay networks with adaptive modulation and coding

    DEFF Research Database (Denmark)

    Wang, Hua; Xiong, C; Iversen, Villy Bæk

    2009-01-01

The emerging IEEE 802.16j mobile multi-hop relay (MMR) network is currently being developed to increase user throughput and extend service coverage as an enhancement of the existing 802.16e standard. In 802.16j, intermediate relay stations (RSs) help the base station (BS) communicate with those mobile stations (MSs) that are either too far away from the BS or placed in an area where direct communication with the BS experiences an unsatisfactory level of service. In this paper, we investigate the uplink Erlang capacity of a two-hop 802.16j relay system supporting both voice and data traffic, with an adaptive modulation and coding (AMC) scheme applied in the physical layer. We first develop analytical models to calculate the blocking probability in the access zone and the outage probability in the relay zone, respectively. Then a joint algorithm is proposed to determine the bandwidth distribution ...
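In the simplest single-class case, the access-zone blocking probability such an analysis needs reduces to the Erlang-B formula. A sketch using the standard numerically stable recursion (the multi-class AMC model in the paper generalizes this to a multi-dimensional loss system):

```python
def erlang_b(servers, offered_load):
    """Blocking probability for `servers` channels at `offered_load` Erlangs,
    computed with the stable recursion B(m) = a*B(m-1) / (m + a*B(m-1))."""
    b = 1.0                               # B(0) = 1
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b
```

For example, one channel offered 1 Erlang blocks half of the calls, while a second channel drops the blocking probability to 0.2.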

  5. Multiuser Diversity with Adaptive Modulation in Non-Identically Distributed Nakagami Fading Environments

    KAUST Repository

    Rao, Anlei

    2012-09-08

In this paper, we analyze the performance of adaptive modulation with single-cell multiuser scheduling over independent but not identically distributed (i.n.i.d.) Nakagami fading channels. Closed-form expressions are derived for the average channel capacity, spectral efficiency, and bit error rate (BER) for both constant-power variable-rate and variable-power variable-rate uncoded/coded M-ary quadrature amplitude modulation (M-QAM) schemes. We also study the impact of time delay on the average BER of adaptive M-QAM. Selected numerical results show that multiuser diversity brings considerably better performance, even in i.n.i.d. fading environments.
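The constant-power variable-rate policy can be sketched with the widely used approximation BER ≈ 0.2·exp(-1.5γ/(M-1)) for M-QAM: pick the largest constellation whose predicted BER meets the target at the instantaneous SNR γ. The SNR values and the target below are illustrative:

```python
import math

def qam_ber(snr_linear, M):
    """Common closed-form approximation of M-QAM bit error rate."""
    return 0.2 * math.exp(-1.5 * snr_linear / (M - 1))

def pick_constellation(snr_db, ber_target=1e-3, options=(4, 16, 64, 256)):
    """Largest M-QAM meeting the BER target; 0 = no transmission this block."""
    snr = 10 ** (snr_db / 10)
    best = 0
    for M in options:
        if qam_ber(snr, M) <= ber_target:
            best = M
    return best
```

The thresholds implied by this rule partition the SNR axis into rate regions, which is exactly the structure behind the closed-form spectral-efficiency expressions in the record above.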

  6. Combined Coding And Modulation Using Runlength Limited Error ...

    African Journals Online (AJOL)

    In this paper we propose a Combined Coding and Modulation (CCM) scheme employing RLL/ECCs and MPSK modulation as well as RLL/ECC codes and BFSK/MPSK modulation with a view to optimise on channel bandwidth. The CCM codes and their trellis are designed and their error performances simulated in AWGN ...

  7. On decoding of multi-level MPSK modulation codes

    Science.gov (United States)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch and path metrics, using a non-uniform floating-point-to-integer mapping scheme, is proposed and discussed. Simulation results of the design are presented. Multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered, and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD drastically reduces the decoding complexity and is suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed, and their coding gains are found for hard-decision multistage decoding.

  8. Multi-stage decoding of multi-level modulation codes

    Science.gov (United States)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small, only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.

  9. Attention modulates visual size adaptation.

    Science.gov (United States)

    Kreutzer, Sylvia; Fink, Gereon R; Weidner, Ralph

    2015-01-01

    The current study determined in healthy subjects (n = 16) whether size adaptation occurs at early, i.e., preattentive, levels of processing or whether higher cognitive processes such as attention can modulate the illusion. To investigate this issue, bottom-up stimulation was kept constant across conditions by using a single adaptation display containing both small and large adapter stimuli. Subjects' attention was directed to either the large or small adapter stimulus by means of a luminance detection task. When attention was directed toward the small as compared to the large adapter, the perceived size of the subsequent target was significantly increased. Data suggest that different size adaptation effects can be induced by one and the same stimulus depending on the current allocation of attention. This indicates that size adaptation is subject to attentional modulation. These findings are in line with previous research showing that transient as well as sustained attention modulates visual features, such as contrast sensitivity and spatial frequency, and influences adaptation in other contexts, such as motion adaptation (Alais & Blake, 1999; Lankheet & Verstraten, 1995). Based on a recently suggested model (Pooresmaeili, Arrighi, Biagi, & Morrone, 2013), according to which perceptual adaptation is based on local excitation and inhibition in V1, we conclude that guiding attention can boost these local processes in one or the other direction by increasing the weight of the attended adapter. In sum, perceptual adaptation, although reflected in changes of neural activity at early levels (as shown in the aforementioned study), is nevertheless subject to higher-order modulation.

  10. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. The algorithm increases the image compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. An adaptive model for each encoded image block dynamically estimates that block's symbol probabilities. The decoder can then accurately recover each encoded image block from the code-book information. Adopting an adaptive arithmetic coding algorithm for image compression greatly improves the compression rate. The results show that it is an effective compression technology.
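The adaptive-model component can be illustrated on its own: a counting model updates its estimate after every encoded bit, and -log2(p) gives the ideal arithmetic-coding cost of a bit with estimated probability p. A sketch of the model only, with the arithmetic coder's interval arithmetic omitted:

```python
import math

class AdaptiveBitModel:
    """Order-0 adaptive model: Laplace-smoothed counts of 0s and 1s."""
    def __init__(self):
        self.counts = [1, 1]

    def cost_and_update(self, bit):
        p = self.counts[bit] / sum(self.counts)   # current estimate for `bit`
        self.counts[bit] += 1                     # adapt after encoding
        return -math.log2(p)                      # ideal code length, in bits

def compressed_size(bits):
    """Ideal total size when an arithmetic coder uses the adaptive model."""
    model = AdaptiveBitModel()
    return sum(model.cost_and_update(b) for b in bits)

skewed = [0] * 95 + [1] * 5          # highly predictable source
size = compressed_size(skewed)       # well under 100 bits once the model adapts
```

The first bits cost nearly 1 bit each, but as the counts skew toward the dominant symbol the per-bit cost falls far below 1, which is the mechanism behind the compression-rate gains the record describes.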

  11. The KINA neutronic module of the LEGO code for steady-state and transient PWR plant simulations

    International Nuclear Information System (INIS)

    Nicolopoulos, D.; Pollacchini, L.; Vimercati, G.; Spelta, S.

    1989-01-01

The Automation Research Center (CRA) of ENEL has implemented models for analyzing both incidental and operational transients in PWR power plants. Such models called for an axial neutron kinetics module characterized by high computational efficiency with adequate accuracy of results. CISE was entrusted with the task of implementing such a module, named KINA and based on the IQS (Improved Quasi-Static) method, to be included in the library of the LEGO modular code used by the CRA to set up PWR plant models. Moreover, the KINA module has been adapted to the neutron-constants computing model developed by the EdF-SEPTEN, which has long been using and improving the LEGO code in cooperation with ENEL-CRA. In this paper, after some remarks on the LEGO code, a general description of the KINA neutronic module is given. The results of a preliminary validation of KINA for an EdF 1300 MWe PWR plant are also presented.

  12. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding with feedback. The focus is on coding with short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between the source symbols and the side information.
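The syndrome-based flavor of distributed source coding described here can be sketched with the simplest BCH code, the (7,4) Hamming code. This toy assumes X and Y differ in at most one bit per block, whereas the paper's rate-adaptive scheme adjusts the number of syndrome bits via feedback.

```python
# Parity-check matrix of the (7,4) Hamming code (the simplest BCH code);
# column j is the binary representation of j+1.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(v):
    return [sum(h * b for h, b in zip(row, v)) % 2 for row in H]

def sw_encode(x):
    """Encoder sends only the 3-bit syndrome of the 7-bit source block."""
    return syndrome(x)

def sw_decode(s, y):
    """Recover x from the side information y plus the syndrome, assuming
    x and y differ in at most one position."""
    e = [(a + b) % 2 for a, b in zip(syndrome(y), s)]  # syndrome of x^y
    x_hat = list(y)
    if any(e):
        pos = e[0] + 2 * e[1] + 4 * e[2] - 1   # column index of the flip
        x_hat[pos] ^= 1
    return x_hat

x = [1, 0, 1, 1, 0, 0, 1]
y = list(x); y[4] ^= 1                 # side information: one bit differs
assert sw_decode(sw_encode(x), y) == x
```

Only 3 syndrome bits are transmitted per 7 source bits, which is the Slepian-Wolf rate saving when the correlation with Y is high.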

  13. Flexible digital modulation and coding synthesis for satellite communications

    Science.gov (United States)

    Vanderaar, Mark; Budinger, James; Hoerig, Craig; Tague, John

    1991-01-01

    An architecture and a hardware prototype of a flexible trellis modem/codec (FTMC) transmitter are presented. The theory of operation is built upon a pragmatic approach to trellis-coded modulation that emphasizes power and spectral efficiency. The system incorporates programmable modulation formats, variations of trellis-coding, digital baseband pulse-shaping, and digital channel precompensation. The modulation formats examined include (uncoded and coded) binary phase shift keying (BPSK), quaternary phase shift keying (QPSK), octal phase shift keying (8PSK), 16-ary quadrature amplitude modulation (16-QAM), and quadrature quadrature phase shift keying (Q squared PSK) at programmable rates up to 20 megabits per second (Mbps). The FTMC is part of the developing test bed to quantify modulation and coding concepts.
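A minimal sketch of a programmable symbol mapper in the spirit of the FTMC's switchable formats (natural labeling for illustration only, not the hardware's actual mapping):

```python
import cmath
import math

def psk(bits_per_sym):
    """Unit-energy M-PSK constellation (BPSK, QPSK, 8PSK, ...)."""
    M = 2 ** bits_per_sym
    return [cmath.exp(2j * math.pi * k / M) for k in range(M)]

def qam16():
    """16-QAM on the {-3,-1,1,3} grid, normalized to unit average energy."""
    pts = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]
    scale = math.sqrt(sum(abs(p) ** 2 for p in pts) / len(pts))
    return [p / scale for p in pts]

def modulate(bits, constellation):
    """Map a bit stream to symbols; the format is a runtime parameter."""
    k = int(math.log2(len(constellation)))
    return [constellation[int("".join(map(str, bits[i:i + k])), 2)]
            for i in range(0, len(bits), k)]

syms = modulate([0, 1, 1, 0, 1, 1, 0, 0], psk(2))   # QPSK: 4 symbols
```

Swapping `psk(2)` for `psk(3)` or `qam16()` changes the format without touching the rest of the chain, which is the kind of programmability the record describes.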

  14. Adaptive RAC codes employing statistical channel evaluation ...

    African Journals Online (AJOL)

    An adaptive encoding technique using row and column array (RAC) codes employing a different number of parity columns that depends on the channel state is proposed in this paper. The trellises of the proposed adaptive codes and a statistical channel evaluation technique employing these trellises are designed and ...

  15. Development of the integrated system reliability analysis code MODULE

    International Nuclear Information System (INIS)

    Han, S.H.; Yoo, K.J.; Kim, T.W.

    1987-01-01

    The major components of a system reliability analysis are the determination of cut sets, importance measures, and uncertainty analysis. Various computer codes have been used for these purposes: for example, SETS and FTAP to determine cut sets; Importance for importance calculations; and Sample, CONINT, and MOCUP for uncertainty analysis. Because these codes run separately and their inputs and outputs are not linked, errors can arise when preparing the input for each code. The code MODULE was developed to carry out the above calculations simultaneously, without passing inputs and outputs between separate codes. MODULE can also prepare input for SETS for the case of a fault tree too large for MODULE to handle. The flow diagram of the MODULE code is shown. To verify the MODULE code, two examples were selected and the results and computation times compared with those of SETS, FTAP, CONINT, and MOCUP on both a Cyber 170-875 and an IBM PC/AT. The two examples are fault trees of the auxiliary feedwater systems (AFWS) of Korea Nuclear Units (KNU)-1 and -2, which have 54 gates and 115 events, and 39 gates and 92 events, respectively. The MODULE code has the advantage that it can calculate the cut sets, importances, and uncertainties in a single run with little increase in computing time over the other codes, and that it can be used on personal computers.
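Two of the calculations MODULE combines can be sketched for a toy fault tree, using the rare-event approximation and hypothetical event probabilities:

```python
from math import prod

def top_event_prob(cut_sets, p):
    """Rare-event approximation: P(top) is roughly the sum over minimal
    cut sets of the product of their basic-event probabilities."""
    return sum(prod(p[e] for e in cs) for cs in cut_sets)

def fussell_vesely(event, cut_sets, p):
    """Fussell-Vesely importance: the fraction of top-event probability
    contributed by cut sets containing the event."""
    total = top_event_prob(cut_sets, p)
    part = top_event_prob([cs for cs in cut_sets if event in cs], p)
    return part / total

cut_sets = [{"A", "B"}, {"C"}]          # toy fault tree, two cut sets
p = {"A": 1e-2, "B": 1e-2, "C": 1e-4}   # hypothetical probabilities
top = top_event_prob(cut_sets, p)       # 1e-4 + 1e-4 = 2e-4
fv_c = fussell_vesely("C", cut_sets, p) # 0.5
```

Running cut-set and importance calculations from one in-memory representation is the "single run" advantage the abstract claims for MODULE.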

  16. Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Cocco Giuseppe

    2010-01-01

    Full Text Available Network coding makes it possible to exploit the spatial diversity naturally present in mobile wireless networks and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it to a network context. The method consists of adapting the information rate for each receiving node according to its channel status and independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of slightly higher complexity at the transmitter. This configuration allows rate adaptation to be performed while fully preserving the benefits of channel and network coding. We carry out an information-theoretical analysis of this approach and of that typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.
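The per-sink rate adaptation idea can be sketched as follows, assuming each sink is independently assigned the largest available rate below its instantaneous Shannon capacity (an idealization of the paper's LDPC-based scheme):

```python
from math import log2

def per_sink_rates(snrs_db, available_rates):
    """Pick, independently for each sink, the highest code rate not
    exceeding that sink's instantaneous channel capacity."""
    rates = []
    for snr_db in snrs_db:
        cap = log2(1 + 10 ** (snr_db / 10))        # bits per symbol
        feasible = [r for r in available_rates if r <= cap]
        rates.append(max(feasible) if feasible else min(available_rates))
    return rates

# Three sinks with different channel states, one transmitter.
chosen = per_sink_rates([0.0, 10.0, 20.0], [0.5, 1.0, 2.0, 3.0, 4.0])
```

Each node's rate depends only on its own channel state, which is the independence property the abstract emphasizes.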

  17. An experimental comparison of coded modulation strategies for 100 Gb/s transceivers

    NARCIS (Netherlands)

    Sillekens, E.; Alvarado, A.; Okonkwo, C.; Thomsen, B.C.

    2016-01-01

    Coded modulation is a key technique to increase the spectral efficiency of coherent optical communication systems. Two popular strategies for coded modulation are turbo trellis-coded modulation (TTCM) and bit-interleaved coded modulation (BICM) based on low-density parity-check (LDPC) codes.

  18. Multi-level trellis coded modulation and multi-stage decoding

    Science.gov (United States)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  19. Signal Constellations for Multilevel Coded Modulation with Sparse Graph Codes

    NARCIS (Netherlands)

    Cronie, H.S.

    2005-01-01

    A method to combine error-correction coding and spectral efficient modulation for transmission over channels with Gaussian noise is presented. The method of modulation leads to a signal constellation in which the constellation symbols have a nonuniform distribution. This gives a so-called shape gain

  20. Code-modulated interferometric imaging system using phased arrays

    Science.gov (United States)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power-combine incoming signals prior to digitization, orthogonal code modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.

  1. Multi-stage decoding for multi-level block modulation codes

    Science.gov (United States)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. The error performance of the codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10(exp -6). Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  2. Adaptive format conversion for scalable video coding

    Science.gov (United States)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.
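A 1-D toy version of adaptive format conversion, assuming two hypothetical conversion methods (sample repetition and linear interpolation); only the per-block method choices would be sent as enhancement data:

```python
def upsample(block, method):
    """Double the length of a 1-D block by sample repetition or by
    linear interpolation between neighbors."""
    out = []
    for i, v in enumerate(block):
        out.append(v)
        nxt = block[i + 1] if i + 1 < len(block) else v
        out.append(v if method == "repeat" else (v + nxt) / 2)
    return out

def afc_enhancement(original, base, block=4):
    """For each block, pick the conversion method closest to the original;
    only these method indices form the enhancement layer."""
    choices = []
    for i in range(0, len(base), block):
        lo = base[i:i + block]
        hi = original[2 * i:2 * (i + block)]
        err = {m: sum((a - b) ** 2 for a, b in zip(upsample(lo, m), hi))
               for m in ("repeat", "linear")}
        choices.append(min(err, key=err.get))
    return choices

original = [0, 1, 2, 3, 4, 5, 6, 7, 5, 5, 5, 5, 5, 5, 5, 5]
base = original[::2]                    # low-resolution base layer
choices = afc_enhancement(original, base)
```

One method index per block is far cheaper than residual coding, which is why AFC can provide scalability at very low enhancement bitrates.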

  3. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    Science.gov (United States)

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.

  4. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-09-01

    Full Text Available The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduces FPA saturation and increases the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time, adaptive manner, exploiting information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, traditional coded apertures and GCA are compared with respect to noise tolerance.

  5. On the Performance of a Multi-Edge Type LDPC Code for Coded Modulation

    NARCIS (Netherlands)

    Cronie, H.S.

    2005-01-01

    We present a method to combine error-correction coding and spectral-efficient modulation for transmission over the Additive White Gaussian Noise (AWGN) channel. The code employs signal shaping which can provide a so-called shaping gain. The code belongs to the family of sparse graph codes for which

  6. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    Science.gov (United States)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one-dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
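The one-dimensional redistribution step can be sketched with a simple equidistribution rule (a simplification of SAGE's tension/torsion formulation): points are moved so that each interval carries an equal share of a gradient-based weight.

```python
import math

def adapt_grid(x, f, npts):
    """Redistribute 1-D grid points to equidistribute a weight based on
    the local solution gradient (analogous to spring tension)."""
    # Weight per interval: 1 + |df/dx| clusters points at sharp gradients.
    w = [1.0 + abs((f[i + 1] - f[i]) / (x[i + 1] - x[i]))
         for i in range(len(x) - 1)]
    cum = [0.0]
    for i, wi in enumerate(w):
        cum.append(cum[-1] + wi * (x[i + 1] - x[i]))
    targets = [cum[-1] * k / (npts - 1) for k in range(npts)]
    # Invert the cumulative weight by linear interpolation.
    new_x, j = [], 0
    for t in targets:
        while j < len(w) - 1 and cum[j + 1] < t:
            j += 1
        frac = (t - cum[j]) / (cum[j + 1] - cum[j])
        new_x.append(x[j] + frac * (x[j + 1] - x[j]))
    return new_x

x = [i / 50 for i in range(51)]
f = [math.tanh(20 * (xi - 0.5)) for xi in x]   # shock-like profile
xa = adapt_grid(x, f, 51)
```

The adapted grid concentrates points around the steep region near x = 0.5 while keeping the endpoints fixed, mirroring the behavior described for shocks and shear layers.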

  7. MARS Code in Linux Environment

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been incorporated into Linux system. The MARS code was originally developed based on the RELAP5/MOD3.2 and COBRA-TF. The 1-D module which evolved from RELAP5 alone could be applied for the whole NSSS system analysis. The 3-D module developed based on the COBRA-TF, however, could be applied for the analysis of the reactor core region where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of the plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during the hypothetical coolant leakage accident could, thereby, be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating the three dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate enough for the PC cluster system where multiple CPUs are available. When parallelism is to be eventually incorporated into the MARS code, MS Windows environment is not considered as an optimum platform. Linux environment, on the other hand, is generally being adopted as a preferred platform for the multiple codes executions as well as for the parallel application. In this study, MARS code has been modified for the adaptation of Linux platform. For the initial code modification, the Windows system specific features have been removed from the code. Since the coupling code module CONTAIN is originally in a form of dynamic load library (DLL) in the Windows system, a similar adaptation method

  9. Adaptive decoding of convolutional codes

    Science.gov (United States)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
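For reference, the Viterbi decoder that the syndrome-based approach is compared against can be sketched compactly for the standard rate-1/2, constraint-length-3 code with generators (7,5) in octal:

```python
def conv_encode(bits, g=(0b111, 0b101)):
    """Rate-1/2, constraint-length-3 convolutional encoder (7,5 octal)."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & gi).count("1") % 2 for gi in g]
        state = reg >> 1
    return out

def viterbi_decode(rx, g=(0b111, 0b101)):
    """Hard-decision maximum-likelihood (Viterbi) decoding."""
    n_states, INF = 4, float("inf")
    metric = [0] + [INF] * (n_states - 1)   # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(rx), 2):
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                exp = [bin(reg & gi).count("1") % 2 for gi in g]
                ns = reg >> 1
                m = metric[s] + sum(e != r for e, r in zip(exp, rx[i:i + 2]))
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]   # tail of zeros flushes the encoder
coded = conv_encode(msg)
coded[3] ^= 1; coded[11] ^= 1          # two isolated channel errors
assert viterbi_decode(coded) == msg
```

Note the decoder does the same amount of work regardless of how many errors occurred, which is exactly the fixed-complexity property the syndrome-based alternative in this record is designed to relax.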

  10. Constructing LDPC Codes from Loop-Free Encoding Modules

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

    A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies includes accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational-simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to

  12. Context quantization by minimum adaptive code length

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, Xiaolin

    2007-01-01

    Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....
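The design criterion can be sketched as follows: the adaptive (sequential) code length of each quantized context cell is computed with a Laplace estimator, and candidate quantizers are compared by their total code length. The contexts below are toy data, not the paper's quantizer design algorithm.

```python
from math import log2

def adaptive_code_length(bits):
    """Sequential (adaptive) code length of a binary string under the
    Laplace estimator: bit b costs -log2((c_b + 1) / (n + 2))."""
    c = [0, 0]
    total_bits = 0.0
    for n, b in enumerate(bits):
        total_bits += -log2((c[b] + 1) / (n + 2))
        c[b] += 1
    return total_bits

def quantizer_cost(contexts, merge):
    """Total adaptive code length when raw contexts are merged into
    quantized cells; fewer cells mean fewer model learning costs but
    more statistical mixing (context dilution trade-off)."""
    cells = {}
    for ctx, bits in contexts.items():
        cells.setdefault(merge[ctx], []).extend(bits)
    return sum(adaptive_code_length(b) for b in cells.values())

contexts = {0: [0] * 20, 1: [0] * 18 + [1] * 2, 2: [1] * 20}
keep_apart = {0: 0, 1: 1, 2: 2}
merge_01 = {0: 0, 1: 0, 2: 2}          # merge two similar contexts
costs = (quantizer_cost(contexts, keep_apart),
         quantizer_cost(contexts, merge_01))
```

The quantizer with the smaller total adaptive code length is preferred under the paper's criterion; merging similar contexts trades model cost against dilution.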

  13. LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.

    Science.gov (United States)

    Djordjevic, Ivan B; Arabaci, Murat

    2010-11-22

    An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate under strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at the BER of 10(-8). The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.
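The roughly 2.5 dB gap between the two combining schemes mentioned at the end can be illustrated directly from their textbook SNR expressions (assuming equal noise power per branch; the branch SNRs below are hypothetical):

```python
import math

def mrc_snr(gammas):
    """Maximum-ratio combining: output SNR is the sum of branch SNRs."""
    return sum(gammas)

def egc_snr(gammas):
    """Equal-gain combining of L branches with per-branch SNRs gamma_i:
    output SNR = (sum of sqrt(gamma_i))^2 / L."""
    L = len(gammas)
    return sum(math.sqrt(g) for g in gammas) ** 2 / L

branches = [4.0, 1.0, 0.25]             # per-branch SNRs, linear scale
gain_db = 10 * math.log10(mrc_snr(branches) / egc_snr(branches))
```

MRC always meets or beats EGC, with the gap widening as the branch SNRs become more unbalanced, consistent with the advantage reported in the abstract.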

  14. Rate Adaptive OFDMA Communication Systems

    International Nuclear Information System (INIS)

    Abdelhakim, M.M.M.

    2009-01-01

    Due to the varying nature of wireless channels, adapting the transmission parameters, such as code rate, modulation order, and power, in response to channel variations provides a significant improvement in system performance. In OFDM systems, Per-Frame adaptation (PFA) can be employed, where the transmission variables are fixed over a given frame and may change from one frame to the next. Subband (tile) loading offers more degrees of adaptation, such that each group of carriers (subband) uses the same transmission parameters and different subbands may use different parameters. Changing the code rate for each tile in the same frame results in transmitting multiple codewords (MCWs) for a single frame. In this thesis a scheme is proposed for adaptively changing the code rate of coded OFDMA systems via changing the puncturing rate within a single codeword (SCW). In the proposed structure, the data is encoded with the lowest available code rate and then divided among the different tiles, where it is punctured adaptively based on some measure of the channel quality for each tile. The proposed scheme is compared against using multiple codewords (MCWs), where the different code rates for the tiles are obtained using separate encoding processes. For the bit-interleaved coded modulation architecture, two novel interleaving methods are proposed, namely the puncturing-dependent interleaver (PDI) and interleaved puncturing (IntP), which provide larger interleaving depth. In the PDI method the coded bits with the same rate over different tiles are grouped for interleaving. In the IntP structure the interleaving is performed prior to puncturing. The performance of the adaptive puncturing technique is investigated under both constant and variable bit rate constraints. Two different adaptive modulation and coding (AMC) selection methods are examined for the variable bit rate adaptive system. The first is a recursive scheme that operates directly on the SNR, whereas the second
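The per-tile puncturing idea can be sketched as follows (hypothetical per-tile patterns over a rate-1/3 mother code; the thesis selects the rates from channel-quality measures):

```python
def puncture(coded_bits, pattern):
    """Keep only the bits where the (cyclically repeated) puncturing
    pattern is 1; deleting more bits raises the effective code rate."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

# Hypothetical patterns: good tiles tolerate heavier puncturing.
PATTERNS = {"bad":  [1, 1, 1],          # rate 1/3: keep everything
            "ok":   [1, 1, 0],          # rate 1/2
            "good": [1, 0, 0]}          # rate 1: parity fully punctured

tile_quality = ["good", "bad", "ok"]    # per-tile channel-quality labels
mother = list(range(12))                # 12 coded bits per tile (stand-in)
sent = [puncture(mother, PATTERNS[q]) for q in tile_quality]
```

All tiles are punctured from one encoding of a single low-rate codeword, so the decoder sees one codeword (SCW) rather than separately encoded MCWs.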

  15. Tritium module for ITER/Tiber system code

    International Nuclear Information System (INIS)

    Finn, P.A.; Willms, S.; Busigin, A.; Kalyanam, K.M.

    1988-01-01

    A tritium module was developed for the ITER/Tiber system code to provide information on capital costs, tritium inventory, power requirements, and building volumes for these systems. In the tritium module, the main tritium subsystems (plasma processing, atmospheric cleanup, water cleanup, and blanket processing) are each represented by simple scalable algorithms. 6 refs., 2 tabs

  16. Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation

    Science.gov (United States)

    Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie

    2009-01-01

    In this work, we study the performance of structured Low-Density Parity-Check (LDPC) Codes together with bandwidth efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
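The demapper step can be sketched with a max-log LLR computation for Gray-labeled 16-QAM, one low-complexity demapper of the kind compared in this work (the labeling below is an assumption for illustration):

```python
import itertools
import math

# Gray-labeled 16-QAM: two Gray-coded bits per axis, unit average energy.
GRAY2 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
SCALE = math.sqrt(10)
CONST = {bits: complex(GRAY2[bits[:2]], GRAY2[bits[2:]]) / SCALE
         for bits in itertools.product((0, 1), repeat=4)}

def maxlog_llr(y, noise_var):
    """Max-log LLRs for the 4 bits of one received 16-QAM sample:
    LLR_i = (min distance over bit=1 points - min over bit=0) / noise_var,
    so a negative LLR favors bit value 1."""
    llrs = []
    for i in range(4):
        d0 = min(abs(y - s) ** 2 for b, s in CONST.items() if b[i] == 0)
        d1 = min(abs(y - s) ** 2 for b, s in CONST.items() if b[i] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

tx_bits = (1, 0, 0, 1)
llrs = maxlog_llr(CONST[tx_bits], noise_var=0.1)  # noiseless sample
```

These per-bit reliabilities are exactly the soft inputs the binary LDPC decoder consumes; the max-log form avoids the exponentials of the exact LLR.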

  17. Differential Space-Time Block Code Modulation for DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Liu Jianhua

    2002-01-01

    Full Text Available A differential space-time block code (DSTBC modulation scheme is used to improve the performance of DS-CDMA systems in fast time-dispersive fading channels. The resulting scheme is referred to as the differential space-time block code modulation for DS-CDMA (DSTBC-CDMA systems. The new modulation and demodulation schemes are especially studied for the down-link transmission of DS-CDMA systems. We present three demodulation schemes, referred to as the differential space-time block code Rake (D-Rake receiver, differential space-time block code deterministic (D-Det receiver, and differential space-time block code deterministic de-prefix (D-Det-DP receiver, respectively. The D-Det receiver exploits the known information of the spreading sequences and their delayed paths deterministically besides the Rake type combination; consequently, it can outperform the D-Rake receiver, which employs the Rake type combination only. The D-Det-DP receiver avoids the effect of intersymbol interference and hence can offer better performance than the D-Det receiver.

  18. Adaptive Space–Time Coding Using ARQ

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    We study the energy-limited outage probability of the block space-time coding (STC)-based systems utilizing automatic repeat request (ARQ) feedback and adaptive power allocation. Taking the ARQ feedback costs into account, we derive closed

  19. Coding/modulation trade-offs for Shuttle wideband data links

    Science.gov (United States)

    Batson, B. H.; Huth, G. K.; Trumpis, B. D.

    1974-01-01

    This paper describes various modulation and coding schemes which are potentially applicable to the Shuttle wideband data relay communications link. This link will be capable of accommodating up to 50 Mbps of scientific data and will be subject to a power constraint which forces the use of channel coding. Although convolutionally encoded coherent binary PSK is the tentative signal design choice for the wideband data relay link, FM techniques are of interest because of the associated hardware simplicity and because an FM system is already planned to be available for transmission of television via relay satellite to the ground. Binary and M-ary FSK are considered as candidate modulation techniques, and both coherent and noncoherent ground station detection schemes are examined. The potential use of convolutional coding is considered in conjunction with each of the candidate modulation techniques.

  20. A multiobjective approach to the genetic code adaptability problem.

    Science.gov (United States)

    de Oliveira, Lariza Laura; de Oliveira, Paulo S L; Tinós, Renato

    2015-02-19

    The organization of the canonical code has intrigued researchers since it was first described. If we consider all codes mapping the 64 codons into 20 amino acids and one stop codon, there are more than 1.51×10^84 possible genetic codes. The main question related to the organization of the genetic code is why exactly the canonical code was selected among this huge number of possible genetic codes. Many researchers argue that the organization of the canonical code is a product of natural selection and that the code's robustness against mutations would support this hypothesis. In order to investigate the natural selection hypothesis, some researchers employ optimization algorithms to identify regions of the genetic code space where best codes, according to a given evaluation function, can be found (engineering approach). The optimization process uses only one objective to evaluate the codes, generally based on the robustness for an amino acid property. Only one objective is also employed in the statistical approach for the comparison of the canonical code with random codes. We propose a multiobjective approach where two or more objectives are considered simultaneously to evaluate the genetic codes. In order to test our hypothesis that the multiobjective approach is useful for the analysis of the genetic code adaptability, we implemented a multiobjective optimization algorithm where two objectives are simultaneously optimized. Using as objectives the robustness against mutation with the amino acids properties polar requirement (objective 1) and robustness with respect to hydropathy index or molecular volume (objective 2), we found solutions closer to the canonical genetic code in terms of robustness, when compared with the results using only one objective reported by other authors. Using more objectives, more optimal solutions are obtained and, as a consequence, more information can be used to investigate the adaptability of the genetic code.
The multiobjective approach
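As a small illustration of the two-objective evaluation described above (not the authors' implementation), the sketch below scores a candidate genetic code by the mean squared change of an amino-acid property over all single-nucleotide mutations, once per property, and checks Pareto dominance between two objective vectors. The property values are toy numbers, not real polar-requirement or hydropathy data.

```python
import itertools
import random

BASES = "ACGU"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]

def robustness(code, prop):
    """Mean squared property change over all single-point codon mutations.

    `code` maps each of the 64 codons to an amino-acid index; `prop` maps
    each amino-acid index to a property value. Lower is more robust.
    """
    total, count = 0.0, 0
    for codon in CODONS:
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mutant = codon[:pos] + b + codon[pos + 1:]
                total += (prop[code[codon]] - prop[code[mutant]]) ** 2
                count += 1
    return total / count

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

random.seed(0)
aas = list(range(21))                            # 20 amino acids + stop
prop1 = {a: random.random() for a in aas}        # toy "polar requirement"
prop2 = {a: random.random() for a in aas}        # toy "hydropathy index"
code = {c: random.randrange(21) for c in CODONS} # one random genetic code
objs = (robustness(code, prop1), robustness(code, prop2))
print(objs[0] > 0 and objs[1] > 0)
```

A multiobjective optimizer would keep the set of codes whose objective pairs are not dominated by any other, rather than ranking codes on a single robustness score.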

  1. Implementation of CFD module in the KORSAR thermal-hydraulic system code

    Energy Technology Data Exchange (ETDEWEB)

    Yudov, Yury V.; Danilov, Ilia G.; Chepilko, Stepan S. [Alexandrov Research Inst. of Technology (NITI), Sosnovy Bor (Russian Federation)

    2015-09-15

    The Russian KORSAR/GP (hereinafter KORSAR) computer code was developed by a joint team from Alexandrov NITI and OKB ''Gidropress'' for VVER safety analysis and certified by the Rostechnadzor of Russia in 2009. The code functionality is based on a 1D two-fluid model for calculation of two-phase flows. A 3D CFD module in the KORSAR computer code is being developed by Alexandrov NITI for representing 3D effects in the downcomer and lower plenum during asymmetrical loop operation. The CFD module uses Cartesian grid method with cut cell approach. The paper presents a numerical algorithm for coupling 1D and 3D thermal-hydraulic modules in the KORSAR code. The combined pressure field is calculated by the multigrid method. The performance efficiency of the algorithm for coupling 1D and 3D modules was demonstrated by solving the benchmark problem of mixing cold and hot flows in a T-junction.

  2. Adaptation of HAMMER computer code to CYBER 170/750 computer

    International Nuclear Information System (INIS)

    Pinheiro, A.M.B.S.; Nair, R.P.K.

    1982-01-01

    The adaptation of the HAMMER computer code to the CYBER 170/750 computer is presented. The HAMMER code calculates cell parameters by multigroup transport theory and reactor parameters by few-group diffusion theory. The auxiliary programs, the modifications carried out, and the use of the HAMMER system adapted to the CYBER 170/750 computer are described. (M.C.K.) [pt

  3. Spatial neutron kinetic module of ROSA code

    International Nuclear Information System (INIS)

    Cherezov, A.L.; Shchukin, N.V.

    2009-01-01

    A spatial neutron kinetic module was developed for the computer code ROSA. The paper describes the numerical scheme used in the module for solving the neutron kinetics equations. Two methods were analyzed: analytical integration of the delayed-neutron emitter equations and direct numerical integration (Gear's method). The two methods were compared in terms of efficiency and accuracy, and both were verified with test problems. The results obtained in the verification studies are presented [ru

  4. Hierarchical surface code for network quantum computing with modules of arbitrary size

    Science.gov (United States)

    Li, Ying; Benjamin, Simon C.

    2016-10-01

    The network paradigm for quantum computing involves interconnecting many modules to form a scalable machine. Typically it is assumed that the links between modules are prone to noise while operations within modules have a significantly higher fidelity. To optimize fault tolerance in such architectures we introduce a hierarchical generalization of the surface code: a small "patch" of the code exists within each module and constitutes a single effective qubit of the logic-level surface code. Errors primarily occur in a two-dimensional subspace, i.e., patch perimeters extruded over time, and the resulting noise threshold for intermodule links can exceed ~10% even in the absence of purification. Increasing the number of qubits within each module decreases the number of qubits necessary for encoding a logical qubit. But this advantage is relatively modest, and broadly speaking, a "fine-grained" network of small modules containing only about eight qubits is competitive in total qubit count versus a "coarse" network with modules containing many hundreds of qubits.

  5. Modulation of neuronal dynamic range using two different adaptation mechanisms

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2017-01-01

    Full Text Available The capability of neurons to discriminate between intensity of external stimulus is measured by its dynamic range. A larger dynamic range indicates a greater probability of neuronal survival. In this study, the potential roles of adaptation mechanisms (ion currents in modulating neuronal dynamic range were numerically investigated. Based on the adaptive exponential integrate-and-fire model, which includes two different adaptation mechanisms, i.e., subthreshold and suprathreshold (spike-triggered) adaptation, our results reveal that the two adaptation mechanisms exhibit rather different roles in regulating neuronal dynamic range. Specifically, subthreshold adaptation acts as a negative factor that observably decreases the neuronal dynamic range, while suprathreshold adaptation has little influence on the neuronal dynamic range. Moreover, when stochastic noise was introduced into the adaptation mechanisms, the dynamic range was apparently enhanced, regardless of what state the neuron was in, e.g. adaptive or non-adaptive. Our model results suggested that the neuronal dynamic range can be differentially modulated by different adaptation mechanisms. Additionally, noise was a non-ignorable factor, which could effectively modulate the neuronal dynamic range.
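The dynamic-range measure used in studies of this kind is commonly defined from a neuron's response curve F(s) over stimulus intensity s as Delta = 10*log10(s_90/s_10), where s_10 and s_90 are the intensities evoking 10% and 90% of the maximum response. The sketch below computes that quantity for a toy saturating response curve (a stand-in, not the AdEx model from the record).

```python
import numpy as np

def dynamic_range(s, F):
    """Dynamic range in dB from a monotonically increasing response curve F(s)."""
    Fmin, Fmax = F.min(), F.max()
    F10 = Fmin + 0.1 * (Fmax - Fmin)   # 10% response level
    F90 = Fmin + 0.9 * (Fmax - Fmin)   # 90% response level
    s10 = s[np.searchsorted(F, F10)]   # first intensity reaching F10
    s90 = s[np.searchsorted(F, F90)]   # first intensity reaching F90
    return 10 * np.log10(s90 / s10)

s = np.logspace(-3, 3, 1000)   # stimulus intensities over six decades
F = s / (s + 1.0)              # toy saturating response curve
print(round(dynamic_range(s, F), 1))
```

For the ideal curve F = s/(s+1) the 10% and 90% points sit at s = 1/9 and s = 9, giving roughly 19 dB; a mechanism that widens this interval (as the noise in the record does) enlarges the dynamic range.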

  6. Exposure calculation code module for reactor core analysis: BURNER

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.; Cunningham, G.W.

    1979-02-01

    The code module BURNER for nuclear reactor exposure calculations is presented. The computer requirements are shown, as are the reference data and interface data file requirements, and the programmed equations and procedure of calculation are described. The operating history of a reactor is followed over the period between solutions of the space, energy neutronics problem. The end-of-period nuclide concentrations are determined given the necessary information. A steady state, continuous fueling model is treated in addition to the usual fixed fuel model. The control options provide flexibility to select among an unusually wide variety of programmed procedures. The code also provides user option to make a number of auxiliary calculations and print such information as the local gamma source, cumulative exposure, and a fine scale power density distribution in a selected zone. The code is used locally in a system for computation which contains the VENTURE diffusion theory neutronics code and other modules.
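The end-of-period nuclide update described above can be sketched as a linear depletion step: over a burnup interval the concentrations N obey dN/dt = A N, where A collects decay and transmutation rates, so N(t) = expm(A t) N(0). The two-nuclide chain and rate constants below are made up for demonstration and are not BURNER's data.

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential via truncated Taylor series (fine for small matrices)."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

lam1, lam2 = 0.5, 0.1           # decay constants of parent and daughter
A = np.array([[-lam1, 0.0],
              [lam1, -lam2]])   # parent decays into daughter
N0 = np.array([1.0, 0.0])       # start with pure parent
t = 2.0
N = expm_series(A * t) @ N0

# Analytic Bateman solution of the same two-nuclide chain, for comparison
N1 = np.exp(-lam1 * t)
N2 = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
print(np.allclose(N, [N1, N2]))
```

A production module solves this with many nuclides, flux-dependent transmutation terms, and the fixed-fuel or continuous-fueling models the record mentions, but the matrix-exponential step is the core of the calculation.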

  7. Exposure calculation code module for reactor core analysis: BURNER

    International Nuclear Information System (INIS)

    Vondy, D.R.; Cunningham, G.W.

    1979-02-01

    The code module BURNER for nuclear reactor exposure calculations is presented. The computer requirements are shown, as are the reference data and interface data file requirements, and the programmed equations and procedure of calculation are described. The operating history of a reactor is followed over the period between solutions of the space, energy neutronics problem. The end-of-period nuclide concentrations are determined given the necessary information. A steady state, continuous fueling model is treated in addition to the usual fixed fuel model. The control options provide flexibility to select among an unusually wide variety of programmed procedures. The code also provides user option to make a number of auxiliary calculations and print such information as the local gamma source, cumulative exposure, and a fine scale power density distribution in a selected zone. The code is used locally in a system for computation which contains the VENTURE diffusion theory neutronics code and other modules

  8. Module description of TOKAMAK equilibrium code MEUDAS

    International Nuclear Information System (INIS)

    Suzuki, Masaei; Hayashi, Nobuhiko; Matsumoto, Taro; Ozeki, Takahisa

    2002-01-01

    The analysis of an axisymmetric MHD equilibrium serves as a foundation of TOKAMAK research, such as the design of devices, theoretical studies, and the analysis of experimental results. For this reason, an efficient MHD analysis code has also been developed at JAERI from the start of TOKAMAK research. The free boundary equilibrium code ''MEUDAS'', which uses both the DCR method (Double-Cyclic-Reduction Method) and a Green's function, can specify the pressure and the current distribution arbitrarily, and has been applied to the analysis of a broad range of physical subjects as a fast and high-precision code. The MHD convergence calculation technique in ''MEUDAS'' has also been built into various newly developed codes. This report explains in detail each module in ''MEUDAS'' for performing convergence calculation in solving the MHD equilibrium. (author)

  9. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.

  10. Variable Coding and Modulation Experiment Using NASA's Space Communication and Navigation Testbed

    Science.gov (United States)

    Downey, Joseph A.; Mortensen, Dale J.; Evans, Michael A.; Tollis, Nicholas S.

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed on the International Space Station provides a unique opportunity to evaluate advanced communication techniques in an operational system. The experimental nature of the Testbed allows for rapid demonstrations while using flight hardware in a deployed system within NASA's networks. One example is variable coding and modulation, which is a method to increase data throughput in a communication link. This paper describes recent flight testing with variable coding and modulation over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Performance of the variable coding and modulation system is evaluated and compared to the capacity of the link, as well as standard NASA waveforms.

  11. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    Science.gov (United States)

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

    The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
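The Cloud-in-Cell deposition step mentioned above can be sketched in one dimension: each particle's mass is shared between the two nearest grid cells with weights proportional to overlap, so total deposited mass equals total particle mass. This is a generic 1-D CIC sketch with periodic boundaries, not Nyx's 3-D implementation.

```python
import numpy as np

def cic_deposit(positions, masses, n_cells, box_size):
    """1-D Cloud-in-Cell mass deposition onto a periodic grid."""
    dx = box_size / n_cells
    rho = np.zeros(n_cells)
    for x, m in zip(positions, masses):
        s = x / dx - 0.5                  # position in cell-centred coordinates
        i = int(np.floor(s)) % n_cells    # left neighbouring cell centre
        f = s - np.floor(s)               # fractional distance to right centre
        rho[i] += m * (1.0 - f)           # share mass between the two
        rho[(i + 1) % n_cells] += m * f   # nearest centres (periodic wrap)
    return rho

rho = cic_deposit(positions=[0.51, 3.2], masses=[1.0, 2.0],
                  n_cells=8, box_size=8.0)
print(abs(rho.sum() - 3.0) < 1e-12)   # deposited mass equals particle mass
```

The same weights, read in reverse, interpolate the mesh force back to each particle, which keeps the particle-mesh scheme momentum-conserving.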

  12. Adaptive Wavelet Coding Applied in a Wireless Control System.

    Science.gov (United States)

    Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O

    2017-12-13

    Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.
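The adaptation rule described above, picking a (code rate, constellation) pair from the estimated fading level, can be sketched as a simple threshold table. The thresholds and mode entries below are illustrative assumptions, not the parameters used in the paper.

```python
# Mode table: spectral efficiency rises as channel conditions improve.
MODES = [
    {"min_snr_db": 0.0,  "code_rate": 1 / 4, "constellation": "BPSK"},
    {"min_snr_db": 6.0,  "code_rate": 1 / 2, "constellation": "QPSK"},
    {"min_snr_db": 12.0, "code_rate": 3 / 4, "constellation": "16-QAM"},
]

def select_mode(snr_db):
    """Return the highest-rate mode whose SNR threshold is met
    (falls back to the most robust mode in deep fades)."""
    chosen = MODES[0]
    for mode in MODES:
        if snr_db >= mode["min_snr_db"]:
            chosen = mode
    return chosen

print(select_mode(8.0)["constellation"])   # mid-range fading -> QPSK
```

In the adaptive system of the record the receiver feeds the fading estimate back to the transmitter, which then applies the selected rate and constellation to the next wavelet-coded block.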

  13. Adaptive Wavelet Coding Applied in a Wireless Control System

    Directory of Open Access Journals (Sweden)

    Felipe O. S. Gama

    2017-12-01

    Full Text Available Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  14. High-speed architecture for the decoding of trellis-coded modulation

    Science.gov (United States)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture or by simplifying the algorithm itself. Designs employing new architectural techniques now exist; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.

  15. Performance analysis of adaptive modulation for cognitive radios with opportunistic access

    KAUST Repository

    Chen, Yunfei

    2011-06-01

    The performance of adaptive modulation for cognitive radio with opportunistic access is analyzed by considering the effects of spectrum sensing and primary user traffic for Nakagami-m fading channels. Both the adaptive continuous rate scheme and the adaptive discrete rate scheme are considered. Numerical results show that spectrum sensing and primary user traffic cause considerable degradation to the bit error rate performance of adaptive modulation in a cognitive radio system with opportunistic access to the licensed channel. They also show that primary user traffic does not affect the link spectral efficiency performance of adaptive modulation, while the spectrum sensing degrades the link spectral efficiency performance. © 2011 IEEE.
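The adaptive discrete rate idea analyzed above can be sketched numerically: the SNR axis is partitioned into regions, each region is assigned a fixed-rate constellation, and the average link spectral efficiency follows from the probability of each region under the fading distribution (for Nakagami-m fading the instantaneous SNR is gamma-distributed). The thresholds, rates, and fading parameters below are illustrative, and the sketch omits the spectrum-sensing and primary-user-traffic effects that are the record's focus.

```python
import numpy as np
from math import gamma as gamma_fn

def snr_pdf(x, mean_snr, m):
    """PDF of instantaneous SNR under Nakagami-m fading (Gamma-distributed)."""
    theta = mean_snr / m
    return x ** (m - 1) * np.exp(-x / theta) / (gamma_fn(m) * theta ** m)

def adr_spectral_efficiency(mean_snr, m, thresholds, rates):
    """Average bits/s/Hz when each SNR region maps to a fixed-rate mode."""
    x = np.linspace(1e-9, 40.0 * mean_snr, 200001)  # numerical SNR grid
    dx = x[1] - x[0]
    pdf = snr_pdf(x, mean_snr, m)
    edges = list(thresholds) + [np.inf]
    eff = 0.0
    for k, rate in enumerate(rates):
        mask = (x >= edges[k]) & (x < edges[k + 1])
        eff += rate * pdf[mask].sum() * dx          # rate * P(region)
    return eff

# Regions: [0,4) -> no transmission, [4,16) -> QPSK (2 b/s/Hz),
# [16,inf) -> 16-QAM (4 b/s/Hz); mean SNR 10 (linear), m = 2.
eff = adr_spectral_efficiency(mean_snr=10.0, m=2,
                              thresholds=[0.0, 4.0, 16.0],
                              rates=[0.0, 2.0, 4.0])
print(round(eff, 2))
```

For integer m the region probabilities have closed forms (incomplete gamma functions); the numerical integral above reproduces them and makes it easy to see how imperfect sensing, which shrinks the transmission opportunities, would scale the efficiency down.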

  16. Enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding for four-level holographic data storage systems

    Science.gov (United States)

    Kong, Gyuyeol; Choi, Sooyong

    2017-09-01

    An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While the previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gains for better bit error rate performances. For achieving significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code in order to enlarge the free distance on the trellis through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation result shows that the proposed four-ary modulation code achieves more than 1 dB of gain compared with the conventional four-ary modulation code.

  17. Module description of TOKAMAK equilibrium code MEUDAS

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Masaei; Hayashi, Nobuhiko; Matsumoto, Taro; Ozeki, Takahisa [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment

    2002-01-01

    The analysis of an axisymmetric MHD equilibrium serves as a foundation of TOKAMAK researches, such as a design of devices and theoretical research, the analysis of experiment result. For this reason, also in JAERI, an efficient MHD analysis code has been developed from start of TOKAMAK research. The free boundary equilibrium code ''MEUDAS'' which uses both the DCR method (Double-Cyclic-Reduction Method) and a Green's function can specify the pressure and the current distribution arbitrarily, and has been applied to the analysis of a broad physical subject as a code having rapidity and high precision. Also the MHD convergence calculation technique in ''MEUDAS'' has been built into various newly developed codes. This report explains in detail each module in ''MEUDAS'' for performing convergence calculation in solving the MHD equilibrium. (author)

  18. Adaptive discrete cosine transform coding algorithm for digital mammography

    Science.gov (United States)

    Baskurt, Atilla M.; Magnin, Isabelle E.; Goutte, Robert

    1992-09-01

    The need for storage, transmission, and archiving of medical images has led researchers to develop adaptive and efficient data compression techniques. Among medical images, x-ray radiographs of the breast are especially difficult to process because of their particularly low contrast and very fine structures. A block-adaptive coding algorithm based on the discrete cosine transform to compress digitized mammograms is described. A homogeneous repartition of the degradation in the decoded images is obtained using a spatially adaptive threshold. This threshold depends on the coding error associated with each block of the image. The proposed method is tested on a limited number of pathological mammograms including opacities and microcalcifications. A comparative visual analysis is performed between the original and the decoded images. Finally, it is shown that data compression with rather high compression rates (11 to 26) is possible in the mammography field.
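The block-adaptive scheme described above can be sketched as follows: the image is split into 8x8 blocks, each block is transformed with a 2-D DCT, and coefficients below a per-block threshold are discarded. The threshold rule here (scaling by local block energy) is an illustrative stand-in for the paper's coding-error-driven threshold.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix (rows are frequencies): C @ C.T = I."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

def code_block(block, base_threshold):
    """Transform a block, zero small coefficients, and decode it back."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    # Adaptive threshold: stricter for low-energy (flat) blocks.
    thr = base_threshold * np.sqrt(np.mean(block ** 2) + 1e-12)
    kept = np.where(np.abs(coeffs) >= thr, coeffs, 0.0)
    decoded = C.T @ kept @ C
    return decoded, int(np.count_nonzero(kept))

rng = np.random.default_rng(1)
block = rng.normal(size=(8, 8))          # stand-in for an image block
decoded, n_kept = code_block(block, base_threshold=0.5)
print(n_kept < 64)                        # some coefficients were discarded
```

The compression rate comes from entropy-coding only the surviving coefficients; making the threshold track the per-block coding error, as in the paper, evens out the visible degradation across the image.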

  19. Modulation-Frequency-Specific Adaptation in Awake Auditory Cortex

    Science.gov (United States)

    Beitel, Ralph E.; Vollmer, Maike; Heiser, Marc A.; Schreiner, Christoph E.

    2015-01-01

    Amplitude modulations are fundamental features of natural signals, including human speech and nonhuman primate vocalizations. Because natural signals frequently occur in the context of other competing signals, we used a forward-masking paradigm to investigate how the modulation context of a prior signal affects cortical responses to subsequent modulated sounds. Psychophysical “modulation masking,” in which the presentation of a modulated “masker” signal elevates the threshold for detecting the modulation of a subsequent stimulus, has been interpreted as evidence of a central modulation filterbank and modeled accordingly. Whether cortical modulation tuning is compatible with such models remains unknown. By recording responses to pairs of sinusoidally amplitude modulated (SAM) tones in the auditory cortex of awake squirrel monkeys, we show that the prior presentation of the SAM masker elicited persistent and tuned suppression of the firing rate to subsequent SAM signals. Population averages of these effects are compatible with adaptation in broadly tuned modulation channels. In contrast, modulation context had little effect on the synchrony of the cortical representation of the second SAM stimuli, and the tuning of such effects did not match that observed for firing rate. Our results suggest that, although the temporal representation of modulated signals is more robust to changes in stimulus context than representations based on average firing rate, this representation is not fully exploited: psychophysical modulation masking more closely mirrors physiological rate suppression, and rate tuning for a given stimulus feature in a given neuron's signal pathway appears sufficient to engender context-sensitive cortical adaptation. PMID:25878263

  20. Adaptive Modulation for a Downlink Multicast Channel in OFDMA Systems

    DEFF Research Database (Denmark)

    Wang, Haibo; Schwefel, Hans-Peter; Toftegaard, Thomas Skjødeberg

    2007-01-01

    In this paper we focus on adaptive modulation strategies for multicast service in orthogonal frequency division multiple access systems. A reward function has been defined as the optimization target, which includes both the average user throughput and bit error rate. We also developed an adaptive modulation strategy, namely the local best reward strategy, to maximize this reward function. The performance of different modulation strategies is compared in different SNR distribution scenarios, and the optimum strategy in each scenario is suggested.
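The multicast trade-off behind a reward-maximizing strategy like the one above can be sketched simply: with one common modulation for the whole group, a higher order raises the rate for strong users but pushes weak users past their BER limit. The reward below (sum of per-user throughput over users whose SNR meets the mode's threshold) and the thresholds are assumptions for demonstration, not the paper's exact reward function.

```python
# Mode table: name -> (bits per symbol, minimum SNR in dB for target BER).
MODES = {"QPSK": (2.0, 9.0), "16-QAM": (4.0, 16.0), "64-QAM": (6.0, 22.0)}

def best_reward_mode(user_snrs_db):
    """Pick the common modulation that maximizes total delivered throughput."""
    def reward(mode):
        bits, min_snr = MODES[mode]
        return sum(bits for snr in user_snrs_db if snr >= min_snr)
    return max(MODES, key=reward)

# Three users: one weak, two strong. QPSK serves all three (reward 6),
# 16-QAM serves two (reward 8), 64-QAM serves one (reward 6).
print(best_reward_mode([10.0, 18.0, 25.0]))
```

In an OFDMA system this selection runs per subchannel against the SNR distribution of the multicast group, which is why the optimum strategy changes with the scenario.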

  1. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    Science.gov (United States)

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  2. Multiple-Symbol Noncoherent Decoding of Uncoded and Convolutionally Coded Continuous Phase Modulation

    Science.gov (United States)

    Divsalar, D.; Raphaeli, D.

    2000-01-01

    Recently, a method for combined noncoherent detection and decoding of trellis-codes (noncoherent coded modulation) has been proposed, which can practically approach the performance of coherent detection.

  3. Motor modules during adaptation to walking in a powered ankle exoskeleton.

    Science.gov (United States)

    Jacobs, Daniel A; Koller, Jeffrey R; Steele, Katherine M; Ferris, Daniel P

    2018-01-03

    Modules of muscle recruitment can be extracted from electromyography (EMG) during motions, such as walking, running, and swimming, to identify key features of muscle coordination. These features may provide insight into gait adaptation as a result of powered assistance. The aim of this study was to investigate the changes (module size, module timing and weighting patterns) of surface EMG data during assisted and unassisted walking in a powered, myoelectric ankle-foot orthosis (ankle exoskeleton). Eight healthy subjects wore bilateral ankle exoskeletons and walked at 1.2 m/s on a treadmill. In three training sessions, subjects walked for 40 min in two conditions: unpowered (10 min) and powered (30 min). During each session, we extracted modules of muscle recruitment via nonnegative matrix factorization (NNMF) from the surface EMG signals of ten muscles in the lower limb. We evaluated reconstruction quality for each muscle individually using R^2 and normalized root mean squared error (NRMSE). We hypothesized that the number of modules needed to reconstruct muscle data would be the same between conditions and that there would be greater similarity in module timings than weightings. Across subjects, we found that six modules were sufficient to reconstruct the muscle data for both conditions, suggesting that the number of modules was preserved. The similarity of module timings and weightings between conditions was greater than random chance, indicating that muscle coordination was also preserved. Motor adaptation during walking in the exoskeleton was dominated by changes in the module timings rather than module weightings. The segment number and the session number were significant fixed effects in a linear mixed-effect model for the increase in R^2 with time. Our results show that subjects walking in an exoskeleton preserved the number of modules and the coordination of muscles within the modules across conditions. Training (motor adaptation within the session and
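The analysis pipeline described above can be sketched with a minimal nonnegative matrix factorization (multiplicative updates): an EMG envelope matrix X (muscles x time) is split into module weightings W and activation timings H, and reconstruction quality is scored with R^2. The synthetic two-module data below stand in for real EMG recordings, and this is a generic NMF sketch rather than the study's exact procedure.

```python
import numpy as np

def nmf(X, n_modules, n_iter=500, seed=0):
    """Nonnegative matrix factorization X ~ W @ H via multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], n_modules))   # module weightings (muscles)
    H = rng.random((n_modules, X.shape[1]))   # module timings (activations)
    eps = 1e-12                               # avoids division by zero
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

def r_squared(X, W, H):
    """Variance accounted for by the reconstruction W @ H."""
    resid = X - W @ H
    return 1.0 - np.sum(resid ** 2) / np.sum((X - X.mean()) ** 2)

# Synthetic "EMG": 10 muscles driven by 2 underlying modules.
rng = np.random.default_rng(42)
W_true = rng.random((10, 2))
H_true = rng.random((2, 200))
X = W_true @ H_true
W, H = nmf(X, n_modules=2)
print(r_squared(X, W, H) > 0.9)
```

In the study, the number of modules is chosen as the smallest rank whose reconstruction meets the R^2/NRMSE criteria per muscle, and the resulting W and H are compared across powered and unpowered conditions.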

  4. H.264 Layered Coded Video over Wireless Networks: Channel Coding and Modulation Constraints

    Directory of Open Access Journals (Sweden)

    Ghandi MM

    2006-01-01

    Full Text Available This paper considers the prioritised transmission of H.264 layered coded video over wireless channels. For appropriate protection of video data, methods such as prioritised forward error correction coding (FEC) or hierarchical quadrature amplitude modulation (HQAM) can be employed, but each imposes system constraints. FEC provides good protection but at the price of a high overhead and complexity. HQAM is less complex and does not introduce any overhead, but permits only fixed data ratios between the priority layers. Such constraints are analysed and practical solutions are proposed for layered transmission of data-partitioned and SNR-scalable coded video where combinations of HQAM and FEC are used to exploit the advantages of both coding methods. Simulation results show that the flexibility of SNR scalability and absence of picture drift imply that SNR scalability as modelled is superior to data partitioning in such applications.

  5. Calculus of the Power Spectral Density of Ultra Wide Band Pulse Position Modulation Signals Coded with Totally Flipped Code

    Directory of Open Access Journals (Sweden)

    DURNEA, T. N.

    2009-02-01

    Full Text Available UWB-PPM systems were noted to have a power spectral density (p.s.d. consisting of a continuous portion and a line spectrum, which is composed of energy components placed at discrete frequencies. These components are the major source of interference to narrowband systems operating in the same frequency interval and deny harmless coexistence of UWB-PPM and narrowband systems. A new code, denoted Totally Flipped Code (TFC, is applied to UWB-PPM signals in order to eliminate these discrete spectral components. The coded signal transports the information inside the pulse position and has its amplitude coded to generate a continuous p.s.d. We have designed the code and calculated the power spectral density of the coded signals. The power spectrum has no discrete components and its envelope is largely flat inside the bandwidth, with a maximum at its center and a null at DC. These characteristics make this code suited for implementation in UWB systems based on PPM-type modulation, as it assures a continuous spectrum and keeps PPM modulation performances.
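
    The mechanism behind the spectral-line removal can be demonstrated numerically: a unipolar PPM pulse train has a nonzero periodic mean, which produces discrete lines at multiples of the frame rate, while randomizing pulse polarity zeroes that mean. The sketch below uses random bipolar flipping as a stand-in for amplitude coding; it is not the TFC construction itself, and all parameters are assumptions.

```python
import numpy as np

# Compare the frame-rate spectral line of a unipolar PPM train against a
# polarity-coded one. Random +/-1 flipping is an illustrative surrogate
# for deterministic amplitude codes such as TFC.
rng = np.random.default_rng(1)
frames, frame_len = 4000, 16
pos = rng.integers(0, 8, size=frames)       # PPM positions (the data)

uni = np.zeros(frames * frame_len)
bip = np.zeros(frames * frame_len)
flips = rng.choice([-1.0, 1.0], size=frames)
for k in range(frames):
    uni[k * frame_len + pos[k]] = 1.0       # unipolar pulse
    bip[k * frame_len + pos[k]] = flips[k]  # polarity-coded pulse

def line_power(x, frame_len):
    """Power in the spectral bin at the frame rate."""
    X = np.fft.rfft(x) / len(x)
    return abs(X[len(x) // frame_len]) ** 2

print(line_power(uni, frame_len), line_power(bip, frame_len))
```

    The unipolar train concentrates power in the frame-rate bin (a discrete line); the polarity-coded train leaves only residual noise there, i.e., a continuous p.s.d.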

  6. Ultra high speed optical transmission using subcarrier-multiplexed four-dimensional LDPC-coded modulation.

    Science.gov (United States)

    Batshon, Hussam G; Djordjevic, Ivan; Schmidt, Ted

    2010-09-13

    We propose a subcarrier-multiplexed four-dimensional LDPC bit-interleaved coded modulation scheme that is capable of achieving beyond 480 Gb/s single-channel transmission rate over optical channels. The subcarrier-multiplexed four-dimensional LDPC-coded modulation scheme outperforms the corresponding dual polarization schemes by up to 4.6 dB in OSNR at a BER of 10^-8.

  7. Adaptation of radiation shielding code to space environment

    International Nuclear Information System (INIS)

    Okuno, Koichi; Hara, Akihisa

    1992-01-01

    Recently, the trend toward the development of space has heightened. Space development involves many problems, one of which is protection from cosmic rays. Cosmic rays are radiation with ultrahigh energy, and until now there has been no radiation shielding design code that copes with them. Therefore, the high-energy radiation shielding design code for accelerators was improved so as to cope with the peculiarities of cosmic rays. Moreover, the calculation of the radiation dose equivalent rate in a moon base protected against cosmic rays was simulated using the improved code. As an important countermeasure for safe protection from radiation, covering with regolith is carried out, and the effect of regolith was confirmed using the improved code. Galactic cosmic rays, solar flare particles, the radiation belt, the adaptation of the radiation shielding code HERMES to the space environment, the improvement of the three-dimensional hadron cascade code HETCKFA-2 and the electromagnetic cascade code EGS4-KFA, and the cosmic ray simulation are reported. (K.I.)

  8. Joint nonbinary low-density parity-check codes and modulation diversity over fading channels

    Science.gov (United States)

    Shi, Zhiping; Li, Tiffany Jing; Zhang, Zhongpei

    2010-09-01

    A joint exploitation of coding and diversity techniques to achieve efficient, reliable wireless transmission is considered. The system comprises a powerful non-binary low-density parity-check (LDPC) code that is soft-decoded to supply strong error protection, a quadrature amplitude modulator (QAM) that directly takes in the non-binary LDPC symbols, and a modulation diversity operator that provides power- and bandwidth-efficient diversity gain. By relaxing the rate of the modulation diversity rotation matrices to below 1, we show that a better rate allocation can be arranged between the LDPC codes and the modulation diversity, which brings significant performance gain over previous systems. To facilitate the design and evaluation of the relaxed modulation diversity rotation matrices, three practical design methods based on a set of criteria are given and their pairwise error rates are analyzed. With EXIT charts, we investigate the convergence between the demodulator and the decoder. A rate match method is presented based on EXIT analysis. Through analysis and simulations, we show that our strategies are very effective in combating random fading and strong noise on fading channels.
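
    The core idea of modulation (signal-space) diversity is that a rotation spreads each data symbol over several transmitted coordinates, so a deep fade on one coordinate alone cannot erase a symbol. A minimal two-dimensional sketch, with an assumed illustrative rotation angle rather than one of the paper's optimized matrices:

```python
import numpy as np

# Signal-space diversity sketch: rotate pairs of 4-PAM symbols so each
# transmitted coordinate carries information about both symbols. The
# angle below is a classic illustrative choice, not the paper's design.
theta = 0.5 * np.arctan(2.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

pam = np.array([-3.0, -1.0, 1.0, 3.0])
pairs = np.array([(a, b) for a in pam for b in pam])   # 16 symbol pairs
rotated = pairs @ R.T

def min_componentwise_gap(X):
    """Smallest per-coordinate separation over all point pairs."""
    g = np.inf
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            g = min(g, np.min(np.abs(X[i] - X[j])))
    return float(g)

# Unrotated pairs can coincide in one coordinate (gap 0); rotated pairs
# differ in every coordinate, which is what buys the diversity gain.
print(min_componentwise_gap(pairs), min_componentwise_gap(rotated))
```

    Relaxing the rotation rate below 1, as the abstract describes, trades some of this spreading back for code rate in the LDPC component.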

  9. Adaptive under relaxation factor of MATRA code for the efficient whole core analysis

    International Nuclear Information System (INIS)

    Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H.

    2013-01-01

    Such nonlinearities are handled in the MATRA code using outer iteration with a Picard scheme. The Picard scheme involves successive updating of the coefficient matrix based on previously calculated values. The scheme is a simple and effective method for nonlinear problems, but its effectiveness greatly depends on the under-relaxation capability. Accuracy and speed of calculation are very sensitive to the under-relaxation factor used in the outer iteration that updates the axial mass flow from the continuity equation. The under-relaxation factor in MATRA is generally a fixed, empirically determined value. Adapting the under-relaxation factor during the outer iteration is expected to improve the calculation effectiveness of the MATRA code relative to calculation with a fixed under-relaxation factor. The present study describes the implementation of adaptive under-relaxation within the subchannel code MATRA. Picard iterations with adaptive under-relaxation can accelerate the convergence for mass conservation in the subchannel code MATRA. The most efficient approach for adaptive under-relaxation appears to be very problem-dependent.
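
    The idea of adapting the under-relaxation factor inside a Picard loop can be sketched on a scalar fixed-point problem. The Aitken (Irons-Tuck) style factor update below is an illustrative assumption, not the MATRA implementation:

```python
import math

# Picard iteration with an adaptive under-relaxation factor. The update
# rescales omega by the residual ratio (Irons-Tuck/Aitken relaxation).
def picard(g, x0, omega0=0.5, tol=1e-10, max_iter=200, adaptive=True):
    x, omega, r_prev = x0, omega0, None
    for it in range(1, max_iter + 1):
        r = g(x) - x                              # fixed-point residual
        if abs(r) < tol:
            return x, it
        if adaptive and r_prev is not None and r != r_prev:
            omega *= r_prev / (r_prev - r)        # adapt the factor
            omega = min(max(omega, 0.05), 1.0)    # keep it in a safe band
        x += omega * r                            # under-relaxed update
        r_prev = r
    return x, max_iter

# A contractive fixed-point problem: x = cos(x).
x_ad, n_ad = picard(math.cos, 1.0, adaptive=True)
x_fx, n_fx = picard(math.cos, 1.0, adaptive=False)
print(n_ad, n_fx)
```

    On this toy problem the adaptive factor converges in fewer outer iterations than the fixed omega = 0.5, mirroring the acceleration reported for mass-conservation convergence.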

  10. Power adaptation for joint switched diversity and adaptive modulation schemes in spectrum sharing systems

    KAUST Repository

    Bouida, Zied

    2012-09-01

    Under the scenario of an underlay cognitive radio network, we propose in this paper an adaptive scheme using transmit power adaptation, switched transmit diversity, and adaptive modulation in order to improve the performance of existing switching efficient schemes (SES) and bandwidth efficient schemes (BES). Taking advantage of the channel reciprocity principle, we assume that the channel state information (CSI) of the interference link is available to the secondary transmitter. This information is then used by the secondary transmitter to adapt its transmit power, modulation constellation size, and used transmit branch. The goal of this joint adaptation is to minimize the average number of switched branches and the average system delay given the fading channel conditions, the required error rate performance, and a peak interference constraint to the primary receiver. We analyze the proposed scheme in terms of the average number of branch switching, average delay, and we provide a closed-form expression of the average bit error rate (BER). We demonstrate through numerical examples that the proposed scheme provides a compromise between the SES and the BES schemes. © 2012 IEEE.
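
    The joint adaptation described above can be sketched as a simple decision rule: scan transmit branches in order (switched diversity), cap the transmit power so the interference at the primary receiver stays under a peak limit, and pick the largest constellation meeting a target BER. The thresholds, the approximate M-QAM BER formula, and all numbers below are illustrative assumptions, not the paper's analysis.

```python
import math

TARGET_BER = 1e-3
MODES = [(2, "BPSK"), (4, "QPSK"), (16, "16-QAM"), (64, "64-QAM")]

def qam_ber(snr_linear, M):
    """Approximate BER of Gray-coded M-QAM (BPSK for M=2) in AWGN."""
    if M == 2:
        return 0.5 * math.erfc(math.sqrt(snr_linear))
    k = math.log2(M)
    return (2.0 / k) * (1 - 1 / math.sqrt(M)) * \
        math.erfc(math.sqrt(1.5 * snr_linear / (M - 1)))

def select(branch_snrs_db, branch_interf_gains, p_max=1.0, q_peak=0.1):
    for i, (snr_db, g_int) in enumerate(zip(branch_snrs_db,
                                            branch_interf_gains)):
        p = min(p_max, q_peak / g_int)        # power capped by interference
        snr = p * 10 ** (snr_db / 10)         # power-scaled received SNR
        best = None
        for M, name in MODES:
            if qam_ber(snr, M) <= TARGET_BER:
                best = (i, name)              # largest feasible constellation
        if best:
            return best                       # first acceptable branch wins
    return None                               # outage: no branch/mode works

print(select([8.0, 25.0], [0.05, 0.08]))
```

    Stopping at the first acceptable branch minimizes branch switching (the SES flavor); scanning all branches for the best one would instead maximize spectral efficiency (the BES flavor), which is the trade-off the scheme interpolates.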

  11. Power adaptation for joint switched diversity and adaptive modulation schemes in spectrum sharing systems

    KAUST Repository

    Bouida, Zied; Tourki, Kamel; Ghrayeb, Ali A.; Qaraqe, Khalid A.; Alouini, Mohamed-Slim

    2012-01-01

    Under the scenario of an underlay cognitive radio network, we propose in this paper an adaptive scheme using transmit power adaptation, switched transmit diversity, and adaptive modulation in order to improve the performance of existing switching efficient schemes (SES) and bandwidth efficient schemes (BES). Taking advantage of the channel reciprocity principle, we assume that the channel state information (CSI) of the interference link is available to the secondary transmitter. This information is then used by the secondary transmitter to adapt its transmit power, modulation constellation size, and used transmit branch. The goal of this joint adaptation is to minimize the average number of switched branches and the average system delay given the fading channel conditions, the required error rate performance, and a peak interference constraint to the primary receiver. We analyze the proposed scheme in terms of the average number of branch switching, average delay, and we provide a closed-form expression of the average bit error rate (BER). We demonstrate through numerical examples that the proposed scheme provides a compromise between the SES and the BES schemes. © 2012 IEEE.

  12. Automatic code generation in practice

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-specific language to specify those requirements and to allow for generating a safety-enforcing layer of code, which is deployed to the robot. The paper at hand reports experiences in practically applying code generation to mobile robots. For two cases, we discuss how we addressed challenges, e.g., regarding weaving code generation into proprietary development environments and testing of manually written code. We find that a DSL based on the same conceptual model can be used across different kinds of hardware modules, but a significant adaptation effort is required in practical scenarios involving different kinds

  13. Joint Adaptive Modulation and Combining for Hybrid FSO/RF Systems

    KAUST Repository

    Rakia, Tamer

    2015-11-12

    In this paper, we present and analyze a new transmission scheme for a hybrid FSO/RF communication system based on joint adaptive modulation and adaptive combining. Specifically, the data rate on the FSO link is adjusted in a discrete manner according to the FSO link's instantaneous received signal-to-noise ratio (SNR). If the FSO link's quality is too poor to maintain the target bit error rate, the system activates the RF link along with the FSO link. When the RF link is activated, simultaneous transmission of the same modulated data takes place on both links, and the received signals from both links are combined using the maximal ratio combining scheme. In this case, the data rate of the system is adjusted according to the instantaneous combined SNRs. A novel analytical expression for the cumulative distribution function (CDF) of the received SNR for the proposed adaptive hybrid system is obtained. This CDF expression is used to study the spectral and outage performances of the proposed adaptive hybrid FSO/RF system. Numerical examples are presented to compare the performance of the proposed adaptive hybrid FSO/RF system with that of switch-over hybrid FSO/RF and FSO-only systems employing the same adaptive modulation schemes. © 2015 IEEE.
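
    The switching logic can be sketched in a few lines: discrete rate thresholds on SNR, with RF activation and maximal ratio combining (which adds the branch SNRs in linear units) when the FSO link alone falls below the lowest threshold. The threshold values are illustrative assumptions:

```python
import math

THRESHOLDS_DB = [6.0, 10.0, 14.0, 18.0]      # entry SNR for rates 1..4 (assumed)

def rate_for(snr_db):
    """Highest discrete rate index supported at this SNR (0 = outage)."""
    rate = 0
    for k, th in enumerate(THRESHOLDS_DB, start=1):
        if snr_db >= th:
            rate = k
    return rate

def hybrid_policy(snr_fso_db, snr_rf_db):
    if rate_for(snr_fso_db) > 0:
        return "FSO-only", rate_for(snr_fso_db)
    # FSO too poor: activate RF. MRC output SNR is the sum of branch SNRs.
    combined = 10 ** (snr_fso_db / 10) + 10 ** (snr_rf_db / 10)
    combined_db = 10 * math.log10(combined)
    return "FSO+RF (MRC)", rate_for(combined_db)

print(hybrid_policy(12.0, 9.0))   # FSO alone suffices
print(hybrid_policy(3.0, 8.0))    # RF activated, combined SNR used
```

    Note how the combined branch can lift the system out of outage even when neither link alone clears the lowest threshold by much, which is the source of the outage-performance gain over FSO-only operation.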

  14. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Full Text Available Almost all existing approaches to video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward-adaptive approach, named "least-square prediction" (LSP, and demonstrate its potential in video coding. Motivated by the duality between edge contour in images and motion trajectory in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than the full-search, quarter-pel block matching algorithm (BMA without the need of transmitting any overhead.
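
    The backward-adaptive principle, refitting prediction coefficients from the causal past only, so the decoder can recompute them without side information, is easiest to see in one dimension. The AR order, window length, and test signal below are illustrative assumptions, not the paper's spatio-temporal predictor:

```python
import numpy as np

# 1-D sketch of backward-adaptive least-square prediction (LSP):
# coefficients are re-fit at every sample from a causal training window.
rng = np.random.default_rng(2)
n, order, window = 400, 3, 40
t = np.arange(n)
x = np.sin(0.07 * t) + 0.005 * rng.standard_normal(n)  # slowly varying signal

pred = np.zeros(n)
for i in range(window + order, n):
    # Normal equations built from the last `window` causal samples only.
    A = np.array([x[j - order:j][::-1] for j in range(i - window, i)])
    b = x[i - window:i]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    pred[i] = coeffs @ x[i - order:i][::-1]

sl = slice(window + order, n)
mse_lsp = np.mean((x[sl] - pred[sl]) ** 2)
mse_copy = np.mean((x[sl] - x[window + order - 1:n - 1]) ** 2)  # repeat-last
print(mse_lsp, mse_copy)
```

    Because both encoder and decoder can run the same least-squares fit on already-decoded samples, no coefficients are transmitted, mirroring the "no overhead" property claimed for LSP.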

  15. Coding of amplitude-modulated signals in the cochlear nucleus of a grass frog

    Science.gov (United States)

    Bibikov, N. G.

    2002-07-01

    To study the mechanisms that govern the coding of temporal features of complex sound signals, responses of single neurons located in the dorsal nucleus of the medulla oblongata (the cochlear nucleus) of a curarized grass frog (Rana temporaria) to pure tone bursts and amplitude modulated tone bursts with a modulation frequency of 20 Hz and modulation depths of 10 and 80% were recorded. The carrier frequency was equal to the characteristic frequency of a neuron, the average signal level was 20-30 dB above the threshold, and the signal duration was equal to ten full modulation periods. Of the 133 neurons studied, 129 neurons responded to 80% modulated tone bursts by discharges that were phase-locked with the envelope waveform. At this modulation depth, the best phase locking was observed for neurons with the phasic type of response to tone bursts. For tonic neurons with low characteristic frequencies, along with the reproduction of the modulation, phase locking with the carrier frequency of the signal was observed. At 10% amplitude modulation, phasic neurons usually responded to only the onset of a tone burst. Almost all tonic units showed a tendency to reproduce the envelope, although the efficiency of the reproduction was low, and for half of these neurons, it was below the reliability limit. Some neurons exhibited a more efficient reproduction of the weak modulation. For almost half of the neurons, a reliable improvement was observed in the phase locking of the response during the tone burst presentation (from the first to the tenth modulation period). The cooperative histogram of a set of neurons responding to 10% modulated tone bursts within narrow ranges of frequencies and intensities retains the information on the dynamics of the envelope variation. The data are compared with the results obtained from the study of the responses to similar signals in the acoustic midbrain center of the same object and also with the psychophysical effect of a differential
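
    Phase locking to the envelope, the quantity at the heart of this record, is conventionally quantified by the vector strength of spike times relative to the modulation period (1 = perfect locking, 0 = none). A minimal sketch on synthetic spike trains, with the 20-Hz modulation frequency taken from the abstract and everything else assumed:

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Resultant length of spike phases relative to the modulation cycle."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return float(np.hypot(np.cos(phases).sum(), np.sin(phases).sum())
                 / len(spike_times))

f_mod = 20.0                                   # 20-Hz modulation, as above
period = 1.0 / f_mod
locked = np.arange(10) * period + 0.002        # one spike/cycle, fixed phase
rng = np.random.default_rng(3)
random_spikes = rng.uniform(0.0, 10 * period, size=1000)

print(round(vector_strength(locked, f_mod), 3),
      round(vector_strength(random_spikes, f_mod), 3))
```

    Perfectly locked spikes give a vector strength near 1, while spikes scattered uniformly over the cycle give a value near zero, which is how "below the reliability limit" responses are identified statistically (e.g., by a Rayleigh test).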

  16. The FORTRAN NALAP code adapted to a microcomputer compiler

    International Nuclear Information System (INIS)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso

    2010-01-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA project (TEcnologia de Reatores Rapidos Avancados), Technology for Advanced Fast Reactors project, aimed at a space reactor application. In this work, to attend the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code in a PC was divided in some steps mainly to remove unnecessary routines, to eliminate old statements, to introduce new ones and also to include extension precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in literature: the normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after transient of loss of flow; and transients protected from overpower. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 80's. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the usage of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop - CBCL - to be used as a heat/electric converter. (author)

  17. The FORTRAN NALAP code adapted to a microcomputer compiler

    Energy Technology Data Exchange (ETDEWEB)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso, E-mail: plobo.a@uol.com.b, E-mail: eduardo@ieav.cta.b, E-mail: fbraz@ieav.cta.b, E-mail: guimarae@ieav.cta.b [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA project (TEcnologia de Reatores Rapidos Avancados), Technology for Advanced Fast Reactors project, aimed at a space reactor application. In this work, to attend the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code in a PC was divided in some steps mainly to remove unnecessary routines, to eliminate old statements, to introduce new ones and also to include extension precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in literature: the normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after transient of loss of flow; and transients protected from overpower. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 80's. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the usage of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop - CBCL - to be used as a heat/electric converter. (author)

  18. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    International Nuclear Information System (INIS)

    2014-01-01

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include a detailed outline of the entire processing procedure for application of CENTRM in a final report complete with demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.

  19. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    Energy Technology Data Exchange (ETDEWEB)

    Ganapol, Barry; Maldonado, Ivan

    2014-01-23

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include a detailed outline of the entire processing procedure for application of CENTRM in a final report complete with demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.

  20. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    International Nuclear Information System (INIS)

    Ganapol, Barry; Maldonado, Ivan

    2014-01-01

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include a detailed outline of the entire processing procedure for application of CENTRM in a final report complete with demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity

  1. Comparison of WDM/Pulse-Position-Modulation (WDM/PPM) with Code/Pulse-Position-Swapping (C/PPS) Based on Wavelength/Time Codes

    Energy Technology Data Exchange (ETDEWEB)

    Mendez, A J; Hernandez, V J; Gagliardi, R M; Bennett, C V

    2009-06-19

    Pulse position modulation (PPM) signaling is favored in intensity modulated/direct detection (IM/DD) systems that have average power limitations. Combining PPM with WDM over a fiber link (WDM/PPM) enables multiple accessing and increases the link's throughput. Electronic bandwidth and synchronization advantages are further gained by mapping the time slots of PPM onto a code space, or code/pulse-position-swapping (C/PPS). The property of multiple bits per symbol typical of PPM can be combined with multiple accessing by using wavelength/time [W/T] codes in C/PPS. This paper compares the performance of WDM/PPM and C/PPS for equal wavelengths and bandwidth.
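
    The "multiple bits per symbol" property of PPM mentioned above follows from log2(M) bits selecting which of M time slots carries the single pulse in each frame. A generic sketch (frame sizes and bit grouping are illustrative, not the paper's experimental parameters):

```python
from math import log2

def ppm_modulate(bits, M):
    """Group log2(M) bits per symbol; each group selects one pulse slot."""
    k = int(log2(M))
    assert len(bits) % k == 0
    frames = []
    for i in range(0, len(bits), k):
        slot = int("".join(map(str, bits[i:i + k])), 2)  # bits -> slot index
        frame = [0] * M
        frame[slot] = 1                                  # one pulse per frame
        frames.append(frame)
    return frames

def ppm_demodulate(frames, M):
    k = int(log2(M))
    bits = []
    for frame in frames:
        slot = frame.index(1)
        bits += [int(b) for b in format(slot, f"0{k}b")]
    return bits

data = [1, 0, 1, 1, 0, 0]        # 6 bits -> 3 frames of 4-PPM (2 bits each)
frames = ppm_modulate(data, M=4)
print(frames)
```

    C/PPS replaces the time-slot index with a codeword from a wavelength/time code space, which keeps this bits-per-symbol property while relaxing the slot-level synchronization requirement.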

  2. Evaluation of four-dimensional nonbinary LDPC-coded modulation for next-generation long-haul optical transport networks.

    Science.gov (United States)

    Zhang, Yequn; Arabaci, Murat; Djordjevic, Ivan B

    2012-04-09

    Leveraging the advanced coherent optical communication technologies, this paper explores the feasibility of using four-dimensional (4D) nonbinary LDPC-coded modulation (4D-NB-LDPC-CM) schemes for long-haul transmission in future optical transport networks. In contrast to our previous works on 4D-NB-LDPC-CM which considered amplified spontaneous emission (ASE) noise as the dominant impairment, this paper undertakes transmission in a more realistic optical fiber transmission environment, taking into account impairments due to dispersion effects, nonlinear phase noise, Kerr nonlinearities, and stimulated Raman scattering in addition to ASE noise. We first reveal the advantages of using 4D modulation formats in LDPC-coded modulation instead of conventional two-dimensional (2D) modulation formats used with polarization-division multiplexing (PDM). Then we demonstrate that 4D LDPC-coded modulation schemes with nonbinary LDPC component codes significantly outperform not only their conventional PDM-2D counterparts but also the corresponding 4D bit-interleaved LDPC-coded modulation (4D-BI-LDPC-CM) schemes, which employ binary LDPC codes as component codes. We also show that the transmission reach improvement offered by the 4D-NB-LDPC-CM over 4D-BI-LDPC-CM increases as the underlying constellation size and hence the spectral efficiency of transmission increases. Our results suggest that 4D-NB-LDPC-CM can be an excellent candidate for long-haul transmission in next-generation optical networks.

  3. Performance of Low-Density Parity-Check Coded Modulation

    Science.gov (United States)

    Hamkins, Jon

    2010-01-01

    This paper reports the simulated performance of each of the nine accumulate-repeat-4-jagged-accumulate (AR4JA) low-density parity-check (LDPC) codes [3] when used in conjunction with binary phase-shift-keying (BPSK), quadrature PSK (QPSK), 8-PSK, 16-ary amplitude PSK (16-APSK), and 32-APSK. We also report the performance under various mappings of bits to modulation symbols, 16-APSK and 32-APSK ring scalings, log-likelihood ratio (LLR) approximations, and decoder variations. One of the simple and well-performing LLR approximations can be expressed in a general equation that applies to all of the modulation types.
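
    A widely used LLR approximation of the kind that applies uniformly across PSK/APSK constellations is the max-log rule: for each bit, take the difference of the minimum squared distances to the constellation points labeled 1 and 0. The sketch below is this standard approximation with an illustrative Gray-labeled QPSK example; it is not claimed to be the specific equation from the paper.

```python
# Max-log LLR approximation for an arbitrary labeled constellation.
def max_log_llrs(r, constellation, labels, noise_var):
    """LLR per bit: (min dist^2 over bit=1 points - over bit=0) / (2*sigma^2)."""
    nbits = len(labels[0])
    llrs = []
    for b in range(nbits):
        d0 = min(abs(r - s) ** 2 for s, lab in zip(constellation, labels)
                 if lab[b] == "0")
        d1 = min(abs(r - s) ** 2 for s, lab in zip(constellation, labels)
                 if lab[b] == "1")
        llrs.append((d1 - d0) / (2 * noise_var))
    return llrs

# Gray-labeled QPSK; the received sample r is a noisy version of 1+1j.
qpsk = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
labels = ["00", "01", "11", "10"]
llrs = max_log_llrs(0.9 + 1.1j, qpsk, labels, noise_var=0.5)
print([round(v, 2) for v in llrs])
```

    Because only distances to constellation points appear, the same function serves BPSK, QPSK, 8-PSK, and the APSK rings by swapping the point list and labeling, which is the sense in which one equation covers all modulation types.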

  4. Nonlinear pre-coding apparatus of multi-antenna system, has pre-coding unit that extents original constellation points of modulated symbols to several constellation points by using limited perturbation vector

    DEFF Research Database (Denmark)

    2008-01-01

    Coding/modulating units (200-1 to 200-N) output modulated symbols by modulating coded bit streams based on a certain modulation scheme. The limited perturbation vector is calculated by using the distribution of perturbation vectors. The original constellation points of the modulated symbols are extended to several constellation points by using the limited perturbation vector.

  5. Amplitude modulation reduces loudness adaptation to high-frequency tones.

    Science.gov (United States)

    Wynne, Dwight P; George, Sahara E; Zeng, Fan-Gang

    2015-07-01

    Long-term loudness perception of a sound has been presumed to depend on the spatial distribution of activated auditory nerve fibers as well as their temporal firing pattern. The relative contributions of those two factors were investigated by measuring loudness adaptation to sinusoidally amplitude-modulated 12-kHz tones. The tones had a total duration of 180 s and were either unmodulated or 100%-modulated at one of three frequencies (4, 20, or 100 Hz), and additionally varied in modulation depth from 0% to 100% at the 4-Hz frequency only. Every 30 s, normal-hearing subjects estimated the loudness of one of the stimuli played at 15 dB above threshold in random order. Without any amplitude modulation, the loudness of the unmodulated tone after 180 s was only 20% of the loudness at the onset of the stimulus. Amplitude modulation systematically reduced the amount of loudness adaptation, with the 100%-modulated stimuli, regardless of modulation frequency, maintaining on average 55%-80% of the loudness at onset after 180 s. Because the present low-frequency amplitude modulation produced minimal changes in long-term spectral cues affecting the spatial distribution of excitation produced by a 12-kHz pure tone, the present result indicates that neural synchronization is critical to maintaining loudness perception over time.
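
    The stimulus construction described above, a high-frequency carrier sinusoidally amplitude-modulated at a low rate, can be sketched directly; the sample rate and duration here are arbitrary assumptions, while the carrier (12 kHz), modulation frequency (4 Hz), and depth (m = 1, i.e., 100%) come from the abstract.

```python
import numpy as np

# Sinusoidally amplitude-modulated (SAM) tone: carrier 12 kHz, 4-Hz
# modulation at depth m. Peak-normalized by (1+m).
fs, dur = 48000, 1.0
t = np.arange(int(fs * dur)) / fs
fc, fm, m = 12000.0, 4.0, 1.0          # carrier, modulation freq, depth

carrier = np.sin(2 * np.pi * fc * t)
envelope = 1.0 + m * np.sin(2 * np.pi * fm * t)
x = envelope * carrier / (1.0 + m)

# The envelope swings between (1-m) and (1+m): 100% depth reaches zero.
print(float(envelope.min()), float(envelope.max()))
```

    At m = 1 the envelope periodically reaches zero amplitude, the fully modulated condition that preserved the most loudness over time in the study; m = 0 recovers the unmodulated tone that adapted most strongly.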

  6. Dual Coding of Frequency Modulation in the Ventral Cochlear Nucleus.

    Science.gov (United States)

    Paraouty, Nihaad; Stasiak, Arkadiusz; Lorenzi, Christian; Varnet, Léo; Winter, Ian M

    2018-04-25

    Frequency modulation (FM) is a common acoustic feature of natural sounds and is known to play a role in robust sound source recognition. Auditory neurons show precise stimulus-synchronized discharge patterns that may be used for the representation of low-rate FM. However, it remains unclear whether this representation is based on synchronization to slow temporal envelope (ENV) cues resulting from cochlear filtering or phase locking to faster temporal fine structure (TFS) cues. To investigate the plausibility of those encoding schemes, single units of the ventral cochlear nucleus of guinea pigs of either sex were recorded in response to sine FM tones centered at the unit's best frequency (BF). The results show that, in contrast to high-BF units, low-BF units phase lock to TFS cues for modulation depths within the receptive field; for modulation depths extending beyond the receptive field, the discharge patterns follow the ENV and fluctuate at the modulation rate. The receptive field proved to be a good predictor of the ENV responses for most primary-like and chopper units. The current in vivo data also reveal a high level of diversity in responses across unit types. TFS cues are mainly conveyed by low-frequency and primary-like units and ENV cues by chopper and onset units. The diversity of responses exhibited by cochlear nucleus neurons provides a neural basis for a dual-coding scheme of FM in the brainstem based on both ENV and TFS cues. SIGNIFICANCE STATEMENT Natural sounds, including speech, convey informative temporal modulations in frequency. Understanding how the auditory system represents those frequency modulations (FM) has important implications as robust sound source recognition depends crucially on the reception of low-rate FM cues.
    Here, we recorded 115 single-unit responses from the ventral cochlear nucleus in response to FM and provide the first physiological evidence of a dual-coding mechanism of FM via synchronization to temporal envelope cues and phase locking to temporal fine structure cues.

  7. ICAN Computer Code Adapted for Building Materials

    Science.gov (United States)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  8. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of coders for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
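The threshold-driven, variable-blocksize idea can be sketched as follows: code each block with a truncated DCT, and recursively split any block whose reconstruction distortion exceeds a threshold. This is a minimal illustration only; the base block size, the number of retained coefficients, and the MSE threshold are assumptions, not the paper's actual coder.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def code_block(block, keep):
    """DCT-code a square block, keeping only the top-left keep x keep coefficients."""
    n = block.shape[0]
    C = dct_matrix(n)
    coeff = C @ block @ C.T
    mask = np.zeros_like(coeff)
    mask[:keep, :keep] = 1.0
    return C.T @ (coeff * mask) @ C

def mixture_block_code(img, block=8, keep=2, threshold=25.0, min_block=2):
    """Threshold-driven mixture block coding: quadtree-split any block whose
    reconstruction MSE exceeds `threshold`, down to `min_block`."""
    out = np.empty_like(img, dtype=float)

    def recurse(y, x, n):
        sub = img[y:y + n, x:x + n].astype(float)
        rec = code_block(sub, min(keep, n))
        mse = np.mean((sub - rec) ** 2)
        if mse > threshold and n > min_block:
            h = n // 2
            for dy in (0, h):
                for dx in (0, h):
                    recurse(y + dy, x + dx, h)
        else:
            out[y:y + n, x:x + n] = rec

    for y in range(0, img.shape[0], block):
        for x in range(0, img.shape[1], block):
            recurse(y, x, block)
    return out
```

Smooth regions stay as large cheap blocks; only busy regions pay for smaller blocks, which is the variable-rate behavior the abstract describes.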

  9. Computationally Efficient Amplitude Modulated Sinusoidal Audio Coding using Frequency-Domain Linear Prediction

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jensen, Søren Holdt

    2006-01-01

A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. This is based on a subband processing system, where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency...

  10. A software reconfigurable optical multiband UWB system utilizing a bit-loading combined with adaptive LDPC code rate scheme

    Science.gov (United States)

    He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin

    2017-07-01

In this paper, an effective bit-loading algorithm combined with an adaptive LDPC code rate (ALCR) scheme is proposed and investigated in a software reconfigurable multiband UWB over fiber system. To compensate for the power fading and chromatic dispersion affecting the high-frequency multiband OFDM UWB signal transmitted over standard single mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with negative chirp parameter is utilized. In addition, a negative power penalty of -1 dB for the 128 QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10-3 after 50 km SSMF transmission. The experimental results show that, compared to a fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128 QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
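Bit-loading of this general kind assigns a constellation to each subcarrier from its measured SNR so that every subcarrier individually meets a BER target. A minimal sketch, using the common M-QAM BER approximation 0.2·exp(−1.5γ/(M−1)) and an assumed mode set; neither is the authors' algorithm:

```python
import math

MOD_BITS = (0, 2, 4, 6)  # no transmission, QPSK, 16-QAM, 64-QAM (assumed mode set)

def max_bits(snr_linear, target_ber):
    """Largest supported constellation (in bits/symbol) whose approximate
    M-QAM BER, 0.2*exp(-1.5*snr/(M-1)), stays at or below target_ber."""
    k = 1.5 * snr_linear / math.log(0.2 / target_ber)
    if k <= 0:
        return 0
    b = math.log2(1.0 + k)  # invert the BER approximation for M = 2**b
    return max((m for m in MOD_BITS if m <= b), default=0)

def bit_loading(snrs_db, target_ber=1e-3):
    """Per-subcarrier bit allocation from per-subcarrier SNRs in dB."""
    return [max_bits(10 ** (s / 10.0), target_ber) for s in snrs_db]
```

Faded subcarriers are switched off or lightly loaded while strong ones carry 64-QAM, which is the mechanism that buys the SNR margin reported above.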

  11. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    Directory of Open Access Journals (Sweden)

    Laureline Logiaco

    2015-08-01

Full Text Available The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.

  12. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    Science.gov (United States)

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-08-01

The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.
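The contrast between spike-count and spike-timing-sensitive decoders can be illustrated with a toy nearest-template decoder whose bin width plays the role of the decoding time scale: a single wide bin reduces it to a pure count decoder. The spike times and templates below are synthetic; this is not the authors' decoding algorithm.

```python
import numpy as np

def binned(spikes, t_max, bin_ms):
    """Bin spike times (in ms) into a vector of counts."""
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    return np.histogram(spikes, bins=edges)[0].astype(float)

def nearest_template_decode(trial, templates, t_max, bin_ms):
    """Classify a spike train by nearest template at a given bin width.
    A very large bin collapses this to a spike-count decoder."""
    v = binned(trial, t_max, bin_ms)
    dists = [np.linalg.norm(v - binned(t, t_max, bin_ms)) for t in templates]
    return int(np.argmin(dists))
```

Two responses with identical spike counts but different timing are separable only at a sufficiently fine bin width, mirroring the paper's finding that temporal structure carries information the count alone does not.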

  13. Hybrid Strategies for Link Adaptation Exploiting Several Degrees of Freedom in OFDM Based Broadband Wireless Systems

    DEFF Research Database (Denmark)

    Das, Suvra S.; Rahman, Muhammad Imadur; Wang, Yuanye

    2007-01-01

In orthogonal frequency division multiplexing (OFDM) systems, there are several degrees of freedom in the time and frequency domains, such as sub-band size, forward error control coding (FEC) rate, modulation order, power level, modulation adaptation interval, coding rate adaptation interval and power... Adaptation of the link parameters based on the channel conditions would lead to highly complex systems with high overhead. Hybrid strategies that vary the adaptation rates to trade off achievable efficiency against complexity are presented in this work.
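One way to picture such a hybrid strategy is to run different link parameters on different update intervals: fast-changing parameters follow the channel every frame, while slower ones update only periodically to cut signaling overhead. A toy sketch; the MCS thresholds and the interval are illustrative assumptions, not taken from the paper.

```python
# (min SNR in dB, bits/symbol, code rate) -- purely illustrative thresholds
MCS_TABLE = [
    (22.0, 6, 0.75),          # 64-QAM, rate 3/4
    (16.0, 4, 0.75),          # 16-QAM, rate 3/4
    (10.0, 2, 0.75),          # QPSK,   rate 3/4
    (4.0,  2, 0.50),          # QPSK,   rate 1/2
    (float("-inf"), 0, 0.0),  # outage: no transmission
]

def select_mcs(snr_db):
    for thr, bits, rate in MCS_TABLE:
        if snr_db >= thr:
            return bits, rate

class HybridLinkAdapter:
    """Adapt the modulation order every frame, but the coding rate only
    every `code_interval` frames (slower loop -> less feedback overhead)."""
    def __init__(self, code_interval=8):
        self.code_interval = code_interval
        self.frame = 0
        self.bits, self.rate = 0, 0.0

    def update(self, snr_db):
        bits, rate = select_mcs(snr_db)
        self.bits = bits                          # fast adaptation
        if self.frame % self.code_interval == 0:  # slow adaptation
            self.rate = rate
        self.frame += 1
        return self.bits, self.rate
```

The gap between the two loops is exactly the efficiency-versus-complexity tradeoff the abstract refers to.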

  14. MODLIB, library of Fortran modules for nuclear reaction codes

    International Nuclear Information System (INIS)

    Talou, Patrick

    2006-01-01

1 - Description of program or function: ModLib is a library of Fortran (90-compatible) modules to be used in existing and future nuclear reaction codes. The development of the library is an international effort being undertaken under the auspices of the long-term Subgroup A of the OECD/NEA Working Party on Evaluation and Cooperation. The aim is to constitute a library of well-tested and well-documented pieces of code that can be used with confidence in all our coding efforts. This effort will undoubtedly help avoid the duplication of work, and most certainly facilitate the very important inter-comparisons between existing codes. 2 - Methods: - Width_fluctuations [Talou, Chadwick]: calculates width fluctuation correction factors (output) for a set of transmission coefficients (input). Three methods are available: HRTW, Moldauer, and Verbaarschot (also called the GOE approach). So far, no distinction is made according to the channel type of the coefficients (particle emission, gamma-ray emission, fission). - Gamma_strength [Herman]: calculates gamma-ray transmission coefficients using a Giant Resonance formalism. - Level_density [Koning]: computes the Gilbert-Cameron-Ignatyuk formalism for the continuum nuclear level density. - CHECKR, FIZCON, INTER, PSYCHE, STANEF [Dunford]: these modules are used in the MODLIB project but are not included in this package. They are available from the NEA Data Bank Computer Program Service under Package IDs: CHECKR (USCD1208), FIZCON (USCD1209), INTER (USCD1212), PSYCHE (USCD1216), STANEF (USCD1218)

  15. Space-Time Turbo Trellis Coded Modulation for Wireless Data Communications

    Directory of Open Access Journals (Sweden)

    Welly Firmanto

    2002-05-01

Full Text Available This paper presents the design of space-time turbo trellis coded modulation (ST turbo TCM) for improving the bandwidth efficiency and the reliability of future wireless data networks. We present new recursive space-time trellis coded modulation (STTC) which outperforms the feedforward STTC proposed by Tarokh et al. (1998) and Baro et al. (2000) on slow and fast fading channels. A substantial improvement in performance can be obtained by constructing ST turbo TCM consisting of concatenated recursive STTC, decoded by an iterative decoding algorithm. The proposed recursive STTC are used as constituent codes in this scheme. They have been designed to satisfy the design criteria for STTC on slow and fast fading channels, derived for systems with the product of transmit and receive antennas larger than 3. The proposed ST turbo TCM significantly outperforms the best known STTC on both slow and fast fading channels. The capacity of this scheme on fast fading channels is less than 3 dB away from the theoretical capacity bound for multi-input multi-output (MIMO) channels.

16. Application and analysis of performance of DQPSK advanced modulation format in spectral amplitude coding OCDMA

    International Nuclear Information System (INIS)

    Memon, A.

    2015-01-01

SAC (Spectral Amplitude Coding) is a technique of OCDMA (Optical Code Division Multiple Access) to encode and decode data bits by utilizing spectral components of the broadband source. Usually the OOK (On-Off Keying) modulation format is used in this encoding scheme. To make the SAC OCDMA network spectrally efficient, the advanced modulation format DQPSK (Differential Quaternary Phase Shift Keying) is applied, simulated and analyzed. An m-sequence code is encoded in the simulated setup. Performance for various lengths of the m-sequence code is also analyzed and displayed in pictorial form. The results of the simulation are evaluated with the help of the electrical constellation diagram, eye diagram and bit error rate graph. All the graphs indicate better transmission quality with the advanced DQPSK modulation format in the SAC OCDMA network as compared with OOK. (author)
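An m-sequence is the output of a maximal-length Fibonacci LFSR driven by a primitive polynomial, which is why its length is always 2^n − 1 and its ones outnumber its zeros by exactly one. A small generator sketch; the tap sets and all-ones seed are standard textbook choices, not parameters from the paper:

```python
def m_sequence(taps, length=None, state=None):
    """Generate a binary m-sequence from a Fibonacci LFSR.
    `taps` are 1-based feedback tap positions corresponding to a primitive
    polynomial, e.g. (3, 2) for x^3 + x^2 + 1 (period 7)."""
    n = max(taps)
    reg = list(state) if state else [1] * n  # any nonzero seed works
    period = (1 << n) - 1
    out = []
    for _ in range(length or period):
        out.append(reg[-1])            # output from the last stage
        fb = 0
        for t in taps:                 # XOR the tapped stages
            fb ^= reg[t - 1]
        reg = [fb] + reg[:-1]          # shift feedback bit in at the front
    return out
```

The balance and run-length properties of these sequences are what make them usable as SAC spectral codes.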

  17. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  18. Application and Analysis of Performance of DQPSK Advanced Modulation Format in Spectral Amplitude Coding OCDMA

    Directory of Open Access Journals (Sweden)

    Abdul Latif Memon

    2014-04-01

Full Text Available SAC (Spectral Amplitude Coding) is a technique of OCDMA (Optical Code Division Multiple Access) to encode and decode data bits by utilizing spectral components of the broadband source. Usually the OOK (On-Off Keying) modulation format is used in this encoding scheme. To make the SAC OCDMA network spectrally efficient, the advanced modulation format DQPSK (Differential Quaternary Phase Shift Keying) is applied, simulated and analyzed. An m-sequence code is encoded in the simulated setup. Performance for various lengths of the m-sequence code is also analyzed and displayed in pictorial form. The results of the simulation are evaluated with the help of the electrical constellation diagram, eye diagram and bit error rate graph. All the graphs indicate better transmission quality with the advanced DQPSK modulation format in the SAC OCDMA network as compared with OOK.

  19. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
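The AMBTC base coder that the quadtree scheme builds on represents each block by just two reconstruction levels and a bitmap. A minimal sketch of that base coder (block handling only; none of the paper's quadtree or bit-plane logic):

```python
import numpy as np

def ambtc_block(block):
    """Absolute moment BTC: encode a block as (low level, high level, bitmap).
    The two levels are the means of the pixels below / at-or-above the block mean."""
    mean = block.mean()
    bitmap = block >= mean
    hi = block[bitmap].mean() if bitmap.any() else mean
    lo = block[~bitmap].mean() if (~bitmap).any() else mean
    return lo, hi, bitmap

def ambtc_decode(lo, hi, bitmap):
    """Reconstruct the block from its two levels and bitmap."""
    return np.where(bitmap, hi, lo)
```

Because only two scalars and one bit per pixel are stored, the rate is fixed and very low; the quadtree and adaptive bit plane in the paper are ways of spending extra bits only where this two-level model fails.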

  20. Information rates of next-generation long-haul optical fiber systems using coded modulation

    NARCIS (Netherlands)

    Liga, G.; Alvarado, A.; Agrell, E.; Bayvel, P.

    2017-01-01

    A comprehensive study of the coded performance of long-haul spectrally-efficient WDM optical fiber transmission systems with different coded modulation decoding structures is presented. Achievable information rates are derived for three different square QAM formats and the optimal format is

  1. Adaption of the PARCS Code for Core Design Audit Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyong Chol; Lee, Young Jin; Uhm, Jae Beop; Kim, Hyunjik [Nuclear Safety Evaluation, Daejeon (Korea, Republic of); Jeong, Hun Young; Ahn, Seunghoon; Woo, Swengwoong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2013-05-15

The eigenvalue calculation also includes quasi-static core depletion analyses. PARCS has implemented a variety of features and has been qualified as a regulatory audit code in conjunction with other NRC thermal-hydraulic codes such as TRACE or RELAP5. In this study, as an adaptation effort for audit applications, PARCS is applied to an audit analysis of a reload core design. The lattice physics code HELIOS is used for cross section generation. The PARCS-HELIOS code system has been established as a core analysis tool. Calculation results have been compared over a wide spectrum of quantities, such as power distribution, critical soluble boron concentration, and rod worth. Reasonable agreement between the audit calculation and the reference results has been found.

  2. Use of sensitivity-information for the adaptive simulation of thermo-hydraulic system codes

    International Nuclear Information System (INIS)

    Kerner, Alexander M.

    2011-01-01

Within the scope of this thesis, methods were developed for the online adaptation of dynamical plant simulations of a thermal-hydraulic system code to measurement data. The described approaches are mainly based on the use of sensitivity information in different areas: statistical sensitivity measures are used for the identification of the parameters to be adapted, and online sensitivities for the parameter adjustment itself. For the parameter adjustment, the method of a "system-adapted heuristic adaptation with partial separation" (SAHAT) was developed, which combines certain variants of parameter estimation and control with supporting procedures to solve the basic problems. The applicability of the methods is shown by adaptive simulations of a PKL-III experiment and by selected transients in a nuclear power plant. Finally, the main perspectives for the application of a tracking simulator on a system code are identified.
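The core idea of sensitivity-based online adaptation can be reduced to a toy loop: nudge a model parameter along its output sensitivity so the simulation tracks the measurements. This is a heavily simplified, hypothetical stand-in for the SAHAT procedure (scalar parameter, known sensitivity, fixed gain):

```python
def adapt_parameter(theta, measurements, simulate, sensitivity, gain=0.5):
    """Online adaptation sketch: for each new measurement, correct the
    parameter by the simulation error divided by the output sensitivity
    dy/dtheta (a Newton-like step, damped by `gain`)."""
    history = []
    for y_meas in measurements:
        y_sim = simulate(theta)
        s = sensitivity(theta)
        theta += gain * (y_meas - y_sim) / s
        history.append(theta)
    return theta, history
```

Dividing by the sensitivity is what distinguishes this from a blind gradient step: parameters to which the output is insensitive are moved more aggressively, and highly sensitive ones more cautiously.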

  3. Performance analysis of joint diversity combining, adaptive modulation, and power control schemes

    KAUST Repository

    Qaraqe, Khalid A.

    2011-01-01

Adaptive modulation and diversity combining represent very important adaptive solutions for future generations of wireless communication systems. Indeed, in order to improve the performance and the efficiency of these systems, these two techniques have been recently used jointly in new schemes named joint adaptive modulation and diversity combining (JAMDC) schemes. Considering the problem of finding low hardware complexity, bandwidth-efficient, and processing-power efficient transmission schemes for a downlink scenario and capitalizing on some of these recently proposed JAMDC schemes, we propose and analyze in this paper three joint adaptive modulation, diversity combining, and power control (JAMDCPC) schemes where a constant-power variable-rate adaptive modulation technique is used with an adaptive diversity combining scheme and a common power control process. More specifically, the modulation constellation size, the number of combined diversity paths, and the needed power level are jointly determined to achieve the highest spectral efficiency with the lowest possible processing power consumption quantified in terms of the average number of combined paths, given the fading channel conditions and the required bit error rate (BER) performance. In this paper, the performance of these three JAMDCPC schemes is analyzed in terms of their spectral efficiency, processing power consumption, and error-rate performance. Selected numerical examples show that these schemes considerably increase the spectral efficiency of the existing JAMDC schemes with a slight increase in the average number of combined paths for the low signal-to-noise ratio range while maintaining compliance with the BER performance and a low radiated power, which yields a substantial decrease in interference to co-existing users and systems. © 2011 IEEE.
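The joint determination described above can be pictured as a nested search: prefer the highest spectral efficiency first, then the fewest combined paths, then the lowest power level that still meets the BER target. A rough sketch only; the closed-form M-QAM BER approximation and the assumption that MRC with L equal-SNR branches simply scales the effective SNR by L are simplifications of mine, not the paper's analysis:

```python
import math

def approx_ber(bits, snr_linear):
    """Crude M-QAM BER approximation: 0.2 * exp(-1.5 * snr / (M - 1))."""
    return 0.2 * math.exp(-1.5 * snr_linear / (2 ** bits - 1))

def jamdcpc_select(branch_snr, target_ber=1e-3,
                   bits_options=(6, 4, 2), max_paths=4,
                   power_levels=(0.25, 0.5, 1.0)):
    """Jointly pick (constellation bits, combined paths, power level):
    highest rate first, then fewest paths, then least power, subject to
    the approximate BER meeting the target."""
    for bits in bits_options:                  # prefer spectral efficiency
        for paths in range(1, max_paths + 1):  # then fewest combined paths
            for p in power_levels:             # then least radiated power
                if approx_ber(bits, p * paths * branch_snr) <= target_ber:
                    return bits, paths, p
    return 0, 0, 0.0                           # outage: no feasible mode
```

The ordering of the three loops encodes the scheme's priorities; swapping them would trade spectral efficiency against processing power, which is exactly the design space the paper explores.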

  4. ICPP - a collision probability module for the AUS neutronics code system

    International Nuclear Information System (INIS)

    Robinson, G.S.

    1985-10-01

    The isotropic collision probability program (ICPP) is a module of the AUS neutronics code system which calculates first flight collision probabilities for neutrons in one-dimensional geometries and in clusters of rods. Neutron sources, including scattering, are assumed to be isotropic and to be spatially flat within each mesh interval. The module solves the multigroup collision probability equations for eigenvalue or fixed source problems

  5. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    Science.gov (United States)

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  6. Damper modules with adapted stiffness ratio

    Energy Technology Data Exchange (ETDEWEB)

    Sonnenburg, R.; Stretz, A. [ZF Sachs AG, Entwicklungszentrum, Schweinfurt (Germany)

    2011-07-15

A mechanism for the excitation of piston rod vibrations in automotive damper modules is discussed by means of a simple model. An improved nonlinear model based on elasticity effects leads to good simulation results. It is shown theoretically and experimentally that the adaptation of the stiffness of the piston rod bushing to the "stiffness" of the damper force characteristic can eliminate the piston rod oscillations completely. (orig.)

  7. Performance analysis of adaptive modulation for cognitive radios with opportunistic access

    KAUST Repository

    Chen, Yunfei; Alouini, Mohamed-Slim; Tang, Liang

    2011-01-01

    The performance of adaptive modulation for cognitive radio with opportunistic access is analyzed by considering the effects of spectrum sensing and primary user traffic for Nakagami-m fading channels. Both the adaptive continuous rate scheme

  8. History-based Adaptive Modulation for a Downlink Multicast Channel in OFDMA systems

    DEFF Research Database (Denmark)

    Wang, Haibo; Schwefel, Hans-Peter; Toftegaard, Thomas Skjødeberg

    2008-01-01

In this paper we investigated the adaptive modulation strategies for Multicast service in orthogonal frequency division multiple access systems. We defined a Reward function as the performance optimization target and developed adaptive modulation strategies to maximize this Reward function. The proposed optimization algorithm varied the instantaneous BER constraint of each mobile Multicast receiver according to its individual cumulated BER, which resulted in a significant Reward gain.

  9. Interacting Brain Modules for Memory: An Adaptive Representations Architecture

    National Research Council Canada - National Science Library

    Gluck, Mark A

    2008-01-01

    ...) as a central system for creating optimal and adaptive stimulus representations, and then worked outwards from the hippocampal region to the brain systems that it modulates, including the cerebellum...

  10. Adaptive Space–Time Coding Using ARQ

    KAUST Repository

    Makki, Behrooz

    2015-09-01

    We study the energy-limited outage probability of the block space-time coding (STC)-based systems utilizing automatic repeat request (ARQ) feedback and adaptive power allocation. Taking the ARQ feedback costs into account, we derive closed-form solutions for the energy-limited optimal power allocation and investigate the diversity gain of different STC-ARQ schemes. In addition, sufficient conditions are derived for the usefulness of ARQ in terms of energy-limited outage probability. The results show that, for a large range of feedback costs, the energy efficiency is substantially improved by the combination of ARQ and STC techniques if optimal power allocation is utilized. © 2014 IEEE.

  11. Joint switched transmit diversity and adaptive modulation in spectrum sharing systems

    KAUST Repository

    Qaraqe, Khalid A.; Bouida, Zied; Abdallah, Mohamed M.; Alouini, Mohamed-Slim

    2011-01-01

    Under the scenario of an underlay cognitive radio network, we propose in this paper an adaptive scheme using switched transmit diversity and adaptive modulation in order to minimize the average number of switched branches at the secondary

  12. Amplitude Modulated Sinusoidal Signal Decomposition for Audio Coding

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jacobson, A.; Andersen, S. V.

    2006-01-01

In this paper, we present a decomposition for sinusoidal coding of audio, based on an amplitude modulation of sinusoids via a linear combination of arbitrary basis vectors. The proposed method, which incorporates a perceptual distortion measure, is based on a relaxation of a nonlinear least-squares minimization. Rate-distortion curves and listening tests show that, compared to a constant-amplitude sinusoidal coder, the proposed decomposition offers perceptually significant improvements in critical transient signals.
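Because the envelope is constrained to a linear combination of fixed basis vectors and the carrier frequency is treated as known, fitting the amplitude-modulated sinusoid reduces to an ordinary linear least-squares problem. A small sketch under those assumptions (the basis choice and known-frequency setup are mine, for illustration; the paper's relaxation is more general):

```python
import numpy as np

def fit_am_sinusoid(x, freq, basis, fs=1.0):
    """Least-squares fit of x[n] ~ a[n]*cos(w n) + b[n]*sin(w n), where the
    envelopes a and b are linear combinations of the columns of `basis`."""
    n = np.arange(len(x))
    w = 2 * np.pi * freq / fs
    # Design matrix: every basis column modulated by the cos and sin carriers
    A = np.hstack([basis * np.cos(w * n)[:, None],
                   basis * np.sin(w * n)[:, None]])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return A @ coef, coef
```

Quantizing `coef` rather than the raw envelope is what makes the parameterization attractive for coding.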

  13. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    Science.gov (United States)

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical physics energy minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose a discrete-time implementation of the D-dimensional transceiver and the corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America

  14. Regional Atmospheric Transport Code for Hanford Emission Tracking, Version 2 (RATCHET2)

    International Nuclear Information System (INIS)

    Ramsdell, James V.; Rishel, Jeremy P.

    2006-01-01

    This manual describes the atmospheric model and computer code for the Atmospheric Transport Module within SAC. The Atmospheric Transport Module, called RATCHET2, calculates the time-integrated air concentration and surface deposition of airborne contaminants to the soil. The RATCHET2 code is an adaptation of the Regional Atmospheric Transport Code for Hanford Emissions Tracking (RATCHET). The original RATCHET code was developed to perform the atmospheric transport for the Hanford Environmental Dose Reconstruction Project. Fundamentally, the two sets of codes are identical; no capabilities have been deleted from the original version of RATCHET. Most modifications are generally limited to revision of the run-specification file to streamline the simulation process for SAC.

15. Regional Atmospheric Transport Code for Hanford Emission Tracking, Version 2 (RATCHET2)

    Energy Technology Data Exchange (ETDEWEB)

    Ramsdell, James V.; Rishel, Jeremy P.

    2006-07-01

    This manual describes the atmospheric model and computer code for the Atmospheric Transport Module within SAC. The Atmospheric Transport Module, called RATCHET2, calculates the time-integrated air concentration and surface deposition of airborne contaminants to the soil. The RATCHET2 code is an adaptation of the Regional Atmospheric Transport Code for Hanford Emissions Tracking (RATCHET). The original RATCHET code was developed to perform the atmospheric transport for the Hanford Environmental Dose Reconstruction Project. Fundamentally, the two sets of codes are identical; no capabilities have been deleted from the original version of RATCHET. Most modifications are generally limited to revision of the run-specification file to streamline the simulation process for SAC.

  16. An Adaptive Motion Estimation Scheme for Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2014-01-01

Full Text Available The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to statistics of the motion vector (MV) distribution. Then, an MV distribution prediction method is designed, covering prediction of both the magnitude and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is performed with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised.
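Search patterns of this family walk a small neighborhood around the current best candidate and re-center until no move improves the matching cost. A simplified small-diamond sketch (sum-of-absolute-differences cost; this is not the UMHexagonS pattern itself):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def diamond_search(ref, cur, y, x, bs=8, max_iter=16):
    """Small-diamond block matching for the block of `cur` at (y, x),
    searched in `ref`. Returns the motion vector (dy, dx) and its cost."""
    block = cur[y:y + bs, x:x + bs]
    best = (0, 0)
    best_cost = sad(ref[y:y + bs, x:x + bs], block)
    for _ in range(max_iter):
        cy, cx = best
        cand = [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
        moved = False
        for my, mx in cand:
            ry, rx = y + my, x + mx
            if 0 <= ry <= ref.shape[0] - bs and 0 <= rx <= ref.shape[1] - bs:
                c = sad(ref[ry:ry + bs, rx:rx + bs], block)
                if c < best_cost:
                    best_cost, best = c, (my, mx)
                    moved = True
        if not moved:  # local minimum of the SAD surface
            break
    return best, best_cost
```

The adaptive schemes above go further by predicting where and how far to place such patterns from the MV statistics, rather than always starting at (0, 0).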

  17. Lunar Module 5 ascent stage being moved for mating with adapter

    Science.gov (United States)

    1969-01-01

    Interior view of the Kennedy Space Center's (KSC) Manned Spacecraft Operations Building showing Lunar Module 5 being moved from workstand for mating with its Spacecraft Lunar Module Adapter (SLA). LM-5 is scheduled to be flown on the Apollo 11 lunar landing mission.

18. QoS-aware error recovery in wireless body sensor networks using adaptive network coding.

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2014-12-29

Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS) in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill the QoS requirements of both users/applications and the corresponding network. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a bounded delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, under dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both perspectives. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.
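The recovery idea underneath such schemes: send k source packets as n ≥ k random linear combinations, and recover all sources from any full-rank subset by Gaussian elimination; the redundancy n − k is the knob an adaptive scheme tunes to the loss rate and QoS target. A self-contained sketch over GF(2) (binary coefficients and the packet format are illustrative choices, not the paper's protocol):

```python
import random

def encode(packets, n, rng):
    """Random linear network coding over GF(2): each coded packet is the
    XOR of a random nonempty subset of the k equal-length source packets."""
    k = len(packets)
    coded = []
    for _ in range(n):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[rng.randrange(k)] = 1  # avoid the useless all-zero packet
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

def decode(received, k):
    """Gaussian elimination over GF(2); returns the k source packets,
    or None if the received packets do not span the full space."""
    basis = [None] * k
    for coeffs, payload in received:        # forward elimination
        coeffs, payload = list(coeffs), bytearray(payload)
        for j in range(k):
            if not coeffs[j]:
                continue
            if basis[j] is None:
                basis[j] = (coeffs, payload)
                break
            bc, bp = basis[j]
            coeffs = [a ^ b for a, b in zip(coeffs, bc)]
            payload = bytearray(a ^ b for a, b in zip(payload, bp))
    if any(b is None for b in basis):
        return None
    for j in reversed(range(k)):            # back-substitution
        coeffs, payload = basis[j]
        for m in range(j + 1, k):
            if coeffs[m]:
                mc, mp = basis[m]
                coeffs = [a ^ b for a, b in zip(coeffs, mc)]
                payload = bytearray(a ^ b for a, b in zip(payload, mp))
        basis[j] = (coeffs, payload)
    return [bytes(p) for _, p in basis]
```

Lost packets need no retransmission as long as enough independent combinations arrive, which is why this buys reliability at low delay compared to ARQ.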

  19. Performance analysis of joint diversity combining, adaptive modulation, and power control schemes

    KAUST Repository

    Qaraqe, Khalid A.; Bouida, Zied; Alouini, Mohamed-Slim

    2011-01-01

    Adaptive modulation and diversity combining represent very important adaptive solutions for future generations of wireless communication systems. Indeed, in order to improve the performance and the efficiency of these systems, these two techniques

  20. New adaptive differencing strategy in the PENTRAN 3-d parallel Sn code

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Haghighat, A.

    1996-01-01

    It is known that three-dimensional (3-D) discrete ordinates (Sn) transport problems require an immense amount of storage and computational effort to solve. For this reason, parallel codes that offer a capability to completely decompose the angular, energy, and spatial domains among a distributed network of processors are required. One such code recently developed is PENTRAN, which iteratively solves 3-D multi-group, anisotropic Sn problems on distributed-memory platforms, such as the IBM-SP2. Because large problems typically contain several different material zones with various properties, available differencing schemes should automatically adapt to the transport physics in each material zone. To minimize the memory and message-passing overhead required for massively parallel Sn applications, available differencing schemes in an adaptive strategy should also offer reasonable accuracy and positivity, yet require only the zeroth spatial moment of the transport equation; differencing schemes based on higher spatial moments, in spite of their greater accuracy, require at least twice the amount of storage and communication cost for implementation in a massively parallel transport code. This paper discusses a new adaptive differencing strategy that uses increasingly accurate schemes with low parallel memory and communication overhead. This strategy, implemented in PENTRAN, includes a new scheme, exponential directional averaged (EDA) differencing

  1. Fixed or adapted conditioning intensity for repeated conditioned pain modulation.

    Science.gov (United States)

    Hoegh, M; Petersen, K K; Graven-Nielsen, T

    2017-12-29

    Aims: Conditioned pain modulation (CPM) is used to assess descending pain modulation through a test stimulation (TS) and a conditioning stimulation (CS). Due to potential carry-over effects, sequential CPM paradigms might alter the intensity of the CS, which potentially can alter the CPM-effect. This study aimed to investigate the difference between a fixed and an adaptive CS intensity on the CPM-effect. Methods: On the dominant leg of 20 healthy subjects the cuff pressure detection threshold (PDT) was recorded as TS, and the pain tolerance threshold (PTT) was assessed on the non-dominant leg for estimating the CS. The difference in PDT before and during CS defined the CPM-effect. The CPM-effect was assessed four times using a CS with intensities of 70% of baseline PTT (fixed) or 70% of PTT measured throughout the session (adaptive). Pain intensity of the conditioning stimulus was assessed on a numeric rating scale (NRS). Data were analyzed with repeated-measures ANOVA. Results: No difference was found comparing the four PDTs assessed before CSs for the fixed and the adaptive paradigms. The CS pressure intensity for the adaptive paradigm was increasing during the four repeated assessments (P CPM-effect was higher using the fixed condition compared with the adaptive condition (P CPM paradigms using a fixed conditioning stimulus produced an increased CPM-effect compared with adaptive and increasing conditioning intensities.

  2. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    Science.gov (United States)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
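
    The bit allocation problem mentioned above can be illustrated with a simple greedy (marginal-analysis) allocator. This is a textbook sketch under an idealized per-coefficient distortion model sigma^2 * 2^(-2b), not the dissertation's own optimal algorithm:

```python
def greedy_bit_allocation(variances, total_bits):
    """Assign quantizer bits to transform coefficients one at a time,
    always giving the next bit to the coefficient whose distortion
    drops the most (distortion model: sigma^2 * 2^(-2b))."""
    bits = [0] * len(variances)
    for _ in range(total_bits):
        # Distortion reduction from granting coefficient i one more bit.
        gains = [v * (2 ** (-2 * b) - 2 ** (-2 * (b + 1)))
                 for v, b in zip(variances, bits)]
        i = gains.index(max(gains))
        bits[i] += 1
    return bits

# Example: high-variance (typically low-frequency) coefficients get more bits.
print(greedy_bit_allocation([16.0, 4.0, 1.0, 0.25], 8))
```

    The optimal allocators developed in the dissertation refine this idea with distortion-rate bounds rather than a fixed quantizer model.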

  3. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift keying (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling. Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  4. Overall simulation of a HTGR plant with the gas adapted MANTA code

    International Nuclear Information System (INIS)

    Emmanuel Jouet; Dominique Petit; Robert Martin

    2005-01-01

    Full text of publication follows: AREVA's subsidiary Framatome ANP is developing a Very High Temperature Reactor nuclear heat source that can be used for electricity generation as well as cogeneration, including hydrogen production. The selected product has an indirect cycle architecture which is easily adapted to all possible uses of the nuclear heat source. The coupling to the applications is implemented through an intermediate heat exchanger. The system code chosen to calculate the steady-state and transient behaviour of the plant is based on the MANTA code. The flexible and modular MANTA code, originally a system code for all non-LOCA PWR plant transients, has undergone new developments to simulate all the forced convection transients of a nuclear plant with a gas-cooled high temperature reactor, including specific core thermal-hydraulic and neutronic models, gas and water-steam turbomachinery, and the control structure. The gas-adapted MANTA version is now able to model a complete HTGR plant with a direct Brayton cycle as well as with indirect cycles. To validate these new developments, a real plant with a direct Brayton cycle was modelled with MANTA, and steady states and transients were compared with recorded thermal-hydraulic measurements. Finally, a comparison with the RELAP5 code was made for transient calculations of the AREVA indirect cycle HTR project plant. Moreover, to improve user-friendliness so that MANTA can serve as a system design and optimization tool as well as a plant simulation tool, a Man-Machine Interface is available. Acronyms: MANTA Modular Advanced Neutronic and Thermal hydraulic Analysis; HTGR High Temperature Gas-Cooled Reactor. (authors)

  5. Introducing an on-line adaptive procedure for prostate image guided intensity modulated proton therapy.

    Science.gov (United States)

    Zhang, M; Westerly, D C; Mackie, T R

    2011-08-07

    With on-line image guidance (IG), prostate shifts relative to the bony anatomy can be corrected by realigning the patient with respect to the treatment fields. In image guided intensity modulated proton therapy (IG-IMPT), because the proton range is more sensitive to the material it travels through, the realignment may introduce large dose variations. This effect is studied in this work, and an on-line adaptive procedure is proposed to restore the planned dose to the target. A 2D anthropomorphic phantom was constructed from a real prostate patient's CT image. Two-field laterally opposing spot 3D-modulation and 24-field full arc distal edge tracking (DET) plans were generated with a prescription of 70 Gy to the planning target volume. For the simulated delivery, we considered two types of procedures: the non-adaptive procedure and the on-line adaptive procedure. In the non-adaptive procedure, only patient realignment to match the prostate location in the planning CT was performed. In the on-line adaptive procedure, in addition to the patient realignment, the kinetic energy of each individual proton pencil beam was re-determined from the on-line CT image acquired after the realignment and subsequently used for delivery. Dose distributions were re-calculated for individual fractions for the different plans and delivery procedures. The results show that, without adaptation, both the 3D-modulation and the DET plans suffered degradation of the delivered dose, with large cold or hot spots in the prostate. The DET plan showed worse dose degradation than the 3D-modulation plan. The adaptive procedure effectively restored the planned dose distribution in the DET plan, with delivered prostate D(98%), D(50%) and D(2%) values deviating less than 1% from the prescription. In the 3D-modulation plan, in certain cases the adaptive procedure was not effective at reducing the delivered dose degradation and yielded results similar to the non-adaptive procedure.
In conclusion, based on this 2D phantom

  6. Evaluation of a software module for adaptive treatment planning and re-irradiation.

    Science.gov (United States)

    Richter, Anne; Weick, Stefan; Krieger, Thomas; Exner, Florian; Kellner, Sonja; Polat, Bülent; Flentje, Michael

    2017-12-28

    The aim of this work is to validate the Dynamic Planning Module in terms of usability and acceptance in the treatment planning workflow. The Dynamic Planning Module was used for decision making whether a plan adaptation was necessary within one course of radiation therapy. The Module was also used for patients scheduled for re-irradiation to estimate the dose in the pretreated region and calculate the accumulated dose to critical organs at risk. During one year, 370 patients were scheduled for plan adaptation or re-irradiation. All patient cases were classified according to their treated body region. For a sub-group of 20 patients treated with RT for lung cancer, the dosimetric effect of plan adaptation during the main treatment course was evaluated in detail. Changes in tumor volume, frequency of re-planning and the time interval between treatment start and plan adaptation were assessed. The Dynamic Planning Tool was used in 20% of treated patients per year for both approaches nearly equally (42% plan adaptation and 58% re-irradiation). Most cases were assessed for the thoracic body region (51%) followed by pelvis (21%) and head and neck cases (10%). The sub-group evaluation showed that unintended plan adaptation was performed in 38% of the scheduled cases. A median time span between first day of treatment and necessity of adaptation of 17 days (range 4-35 days) was observed. PTV changed by 12 ± 12% on average (maximum change 42%). PTV decreased in 18 of 20 cases due to tumor shrinkage and increased in 2 of 20 cases. Re-planning resulted in a reduction of the mean lung dose of the ipsilateral side in 15 of 20 cases. The experience of one year showed high acceptance of the Dynamic Planning Module in our department for both physicians and medical physicists. The re-planning can potentially reduce the accumulated dose to the organs at risk and ensure a better target volume coverage. In the re-irradiation situation, the Dynamic Planning Tool was used to

  7. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Directory of Open Access Journals (Sweden)

    Mohammad Abdur Razzaque

    2014-12-01

    Full Text Available Wireless body sensor networks (WBSNs for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS, in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network’s QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.

  8. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485

  9. Development of additional module to neutron-physic and thermal-hydraulic computer codes for coolant acoustical characteristics calculation

    Energy Technology Data Exchange (ETDEWEB)

    Proskuryakov, K.N.; Bogomazov, D.N.; Poliakov, N. [Moscow Power Engineering Institute (Technical University), Moscow (Russian Federation)

    2007-07-01

    A new special module for neutron-physics and thermal-hydraulics computer codes, dedicated to the calculation of coolant acoustical characteristics, has been developed. The Russian computer code Rainbow has been selected for joint use with the developed module. This code system provides the possibility of EFOCP (Eigen Frequencies of Oscillations of the Coolant Pressure) calculations in any coolant acoustical elements of the primary circuits of NPPs. EFOCP values have been calculated for transient and for stationary operation. The calculated results for nominal operation were compared with measured EFOCP values. For example, this comparison was provided for the system 'pressurizer + surge line' of a WWER-1000 reactor. The calculated result of 0.58 Hz practically coincides with the measured result (0.6 Hz). The EFOCP variations in transients are also shown. The presented results are intended to be useful for NPP vibration-acoustical certification. There are no serious difficulties in using this module with other computer codes.

  10. Development of additional module to neutron-physic and thermal-hydraulic computer codes for coolant acoustical characteristics calculation

    International Nuclear Information System (INIS)

    Proskuryakov, K.N.; Bogomazov, D.N.; Poliakov, N.

    2007-01-01

    A new special module for neutron-physics and thermal-hydraulics computer codes, dedicated to the calculation of coolant acoustical characteristics, has been developed. The Russian computer code Rainbow has been selected for joint use with the developed module. This code system provides the possibility of EFOCP (Eigen Frequencies of Oscillations of the Coolant Pressure) calculations in any coolant acoustical elements of the primary circuits of NPPs. EFOCP values have been calculated for transient and for stationary operation. The calculated results for nominal operation were compared with measured EFOCP values. For example, this comparison was provided for the system 'pressurizer + surge line' of a WWER-1000 reactor. The calculated result of 0.58 Hz practically coincides with the measured result (0.6 Hz). The EFOCP variations in transients are also shown. The presented results are intended to be useful for NPP vibration-acoustical certification. There are no serious difficulties in using this module with other computer codes.

  11. Development of a NSSS T/H Module for the YGN 1/2 NPP Simulator Using a Best-Estimate Code, RETRAN

    International Nuclear Information System (INIS)

    Seo, I. Y.; Lee, Y. K.; Jeun, G. D.; Suh, J. S.

    2005-01-01

    KEPRI (Korea Electric Power Research Institute) developed a realistic nuclear steam supply system thermal-hydraulic module, named the ARTS code, based on the best-estimate code RETRAN, for the improvement of the KNPEC (Korea Nuclear Plant Education Center) unit 2 full-scope simulator. In this work, we develop a nuclear steam supply system thermal-hydraulic module for the YGN 1/2 nuclear power plant simulator, drawing on the practical experience gained from the ARTS code development. The ARTS code was developed based on RETRAN, a best-estimate code developed by EPRI (Electric Power Research Institute) for various transient analyses of NPPs (nuclear power plants). Robustness and real-time calculation capability have been improved through simplifications, removal of discontinuities in the physical correlations of the RETRAN code, and other modifications. Its simulation scope has been extended by adding new calculation modules, such as a dedicated pressurizer relief tank model and a backup model. The supplements are implemented so that users cannot perceive the model change from the main ARTS module.

  12. Design of ACM system based on non-greedy punctured LDPC codes

    Science.gov (United States)

    Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng

    2017-08-01

    In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes were constructed by a non-greedy puncturing method, which shows good performance in the high code rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is reduced. Simulations show that the proposed ACM system obtains increasingly significant coding gain together with higher throughput.
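
    Rate switching in an ACM system of this kind can be sketched as a lookup over SNR thresholds. The thresholds, constellations, and rate set below are hypothetical placeholders; a real design derives the thresholds from the PER targets of each RC-LDPC rate/modulation pair:

```python
# Hypothetical switching table: (min SNR in dB, code rate, bits per symbol).
MODES = [
    (4.0, 2/3, 2),   # QPSK, rate 2/3 (most robust mode)
    (8.0, 3/4, 4),   # 16QAM, rate 3/4
    (12.0, 4/5, 4),  # 16QAM, rate 4/5
    (16.0, 5/6, 6),  # 64QAM, rate 5/6 (highest throughput)
]

def select_mode(snr_db: float):
    """Pick the highest-throughput mode whose SNR threshold is met;
    fall back to the most robust mode otherwise."""
    best = MODES[0]
    for mode in MODES:
        if snr_db >= mode[0]:
            best = mode
    return best

for snr in (5.0, 13.5, 20.0):
    thr, rate, bps = select_mode(snr)
    print(f"SNR {snr:4.1f} dB -> rate {rate:.2f}, {bps} bits/symbol")
```

    An incremental redundancy variant would additionally transmit extra punctured parity bits on retransmission instead of switching modes outright.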

  13. Cross-layer combining of adaptive modulation and truncated ARQ under cognitive radio resource requirements

    KAUST Repository

    Yang, Yuli; Ma, Hao; Aïssa, Sonia

    2012-01-01

    In addressing the issue of taking full advantage of the shared spectrum under imposed limitations in a cognitive radio (CR) network, we exploit a cross-layer design for the communications of secondary users (SUs), which combines adaptive modulation and coding (AMC) at the physical layer with truncated automatic repeat request (ARQ) protocol at the data link layer. To achieve high spectral efficiency (SE) while maintaining a target packet loss probability (PLP), switching among different transmission modes is performed to match the time-varying propagation conditions pertaining to the secondary link. Herein, by minimizing the SU's packet error rate (PER) with each transmission mode subject to the spectrum-sharing constraints, we obtain the optimal power allocation at the secondary transmitter (ST) and then derive the probability density function (pdf) of the received SNR at the secondary receiver (SR). Based on these statistics, the SU's packet loss rate and average SE are obtained in closed form, considering transmissions over block-fading channels with different distributions. Our results quantify the relation between the performance of a secondary link exploiting the cross-layer-designed adaptive transmission and the interference inflicted on the primary user (PU) in CR networks. © 1967-2012 IEEE.
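
    A standard piece of reasoning behind such cross-layer combining (not necessarily the exact formulation of this record) is that truncated ARQ with at most N_max retransmissions relaxes the packet error rate the physical layer must meet: a packet is lost only if all N_max + 1 independent attempts fail, so AMC can use more aggressive modes.

```python
def phy_per_target(plp_target: float, max_retx: int) -> float:
    """With truncated ARQ allowing max_retx retransmissions, a packet is
    lost only if all (max_retx + 1) attempts fail independently, so the
    per-attempt PER the physical layer must guarantee is relaxed to the
    (max_retx + 1)-th root of the target packet loss probability."""
    return plp_target ** (1.0 / (max_retx + 1))

# Target packet loss probability of 1e-4 at the link layer:
for n in (0, 1, 2):
    print(n, phy_per_target(1e-4, n))
```

    Allowing even one retransmission loosens the per-attempt PER from 1e-4 to 1e-2, which is what lets the secondary link run higher-order modes under the same QoS target.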

  14. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.

  15. Achievable Information Rates for Coded Modulation With Hard Decision Decoding for Coherent Fiber-Optic Systems

    Science.gov (United States)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi

    2017-12-01

    We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a "hard decision decoder" which, however, exploits soft information on the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as that of a soft decision decoder. In this paper, we instead analyze the AIRs for the standard hard decision decoder, commonly used in practice, where decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
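
    The flavor of such an AIR computation can be shown with a deliberately simplified model: if every bit level of an m-bit constellation is treated as an independent binary symmetric channel with crossover probability p (an idealization; in reality the bit levels of a QAM labeling have unequal reliabilities), the bit-wise hard-decision AIR is m(1 - h2(p)) bits per symbol:

```python
import math

def h2(p: float) -> float:
    """Binary entropy function."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def air_bitwise_hard(m: int, p: float) -> float:
    """AIR (bits/symbol) of a bit-wise hard-decision decoder when each of
    the m bit levels behaves as a binary symmetric channel with crossover
    probability p (simplifying assumption for illustration)."""
    return m * (1.0 - h2(p))

print(air_bitwise_hard(4, 0.01))  # 16QAM-like case with 1% bit error rate
```

    The paper's actual analysis accounts for the true discrete-input discrete-output channel induced by hard detection of the QAM constellation.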

  16. The Spanish national health care-associated infection surveillance network (INCLIMECC): data summary January 1997 through December 2006 adapted to the new National Healthcare Safety Network Procedure-associated module codes.

    Science.gov (United States)

    Pérez, Cristina Díaz-Agero; Rodela, Ana Robustillo; Monge Jodrá, Vincente

    2009-12-01

    In 1997, a national standardized surveillance system (designated INCLIMECC [Indicadores Clínicos de Mejora Continua de la Calidad]) was established in Spain for health care-associated infection (HAI) in surgery patients, based on the National Nosocomial Infection Surveillance (NNIS) system. In 2005, in its procedure-associated module, the National Healthcare Safety Network (NHSN) inherited the NNIS program for surveillance of HAI in surgery patients and reorganized all surgical procedures. INCLIMECC actively monitors all patients referred to the surgical ward of each participating hospital. We present a summary of the data collected from January 1997 to December 2006 adapted to the new NHSN procedures. Surgical site infection (SSI) rates are provided by operative procedure and NNIS risk index category. Further quality indicators reported are surgical complications, length of stay, antimicrobial prophylaxis, mortality, readmission because of infection or other complication, and revision surgery. Because the ICD-9-CM surgery procedure code is included in each patient's record, we were able to reorganize our database avoiding the loss of extensive information, as has occurred with other systems.

  17. Design and Analysis of Adaptive Message Coding on LDPC Decoder with Faulty Storage

    Directory of Open Access Journals (Sweden)

    Guangjun Ge

    2018-01-01

    Full Text Available Unreliable message storage severely degrades the performance of LDPC decoders. This paper discusses the impacts of message errors on LDPC decoders and schemes for improving their robustness. Firstly, we develop a discrete density evolution analysis for faulty LDPC decoders, which indicates that protecting the sign bits of messages is sufficient for finite-precision LDPC decoders. Secondly, we analyze the effects of quantization precision loss for static sign bit protection and propose an embedded dynamic coding scheme that adaptively employs the least significant bits (LSBs) to protect the sign bits. Thirdly, we give a construction of a Hamming product code for the adaptive coding and present low-complexity decoding algorithms. Theoretical analysis indicates that the proposed scheme outperforms the traditional triple modular redundancy (TMR) scheme in terms of both decoding threshold and residual errors, while Monte Carlo simulations show that the performance loss is less than 0.2 dB when the storage error probability varies from 10^-3 to 10^-4.
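
    The TMR baseline that the proposed Hamming-product-code scheme is compared against can be sketched in a few lines. This illustrates plain majority voting on a single stored sign bit, not the paper's adaptive coding:

```python
def tmr_store(bit: int) -> list[int]:
    """Store one sign bit as three copies (triple modular redundancy)."""
    return [bit, bit, bit]

def tmr_read(copies: list[int]) -> int:
    """Majority vote over the three stored copies; corrects any single
    copy flipped by a storage fault."""
    return 1 if sum(copies) >= 2 else 0

word = tmr_store(1)
word[0] ^= 1          # one copy corrupted by faulty memory
assert tmr_read(word) == 1
```

    TMR triples the storage cost per protected bit, which is why the record's adaptive reuse of least significant bits as parity is attractive.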

  18. DANTSYS: A diffusion accelerated neutral particle transport code system

    Energy Technology Data Exchange (ETDEWEB)

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O'Dell, R.D.; Walters, W.F.

    1995-06-01

    The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup with changes to accommodate the generalized spatial meshing.

  19. DANTSYS: A diffusion accelerated neutral particle transport code system

    International Nuclear Information System (INIS)

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O'Dell, R.D.; Walters, W.F.

    1995-06-01

    The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup with changes to accommodate the generalized spatial meshing.

  20. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females. © 2013 Elsevier Ltd. All rights reserved.

  1. Probability differently modulating the effects of reward and punishment on visuomotor adaptation.

    Science.gov (United States)

    Song, Yanlong; Smiley-Oyen, Ann L

    2017-12-01

Recent human motor learning studies revealed that punishment seemingly accelerated motor learning, whereas reward enhanced consolidation of motor memory. It is not evident how intrinsic properties of reward and punishment modulate their potentially dissociable effects on motor learning and motor memory, nor what causes the dissociation of these effects. By manipulating the probability of distribution, a critical property of reward and punishment, the present study demonstrated that probability modulated the effects of reward and punishment distinctly, both in adapting to a sudden visual rotation and in consolidating the adaptation memory. Specifically, two probabilities of monetary reward and punishment distribution, 50 and 100%, were applied while young adult participants adapted to a sudden visual rotation. Punishment and reward showed distinct effects on motor adaptation and motor memory. The group that received punishments in 100% of the adaptation trials adapted significantly faster than the other three groups, but the group that received rewards in 100% of the adaptation trials showed marked savings in re-adapting to the same rotation. In addition, the group that received punishments in a randomly selected 50% of the adaptation trials also showed savings in re-adapting to the same rotation. Differences in sensitivity to sensory prediction error or in the explicit processes induced by reward and punishment likely contribute to the distinct effects of reward and punishment.

  2. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., the relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency by using the higher-quality background as the reference, whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio of AVC (MPEG-4 Advanced Video Coding) high profile on surveillance videos, at the cost of only a slight increase in encoding complexity. Moreover, for foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
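The three-way block classification behind BMAP can be illustrated at toy scale. The block size, threshold, and fractions below are illustrative assumptions, not the paper's values; the labels mark which blocks would be candidates for the BRP (background reference) and BDP (background difference) predictions:

```python
import numpy as np

def classify_blocks(frame, background, block=16, thresh=20, frac=0.05):
    """Label each block 'bg', 'fg', or 'hybrid' from the fraction of pixels
    that differ noticeably from the modeled background (toy thresholds)."""
    h, w = frame.shape
    labels = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            diff = np.abs(frame[y:y+block, x:x+block].astype(int)
                          - background[y:y+block, x:x+block].astype(int))
            moving = np.mean(diff > thresh)      # fraction of "foreground" pixels
            if moving < frac:
                labels[(y, x)] = 'bg'            # BRP candidate
            elif moving > 1 - frac:
                labels[(y, x)] = 'fg'
            else:
                labels[(y, x)] = 'hybrid'        # BDP candidate
    return labels

bg = np.zeros((32, 32), np.uint8)        # modeled background
frame = bg.copy()
frame[0:16, 0:16] = 255                  # one fully changed block
labels = classify_blocks(frame, bg)
```

In a real encoder the background would come from a running model over input frames, and the decision would feed mode selection rather than a label dictionary.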

  3. CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    Science.gov (United States)

    Anninos, Peter; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Lau, Cheuk; Nemergut, Daniel

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge-Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.
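The strong-stability-preserving Runge-Kutta option mentioned in the abstract is, in its common third-order Shu-Osher form, three forward-Euler stages combined convexly. A minimal sketch of that integrator (an illustration of the scheme, not CosmosDG source code):

```python
def ssp_rk3_step(u, dt, rhs):
    """One third-order strong-stability-preserving Runge-Kutta step
    (Shu-Osher form): three forward-Euler stages, convexly combined."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

# Sanity check on u' = -u: ten steps of dt = 0.1 should track exp(-1).
u, dt = 1.0, 0.1
for _ in range(10):
    u = ssp_rk3_step(u, dt, lambda v: -v)
```

Because every stage is a convex combination of forward-Euler updates, the scheme inherits Euler's stability bound, which is what makes it attractive for DG discretizations with shocks.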

  4. CosmosDG: An hp -adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    Energy Technology Data Exchange (ETDEWEB)

    Anninos, Peter; Lau, Cheuk [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94550 (United States); Bryant, Colton [Department of Engineering Sciences and Applied Mathematics, Northwestern University, 2145 Sheridan Road, Evanston, Illinois, 60208 (United States); Fragile, P. Chris [Department of Physics and Astronomy, College of Charleston, 66 George Street, Charleston, SC 29424 (United States); Holgado, A. Miguel [Department of Astronomy and National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, Illinois, 61801 (United States); Nemergut, Daniel [Operations and Engineering Division, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  5. CosmosDG: An hp -adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    International Nuclear Information System (INIS)

    Anninos, Peter; Lau, Cheuk; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Nemergut, Daniel

    2017-01-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  6. Adaptive Multi-Layered Space-Time Block Coded Systems in Wireless Environments

    KAUST Repository

    Al-Ghadhban, Samir

    2014-12-23

© 2014, Springer Science+Business Media New York. Multi-layered space-time block coded systems (MLSTBC) strike a balance between spatial multiplexing and transmit diversity. In this paper, we analyze the block error rate performance of MLSTBC. In addition, we propose adaptive MLSTBC schemes that are capable of accommodating the channel signal-to-noise ratio variation of wireless systems by near-instantaneously adapting the uplink transmission configuration. The main results demonstrate that significant effective throughput improvements can be achieved while maintaining a certain target bit error rate.

  7. Adaptive Modulation with Best User Selection over Non-Identical Nakagami Fading Channels

    KAUST Repository

    Rao, Anlei

    2012-09-08

In this paper, we analyze the performance of adaptive modulation with single-cell multiuser scheduling over independent but not identically distributed (i.n.i.d.) Nakagami fading channels. Closed-form expressions are derived for the average channel capacity, spectral efficiency, and bit error rate (BER) for both constant-power variable-rate and variable-power variable-rate uncoded M-ary quadrature amplitude modulation (M-QAM) schemes. We also study the impact of time delay on the average BER of adaptive M-QAM. Selected numerical results show that multiuser diversity brings considerably better performance even over i.n.i.d. fading environments.
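The constant-power variable-rate scheme can be sketched as threshold-based constellation switching. The sketch below inverts the common approximate M-QAM BER expression BER ≈ 0.2·exp(−1.5·γ/(M−1)) to get switching thresholds; the target BER and mode set are illustrative choices, not the paper's:

```python
import math

def mqam_threshold(M, target_ber):
    """Linear SNR above which square M-QAM meets the target BER, from the
    common approximation BER ~ 0.2*exp(-1.5*snr/(M - 1)) (approximate)."""
    return -(2.0 / 3.0) * (M - 1) * math.log(5.0 * target_ber)

def pick_mode(snr, target_ber=1e-3, modes=(4, 16, 64, 256)):
    """Constant-power variable-rate selection: the largest constellation
    whose switching threshold the instantaneous SNR clears (0 = outage)."""
    bits = 0
    for M in modes:
        if snr >= mqam_threshold(M, target_ber):
            bits = int(math.log2(M))
    return bits

bits = pick_mode(10 ** (20.0 / 10))   # 20 dB SNR under these assumptions
```

Averaging `pick_mode` over the fading distribution of the scheduled (best) user is what yields the closed-form spectral efficiency expressions the abstract refers to.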

  8. CHAR and BURNMAC - burnup modules of the AUS neutronics code system

    International Nuclear Information System (INIS)

    Robinson, G.S.

    1986-03-01

In the AUS neutronics code system, the burnup module CHAR solves the nuclide depletion equations by an analytic technique in a number of spatial zones. CHAR is usually used as one component of a lattice burnup calculation but contains features which also make it suitable for some global burnup calculations. BURNMAC is a simple accounting module based on the assumption that cross sections for a reactor zone depend only on irradiation. BURNMAC is used as one component of a global calculation in which burnup is achieved by interpolation in the cross sections produced from a previous lattice calculation.
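The analytic depletion technique CHAR relies on has its simplest instance in the two-member Bateman decay chain, whose closed form is sketched below (an illustration of the idea, not CHAR's implementation; the decay constants are arbitrary):

```python
import math

def bateman_two(n1_0, lam1, lam2, t):
    """Closed-form solution of a two-member decay chain N1 -> N2 -> ...,
    the simplest case of the analytic depletion solutions used by burnup
    modules (assumes lam1 != lam2)."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t)
                                        - math.exp(-lam2 * t))
    return n1, n2

n1, n2 = bateman_two(1.0, 0.5, 0.1, 2.0)
```

Longer chains generalize this as a sum of exponentials, which is why an analytic treatment stays cheap compared with numerically integrating the stiff depletion ODEs.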

  9. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    International Nuclear Information System (INIS)

    1997-03-01

This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved Monte Carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE

  10. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved Monte Carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.

  11. Using individual differences to test the role of temporal and place cues in coding frequency modulation.

    Science.gov (United States)

    Whiteford, Kelly L; Oxenham, Andrew J

    2015-11-01

    The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding.

  12. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

This paper proposes a framework for combined source-channel coding for a power- and bandwidth-constrained noisy channel. The framework is applied to progressive image transmission using constant-envelope M-ary phase-shift keying (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M=2^k). Then, it is extended to include coded M-PSK modulation using trellis-coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.
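The uncoded baseline in such studies is straightforward to reproduce by Monte Carlo. The sketch below simulates uncoded BPSK (the M=2 case of M-PSK) over AWGN; the seed and bit count are arbitrary choices of ours, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def bpsk_ber(ebn0_db, nbits=200_000):
    """Monte Carlo bit error rate of uncoded BPSK over AWGN; theory is
    Q(sqrt(2*Eb/N0)), the uncoded reference curve for coded M-PSK gains."""
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, nbits)
    x = 2.0 * bits - 1.0                             # map {0,1} -> {-1,+1}
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * ebn0)), nbits)
    return np.mean((x + noise > 0).astype(int) != bits)

ber = bpsk_ber(6.0)   # theory predicts roughly 2.4e-3 at 6 dB
```

Running the same loop with Gray-coded larger constellations and subtracting the coded curves is how the 3.1 to 5.2 dB coding gains quoted above would be measured.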

  13. Applications of the ARGUS code in accelerator physics

    International Nuclear Information System (INIS)

    Petillo, J.J.; Mankofsky, A.; Krueger, W.A.; Kostas, C.; Mondelli, A.A.; Drobot, A.T.

    1993-01-01

ARGUS is a three-dimensional, electromagnetic, particle-in-cell (PIC) simulation code that is being distributed to U.S. accelerator laboratories in a collaboration between SAIC and the Los Alamos Accelerator Code Group. It uses a modular architecture that allows multiple physics modules to share common utilities for grid and structure input, memory management, disk I/O, and diagnostics. Physics modules are in place for electrostatic and electromagnetic field solutions, frequency-domain (eigenvalue) solutions, time-dependent PIC, and steady-state PIC simulations. All of the modules are implemented with a domain-decomposition architecture that allows large problems to be broken up into pieces that fit in core and that facilitates the adaptation of ARGUS for parallel processing. ARGUS operates on either Cray or workstation platforms, and a MOTIF-based user interface is available for X-windows terminals. Applications of ARGUS in accelerator physics and design are described in this paper.

  14. Block-based wavelet transform coding of mammograms with region-adaptive quantization

    Science.gov (United States)

    Moon, Nam Su; Song, Jun S.; Kwon, Musik; Kim, JongHyo; Lee, ChoongWoong

    1998-06-01

Combining segmentation with a lossy compression scheme is an efficient way to achieve both a high compression ratio and information preservation. Microcalcification in mammograms is one of the most significant signs of early-stage breast cancer. Therefore, detecting and segmenting microcalcifications during coding enables us to preserve them well by allocating more bits to them than to other regions. Segmentation of microcalcifications is performed both in the spatial domain and in the wavelet transform domain. A peak-error-controllable quantization step, designed off-line, is suitable for medical image compression. For region-adaptive quantization, block-based wavelet transform coding is adopted, and different peak-error-constrained quantizers are applied to blocks according to the segmentation result. In terms of preserving microcalcifications, the proposed coding scheme shows better performance than JPEG.
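Region-adaptive quantization itself is easy to illustrate: a uniform quantizer bounds reconstruction error by half its step, so assigning ROI blocks a finer step directly bounds their peak error. The block layout and step sizes below are illustrative assumptions, not the paper's design (which quantizes wavelet coefficients rather than raw pixels):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(block, step):
    """Uniform mid-tread quantizer; reconstruction error is at most step/2."""
    return np.round(block / step) * step

# Toy region-adaptive coding: the block flagged as microcalcification (ROI)
# gets a fine step, ordinary tissue a coarse one.
image = rng.uniform(0, 255, (16, 16))
roi_mask = np.zeros((2, 2), bool)
roi_mask[0, 0] = True                 # pretend block (0,0) holds a lesion

recon = np.empty_like(image)
for by in range(2):
    for bx in range(2):
        blk = image[by*8:(by+1)*8, bx*8:(bx+1)*8]
        step = 2.0 if roi_mask[by, bx] else 16.0
        recon[by*8:(by+1)*8, bx*8:(bx+1)*8] = quantize(blk, step)

roi_err = np.max(np.abs(image[:8, :8] - recon[:8, :8]))
bg_err = np.max(np.abs(image[8:, 8:] - recon[8:, 8:]))
```

The same step/2 bound is what makes the off-line-designed, peak-error-controllable quantizers mentioned above attractive for diagnostic imagery.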

  15. Adaptive eLearning modules for cytopathology education: A review and approach.

    Science.gov (United States)

    Samulski, T Danielle; La, Teresa; Wu, Roseann I

    2016-11-01

Clinical training imposes time and resource constraints on educators and learners, making it difficult to provide and absorb meaningful instruction. Additionally, innovative and personalized education has become an expectation of adult learners. Fortunately, the development of web-based educational tools provides a possible solution to these challenges. Within this review, we introduce the utility of adaptive eLearning platforms in pathology education. In addition to a review of the current literature, we provide the reader with a suggested approach for module creation, as well as a critical assessment of an available platform, based on our experience in creating adaptive eLearning modules for teaching basic concepts in gynecologic cytopathology. Diagn. Cytopathol. 2016;44:944-951. © 2016 Wiley Periodicals, Inc.

  16. Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping

    NARCIS (Netherlands)

    Á. Piedrafita (Álvaro); J.M. Renes (Joseph)

    2017-01-01

We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve

  17. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
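The syndrome-based encoding at the heart of such schemes can be shown at toy scale: the encoder transmits only H·x (mod 2), and the decoder searches the matching coset for the word nearest its side information. Brute force stands in here for the paper's sum-product decoder, and the small H is an arbitrary example of ours:

```python
import numpy as np
from itertools import product

# 3 syndrome bits describe a 4-bit source: a 4:3 compression ratio.
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])

def syndrome(x):
    return H @ x % 2

def decode(s, side_info):
    """Exhaustive coset search: among all words with syndrome s, return the
    one closest (in Hamming distance) to the decoder's side information."""
    best, best_dist = None, len(side_info) + 1
    for cand in product((0, 1), repeat=H.shape[1]):
        cand = np.array(cand)
        if np.array_equal(syndrome(cand), s):
            d = int(np.sum(cand != side_info))
            if d < best_dist:
                best, best_dist = cand, d
    return best

x = np.array([1, 0, 1, 1])   # source seen only by the encoder
y = np.array([1, 0, 0, 1])   # correlated side information (one bit flipped)
x_hat = decode(syndrome(x), y)
```

The sum-product algorithm replaces this exhaustive search at realistic block lengths, and the "doping bits" described above help it also estimate the hidden correlation parameters.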

  18. Development and preliminary verification of 2-D transport module of radiation shielding code ARES

    International Nuclear Information System (INIS)

    Zhang Penghe; Chen Yixue; Zhang Bin; Zang Qiyong; Yuan Longjun; Chen Mengteng

    2013-01-01

The 2-D transport module of the radiation shielding code ARES is a two-dimensional neutron transport and radiation shielding code. The theoretical model is based on the first-order steady-state neutron transport equation, adopting the discrete ordinates method to discretize the direction variables. A set of differential equations is thereby obtained and solved with the source iteration method. The 2-D transport module of ARES is capable of calculating k_eff and fixed-source problems with isotropic or anisotropic scattering in x-y geometry. The theoretical model is briefly introduced and a series of benchmark problems is verified in this paper. Compared with the results given by the benchmark, the maximum relative deviation of k_eff is 0.09% and the average relative deviation of flux density is about 0.60% in the BWR cells benchmark problem. As for the fixed-source problems with isotropic and anisotropic scattering, the results of the 2-D transport module of ARES agree with DORT very well. These numerical results preliminarily demonstrate that the development of the 2-D transport module of ARES is correct and that it is able to provide high-precision results. (authors)
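Source iteration, the solver named in the abstract, is easiest to see in an infinite homogeneous medium, where each pass shrinks the error by the scattering ratio c = σ_s/σ_t (which is why convergence slows as c approaches 1). A minimal illustration with made-up one-group constants:

```python
# In an infinite homogeneous medium the transport balance reduces to
# sigma_t*phi = sigma_s*phi + q, with exact answer phi = q/(sigma_t - sigma_s).
sigma_t, sigma_s, q = 1.0, 0.5, 1.0   # illustrative one-group constants
phi = 0.0
for it in range(50):
    phi_new = (q + sigma_s * phi) / sigma_t   # lag the scattering source
    if abs(phi_new - phi) < 1e-10:
        break
    phi = phi_new
# Each pass multiplies the error by c = sigma_s/sigma_t = 0.5 here.
```

In the actual 2-D module, the update inside the loop is a full discrete-ordinates sweep over the spatial mesh rather than a scalar division, but the fixed-point structure and its c-dependent convergence rate are the same.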

  19. Cross-layer combining of power control and adaptive modulation with truncated ARQ for cognitive radios

    Institute of Scientific and Technical Information of China (English)

    CHENG Shi-lun; YANG Zhen

    2008-01-01

    To maximize throughput and to satisfy users' requirements in cognitive radios, a cross-layer optimization problem combining adaptive modulation and power control at the physical layer and truncated automatic repeat request at the medium access control layer is proposed. Simulation results show the combination of power control, adaptive modulation, and truncated automatic repeat request can regulate transmitter powers and increase the total throughput effectively.
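The benefit of truncating ARQ at a fixed retransmission count is captured by one identity: a packet is lost only if every attempt fails. A small sketch of the resulting physical-layer error budget (illustrative numbers, not the paper's model):

```python
def packet_loss_with_arq(per, max_retx):
    """With truncated ARQ, a packet is lost only if the first transmission
    and all max_retx retransmissions fail: PLP = PER**(max_retx + 1)."""
    return per ** (max_retx + 1)

def required_per(target_plp, max_retx):
    """The PER the physical layer must deliver to meet a packet-loss target;
    ARQ relaxes it, letting adaptive modulation pick denser constellations."""
    return target_plp ** (1.0 / (max_retx + 1))

plp = packet_loss_with_arq(0.1, 2)     # 10% PER, up to 2 retransmissions
per_budget = required_per(1e-3, 2)     # PER budget for a 1e-3 loss target
```

This relaxation of the per-transmission error constraint is exactly what a cross-layer design trades against the extra delay and power of retransmissions.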

  20. Modeling of severe accident sequences with the new modules CESAR and DIVA of ASTEC system code

    International Nuclear Information System (INIS)

    Pignet, Sophie; Guillard, Gaetan; Barre, Francois; Repetto, Georges

    2003-01-01

Systems of computer codes, so-called 'integral' codes, are being developed to simulate the scenario of a hypothetical severe accident in a light water reactor, from the initial event until the possible radiological release of fission products out of the containment. They couple the predominant physical phenomena that occur in the different reactor zones and simulate the actuation of safety systems by procedures and by operators. To allow the study of a great number of scenarios, a compromise must be found between precision of results and calculation time: one day of accident time should take less than one day of real time to simulate on a PC. This search for a compromise is a real challenge for such integral codes. The development of the ASTEC integral code was initiated jointly by IRSN and GRS as an international reference code. The latest version 1.0 of ASTEC, including the new modules CESAR and DIVA, which model the behaviour of the reactor cooling system and the core degradation, is presented here. Validation of the modules and one plant application are described.

  1. Cross-layer combining of adaptive modulation and truncated ARQ under cognitive radio resource requirements

    KAUST Repository

    Yang, Yuli

    2012-11-01

In addressing the issue of taking full advantage of the shared spectrum under imposed limitations in a cognitive radio (CR) network, we exploit a cross-layer design for the communications of secondary users (SUs), which combines adaptive modulation and coding (AMC) at the physical layer with truncated automatic repeat request (ARQ) protocol at the data link layer. To achieve high spectral efficiency (SE) while maintaining a target packet loss probability (PLP), switching among different transmission modes is performed to match the time-varying propagation conditions pertaining to the secondary link. Herein, by minimizing the SU's packet error rate (PER) with each transmission mode subject to the spectrum-sharing constraints, we obtain the optimal power allocation at the secondary transmitter (ST) and then derive the probability density function (pdf) of the received SNR at the secondary receiver (SR). Based on these statistics, the SU's packet loss rate and average SE are obtained in closed form, considering transmissions over block-fading channels with different distributions. Our results quantify the relation between the performance of a secondary link exploiting the cross-layer-designed adaptive transmission and the interference inflicted on the primary user (PU) in CR networks. © 1967-2012 IEEE.

  2. A generalized interface module for the coupling of spatial kinetics and thermal-hydraulics codes

    Energy Technology Data Exchange (ETDEWEB)

    Barber, D.A.; Miller, R.M.; Joo, H.G.; Downar, T.J. [Purdue Univ., West Lafayette, IN (United States). Dept. of Nuclear Engineering; Wang, W. [SCIENTECH, Inc., Rockville, MD (United States); Mousseau, V.A.; Ebert, D.D. [Nuclear Regulatory Commission, Washington, DC (United States). Office of Nuclear Regulatory Research

    1999-03-01

A generalized interface module has been developed for the coupling of any thermal-hydraulics code to any spatial kinetics code. The coupling scheme was designed and implemented with emphasis placed on maximizing flexibility while minimizing modifications to the respective codes. In this design, the thermal-hydraulics, general interface, and spatial kinetics codes function independently and utilize the Parallel Virtual Machine software to manage cross-process communication. Using this interface, the USNRC version of the 3D neutron kinetics code, PARCS, has been coupled to the USNRC system analysis codes RELAP5 and TRAC-M. RELAP5/PARCS assessment results are presented for two NEACRP rod ejection benchmark problems and an NEA/OECD main steam line break benchmark problem. The assessment of TRAC-M/PARCS has only recently been initiated; nonetheless, the capabilities of the coupled code are presented for a typical PWR system/core model.

  3. A generalized interface module for the coupling of spatial kinetics and thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Barber, D.A.; Miller, R.M.; Joo, H.G.; Downar, T.J.; Mousseau, V.A.; Ebert, D.D.

    1999-01-01

A generalized interface module has been developed for the coupling of any thermal-hydraulics code to any spatial kinetics code. The coupling scheme was designed and implemented with emphasis placed on maximizing flexibility while minimizing modifications to the respective codes. In this design, the thermal-hydraulics, general interface, and spatial kinetics codes function independently and utilize the Parallel Virtual Machine software to manage cross-process communication. Using this interface, the USNRC version of the 3D neutron kinetics code, PARCS, has been coupled to the USNRC system analysis codes RELAP5 and TRAC-M. RELAP5/PARCS assessment results are presented for two NEACRP rod ejection benchmark problems and an NEA/OECD main steam line break benchmark problem. The assessment of TRAC-M/PARCS has only recently been initiated; nonetheless, the capabilities of the coupled code are presented for a typical PWR system/core model.

  4. Validation of one-dimensional module of MARS 2.1 computer code by comparison with the RELAP5/MOD3.3 developmental assessment results

    International Nuclear Information System (INIS)

    Lee, Y. J.; Bae, S. W.; Chung, B. D.

    2003-02-01

This report records the results of the code validation for the one-dimensional module of the MARS 2.1 thermal-hydraulics analysis code by means of result-comparison with the RELAP5/MOD3.3 computer code. For the validation calculations, simulations of the RELAP5 code development assessment problems, which consist of 22 simulation problems in 3 categories, have been selected. The results of the 3 categories of simulations demonstrate that the one-dimensional module of the MARS 2.1 code and the RELAP5/MOD3.3 code are essentially the same code. This is expected, as the two codes have basically the same set of field equations, constitutive equations, and main thermal-hydraulic models. The results suggest that the high level of code validity of RELAP5/MOD3.3 can be directly applied to the MARS one-dimensional module.

  5. Automatic Modulation Classification of LFM and Polyphase-coded Radar Signals

    Directory of Open Access Journals (Sweden)

    S. B. S. Hanbali

    2017-12-01

There are several techniques for detecting and classifying low-probability-of-intercept radar signals, such as the Wigner distribution, the Choi-Williams distribution, and the time-frequency rate distribution, but these distributions require high SNR. To overcome this problem, we propose a new technique for detecting and classifying linear frequency modulation (LFM) signals and polyphase-coded signals using the optimum fractional Fourier transform at low SNR. The theoretical analysis and simulation experiments demonstrate the validity and efficiency of the proposed method.

  6. Underwater wireless optical MIMO system with spatial modulation and adaptive power allocation

    Science.gov (United States)

    Huang, Aiping; Tao, Linwei; Niu, Yilong

    2018-04-01

In this paper, we investigate the performance of an underwater wireless optical multiple-input multiple-output communication system combining spatial modulation (SM-UOMIMO) with flag dual amplitude pulse position modulation (FDAPPM). Channel impulse responses for coastal and harbor ocean water links are obtained by Monte Carlo (MC) simulation. Moreover, we obtain closed-form and upper-bound average bit error rate (BER) expressions for receiver diversity, including optimal combining, equal gain combining, and selection combining. A novel adaptive power allocation algorithm (PAA) is proposed to minimize the average BER of the SM-UOMIMO system. Our numerical results show an excellent match between the analytical results and the simulations, confirming the accuracy of the derived expressions. Furthermore, the results show that the adaptive PAA clearly outperforms conventional fixed-factor PAA and equal PAA. A multiple-input single-output system with adaptive PAA achieves even better BER performance than the MIMO one, while effectively reducing receiver complexity.

  7. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    International Nuclear Information System (INIS)

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance demonstrate the potential of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  8. Normalized value coding explains dynamic adaptation in the human valuation process.

    Science.gov (United States)

    Khaw, Mel W; Glimcher, Paul W; Louie, Kenway

    2017-11-28

    The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
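The divisive-normalization account of this temporal adaptation can be illustrated with a minimal sketch. The functional form (current value divided by a constant plus the recent average) is the standard divisive normalization model; the constant `sigma` and the use of a plain mean over a history window are simplifying assumptions:

```python
# Minimal sketch of divisively normalized valuation: the subjective value
# of the current item is scaled down by the recent value history, so
# valuations vary inversely with the average value of recently seen items.

def normalized_value(v, history, sigma=1.0):
    """Divisive normalization: v / (sigma + mean of recent values).
    sigma is a hypothetical semi-saturation constant."""
    denom = sigma + (sum(history) / len(history) if history else 0.0)
    return v / denom
```

With this rule, the same item is valued lower after a run of high-value items than after a run of low-value ones, reproducing the inverse history dependence the study reports.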

  9. A Brain Computer Interface for Robust Wheelchair Control Application Based on Pseudorandom Code Modulated Visual Evoked Potential

    DEFF Research Database (Denmark)

    Mohebbi, Ali; Engelsholm, Signe K.D.; Puthusserypady, Sadasivan

    2015-01-01

    In this pilot study, a novel and minimalistic Brain Computer Interface (BCI) based wheelchair control application was developed. The system was based on pseudorandom code modulated Visual Evoked Potentials (c-VEPs). The visual stimuli in the scheme were generated based on the Gold code...

  10. Development of CAD-Based Geometry Processing Module for a Monte Carlo Particle Transport Analysis Code

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Kwark, Min Su; Shim, Hyung Jin

    2012-01-01

The Monte Carlo (MC) particle transport analysis of a complex system such as a research reactor, accelerator, or fusion facility may require accurate modeling of the complicated geometry. Its manual modeling by using the text interface of a MC code to define the geometrical objects is tedious, lengthy, and error-prone. This problem can be overcome by taking advantage of the modeling capability of a computer-aided design (CAD) system. There have been two kinds of approaches to developing MC code systems that utilize CAD data: external format conversion and CAD-kernel-embedded MC simulation. The first approach includes several interfacing programs such as McCAD, MCAM, and GEOMIT, which were developed to automatically convert CAD data into MCNP geometry input data. This approach makes the most of the existing MC codes without any modifications, but implies latent data inconsistency due to the difference of the geometry modeling systems. In the second approach, a MC code utilizes the CAD data for direct particle tracking or for conversion to an internal data structure of constructive solid geometry (CSG) and/or boundary representation (B-rep) modeling with the help of a CAD kernel. MCNP-BRL and OiNC have demonstrated their capabilities for CAD-based MC simulations. Recently we have developed a CAD-based geometry processing module for MC particle simulation by using the OpenCASCADE (OCC) library. In the developed module, CAD data can be used for particle tracking through primitive CAD surfaces (hereafter, CAD-based tracking) or for internal conversion to the CSG data structure. In this paper, the performances of the text-based model, the CAD-based tracking, and the internal CSG conversion are compared by using an in-house MC code, McSIM, equipped with the developed CAD-based geometry processing module.

  11. Study on a low complexity adaptive modulation algorithm in OFDM-ROF system with sub-carrier grouping technology

    Science.gov (United States)

    Liu, Chong-xin; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Tian, Qing-hua; Tian, Feng; Wang, Yong-jun; Rao, Lan; Mao, Yaya; Li, Deng-ao

    2018-01-01

During the last decade, the orthogonal frequency division multiplexing radio-over-fiber (OFDM-ROF) system with adaptive modulation technology has attracted great interest due to its capability of raising spectral efficiency dramatically, reducing the effects of the fiber link or wireless channel, and improving communication quality. In this study, based on a theoretical analysis of nonlinear distortion and frequency-selective fading on the transmitted signal, a low-complexity adaptive modulation algorithm is proposed in combination with sub-carrier grouping technology. This algorithm achieves the optimal performance of the system by calculating the average combined signal-to-noise ratio of each group and dynamically adjusting the modulation format according to the preset threshold and the user's requirements. At the same time, the algorithm takes the sub-carrier group as the smallest unit in the initial bit allocation and the subsequent bit adjustment. Hence, the algorithm complexity is only 1/M (where M is the number of sub-carriers in each group) of that of the Fischer algorithm, which is much smaller than many classic adaptive modulation algorithms, such as the Hughes-Hartogs and Chow algorithms, in line with the development of green, high-speed communication. Simulation results show that the performance of the OFDM-ROF system with the improved algorithm is much better than that without adaptive modulation, with the former achieving a BER 10 to 100 times lower than the latter as the SNR increases. This low-complexity adaptive modulation algorithm is thus extremely useful for the OFDM-ROF system.
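The grouping idea can be sketched as follows; the group size, SNR thresholds, and modulation set are hypothetical, and the actual algorithm's combined-SNR calculation and bit-adjustment rules are richer than this:

```python
# Sketch of sub-carrier grouping: average the per-carrier SNR within each
# group of M carriers and assign one modulation format per group, so bit
# allocation runs over N/M groups instead of N individual sub-carriers.

def group_snrs(snr_per_carrier, M):
    """Average SNR (dB, treated as a plain mean here) over consecutive
    groups of M sub-carriers."""
    return [sum(snr_per_carrier[i:i + M]) / M
            for i in range(0, len(snr_per_carrier), M)]

def assign_format(avg_snr_db):
    """Hypothetical thresholds: 64-QAM (6 bits), 16-QAM (4), QPSK (2)."""
    for thresh, bits in [(18, 6), (12, 4), (6, 2)]:
        if avg_snr_db >= thresh:
            return bits
    return 0  # group left unloaded
```

Because the per-group decision replaces per-carrier decisions, the search cost drops by the factor M the abstract cites.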

  12. Joint switched transmit diversity and adaptive modulation in spectrum sharing systems

    KAUST Repository

    Qaraqe, Khalid A.

    2011-01-01

    Under the scenario of an underlay cognitive radio network, we propose in this paper an adaptive scheme using switched transmit diversity and adaptive modulation in order to minimize the average number of switched branches at the secondary transmitter while increasing the capacity of the secondary link. The proposed switching efficient scheme (SES) uses the scan and wait (SWC) combining technique where a transmission occurs only when a branch with an acceptable performance is found, otherwise data is buffered. In our scheme, the modulation constellation size and the used transmit branch are determined to achieve the highest spectral efficiency with a minimum processing power, given the fading channel conditions, the required error rate performance, and a peak interference constraint to the primary receiver. Selected numerical examples show that the SES scheme minimizes the average number of switched branches for the average and the high secondary signal-to-noise ratio range. This improvement comes at the expense of a small delay introduced by the SWC technique. For reference, we also compare the performance of the SES scheme to the selection diversity scheme (SDS) where the best branch verifying the modulation mode and the interference constraint is always selected. © 2011 ICST.
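The scan-and-wait selection rule can be sketched minimally as below; the threshold values are hypothetical, and the real scheme evaluates error-rate performance and a peak interference constraint at the primary receiver rather than raw numbers:

```python
# Sketch of SWC-style branch selection: scan transmit branches in order
# until one meets both the output-SNR threshold and the interference
# constraint; if none qualifies, buffer the data for this slot.

def ses_select(branch_snrs, branch_interf, snr_min, interf_max):
    """Return the index of the first acceptable branch, or None to wait."""
    for i, (snr, interf) in enumerate(zip(branch_snrs, branch_interf)):
        if snr >= snr_min and interf <= interf_max:
            return i  # first acceptable branch: stop scanning
    return None       # no acceptable branch: buffer and retry later
```

Stopping at the first acceptable branch, rather than always picking the best one as in selection diversity, is what reduces the average number of branch switches at the cost of the buffering delay the abstract mentions.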

  13. A study on climatic adaptation of dipteran mitochondrial protein coding genes

    Directory of Open Access Journals (Sweden)

    Debajyoti Kabiraj

    2017-10-01

Diptera, the true flies, are frequently found in nature, and their habitat spans the whole world, including Antarctica and the polar regions. The number of documented dipteran species is quite high, thought to represent 14% of all animals on Earth [1]. Most studies in Diptera have focused on taxa of economic and medical importance, such as the fruit flies Ceratitis capitata and Bactrocera spp. (Tephritidae), which are serious agricultural pests; the blowflies (Calliphoridae) and oestrid flies (Oestridae), which can cause myiasis; the Anopheles mosquitoes (Culicidae), which are the vectors of malaria; and the leaf-miners (Agromyzidae), vegetable and horticultural pests [2]. The insect mitochondrion, a remnant of an alpha-proteobacterium, carries 13 protein-coding genes, 22 tRNAs, and 2 rRNAs, and performs the simultaneous functions of energy production and thermoregulation of the cell through a bi-genomic system; thus, differing adaptability under different climatic conditions might have been compensated by complementary changes in both genomes [3,4]. In this study, we collected the complete mitochondrial genomes and occurrence data of one hundred thirteen dipteran insects from different databases and a literature survey. Our understanding of the genetic basis of climatic adaptation in Diptera is limited to basic information on the occurrence locations of those species and the mitogenetic factors underlying changes in conspicuous phenotypes. To examine this hypothesis, we performed nucleotide substitution analysis for the 13 protein-coding genes of mitochondrial DNA, individually and combined, using different software for monophyletic as well as paraphyletic groups of dipteran species. Moreover, we calculated the codon adaptation index for all dipteran mitochondrial protein-coding genes. Following this work, we classified our sample organisms according to their location data from GBIF (https

  14. Applying Hamming Code to Memory System of Safety Grade PLC (POSAFE-Q) Processor Module

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taehee; Hwang, Sungjae; Park, Gangmin [POSCO Nuclear Technology, Seoul (Korea, Republic of)

    2013-05-15

If errors such as inverted bits occur in the memory, instructions and data will be corrupted. As a result, the PLC may execute the wrong instructions or refer to the wrong data. A Hamming code can be considered as a solution for mitigating this misoperation. In this paper, we apply a Hamming code to the existing safety-grade PLC (POSAFE-Q) and inspect whether it is suitable for the memory system of the processor module. Inspection data are collected and will serve as a reference for improving the soundness of the PLC. In our future work, we will try to reduce the time delay caused by the Hamming calculation, including CPLD optimization and memory architecture or parts alteration. In addition to these Hamming-code-based works, we will explore other methodologies, such as mirroring, for the soundness of the safety-grade PLC. Hamming-code-based works can correct single-bit errors, but are limited against multi-bit errors.
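The error-correcting mechanism being applied can be illustrated with a toy Hamming(7,4) encoder/decoder; the actual word width and memory layout of the POSAFE-Q module are not specified here, so this is a sketch of the principle only:

```python
# Toy Hamming(7,4) single-error-correcting code: 4 data bits are
# protected by 3 parity bits, and the syndrome directly names the
# position of a single flipped bit.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    """c: 7-bit codeword; corrects up to one flipped bit, returns data."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based error position, 0 = clean
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1          # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]
```

Any single inverted bit in the stored word is corrected on read; two or more flips exceed the code's correction capability, which is exactly the multi-bit limitation the abstract notes.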

  15. Applying Hamming Code to Memory System of Safety Grade PLC (POSAFE-Q) Processor Module

    International Nuclear Information System (INIS)

    Kim, Taehee; Hwang, Sungjae; Park, Gangmin

    2013-01-01

If errors such as inverted bits occur in the memory, instructions and data will be corrupted. As a result, the PLC may execute the wrong instructions or refer to the wrong data. A Hamming code can be considered as a solution for mitigating this misoperation. In this paper, we apply a Hamming code to the existing safety-grade PLC (POSAFE-Q) and inspect whether it is suitable for the memory system of the processor module. Inspection data are collected and will serve as a reference for improving the soundness of the PLC. In our future work, we will try to reduce the time delay caused by the Hamming calculation, including CPLD optimization and memory architecture or parts alteration. In addition to these Hamming-code-based works, we will explore other methodologies, such as mirroring, for the soundness of the safety-grade PLC. Hamming-code-based works can correct single-bit errors, but are limited against multi-bit errors.

  16. Link adaptation performance evaluation for a MIMO-OFDM physical layer in a realistic outdoor environment

    OpenAIRE

    Han, C; Armour, SMD; Doufexi, A; Ng, KH; McGeehan, JP

    2006-01-01

This paper presents a downlink performance analysis of a link adaptation (LA) algorithm applied to a MIMO-OFDM Physical Layer (PHY), a popular candidate for future-generation cellular communication systems. The new LA algorithm attempts to maximize throughput; adaptation among the various modulation and coding schemes, in combination with both space-time block codes (STBC) and spatial multiplexing (SM), is based on knowledge of the SNR and the determinant of the H matrix; the parameters which are fou...

  17. WE-AB-204-11: Development of a Nuclear Medicine Dosimetry Module for the GPU-Based Monte Carlo Code ARCHER

    Energy Technology Data Exchange (ETDEWEB)

    Liu, T; Lin, H; Xu, X [Rensselaer Polytechnic Institute, Troy, NY (United States); Stabin, M [Vanderbilt Univ Medical Ctr, Nashville, TN (United States)

    2015-06-15

Purpose: To develop a nuclear medicine dosimetry module for the GPU-based Monte Carlo code ARCHER. Methods: We have developed a nuclear medicine dosimetry module for the fast Monte Carlo code ARCHER. The coupled electron-photon Monte Carlo transport kernel included in ARCHER is built upon the Dose Planning Method code (DPM). The developed module manages the radioactive decay simulation by consecutively tracking several types of radiation on a per-disintegration basis using the statistical sampling method. Optimization techniques such as persistent threads and prefetching are studied and implemented. The developed module is verified against the VIDA code, which is based on the Geant4 toolkit and has previously been verified against OLINDA/EXM. A voxelized geometry is used in the preliminary test: a sphere made of ICRP soft tissue is surrounded by a box filled with water. A uniform activity distribution of I-131 is assumed in the sphere. Results: The self-absorption dose factors (mGy/(MBq·s)) of the sphere with varying diameters are calculated by ARCHER and VIDA, respectively. ARCHER's results agree with VIDA's, which were obtained from a previous publication. VIDA takes hours of CPU time to finish the computation, while ARCHER takes 4.31 seconds for the 12.4-cm uniform-activity sphere case. For a fairer CPU-GPU comparison, more effort will be made to eliminate the algorithmic differences. Conclusion: The coupled electron-photon Monte Carlo code ARCHER has been extended to radioactive decay simulation for nuclear medicine dosimetry. The developed code exhibits good performance in our preliminary test. The GPU-based Monte Carlo code is developed with grant support from the National Institute of Biomedical Imaging and Bioengineering through an R01 grant (R01EB015478).

  18. Processing module operating methods, processing modules, and communications systems

    Science.gov (United States)

    McCown, Steven Harvey; Derr, Kurt W.; Moore, Troy

    2014-09-09

    A processing module operating method includes using a processing module physically connected to a wireless communications device, requesting that the wireless communications device retrieve encrypted code from a web site and receiving the encrypted code from the wireless communications device. The wireless communications device is unable to decrypt the encrypted code. The method further includes using the processing module, decrypting the encrypted code, executing the decrypted code, and preventing the wireless communications device from accessing the decrypted code. Another processing module operating method includes using a processing module physically connected to a host device, executing an application within the processing module, allowing the application to exchange user interaction data communicated using a user interface of the host device with the host device, and allowing the application to use the host device as a communications device for exchanging information with a remote device distinct from the host device.

  19. Blind Recognition of Binary BCH Codes for Cognitive Radios

    Directory of Open Access Journals (Sweden)

    Jing Zhou

    2016-01-01

A novel algorithm for blind recognition of Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed to solve the problem of Adaptive Coding and Modulation (ACM) in cognitive radio systems. The recognition algorithm is based on soft-decision situations. The code length is first estimated by comparing the log-likelihood ratios (LLRs) of the syndromes, which are obtained according to the minimum binary parity-check matrices of different primitive polynomials. After that, by comparing the LLRs of different minimal polynomials, the code roots and generator polynomial are reconstructed. Compared with some previous approaches, our algorithm yields better performance even at very low signal-to-noise ratios (SNRs), with lower computational complexity. Simulation results show the efficiency of the proposed algorithm.
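The syndrome-comparison idea can be illustrated in a simplified, hard-decision form. The sketch below substitutes a tiny Hamming(7,4) code for BCH and plain violation counts for the paper's log-likelihood ratios; only the principle (the correct candidate parity-check matrix makes syndromes vanish) carries over:

```python
import itertools
import numpy as np

# Candidate parity-check matrices: the true one and a mismatched one.
H_true = np.array([[1, 0, 1, 0, 1, 0, 1],
                   [0, 1, 1, 0, 0, 1, 1],
                   [0, 0, 0, 1, 1, 1, 1]])   # Hamming(7,4) check matrix
H_wrong = H_true.copy()
H_wrong[0, 0] ^= 1                           # a wrong candidate

# All 16 codewords of the true code serve as noiseless "received" words.
words = [np.array(v) for v in itertools.product([0, 1], repeat=7)
         if not (H_true @ v % 2).any()]

def violations(H, received):
    """Count received words whose syndrome under H is nonzero; the
    candidate with the fewest violations is declared the code in use."""
    return sum(1 for r in received if (H @ r % 2).any())
```

On noisy soft decisions, comparing syndrome LLRs instead of hard counts makes the same discrimination work at much lower SNR, which is the paper's contribution.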

  20. Adaptation of Zerotrees Using Signed Binary Digit Representations for 3D Image Coding

    Directory of Open Access Journals (Sweden)

    Mailhes Corinne

    2007-01-01

Zerotrees of wavelet coefficients have shown good adaptability for the compression of three-dimensional images. EZW, the original algorithm using zerotrees, shows good performance and has been successfully adapted to 3D image compression. This paper focuses on the adaptation of EZW to the compression of hyperspectral images. The subordinate pass is suppressed to remove the necessity of keeping the significant pixels in memory. To compensate for the loss due to this removal, signed binary digit representations are used to increase the efficiency of the zerotrees. Contextual arithmetic coding with very limited contexts is also used. Finally, we show that this simplified version of 3D-EZW performs almost as well as the original one.
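One common signed binary digit representation with the zero-density property such schemes exploit is the non-adjacent form (NAF), which rewrites an integer with digits {-1, 0, 1} so that no two nonzero digits are adjacent. Whether the paper uses exactly this representation is not stated here, so treat this as an illustrative sketch:

```python
# Non-adjacent form: digits in {-1, 0, 1}, least significant first,
# with no two adjacent nonzero digits. The guaranteed runs of zeros are
# what make zerotree-style significance coding more efficient.

def naf(n):
    """NAF digits of a non-negative integer, least significant first."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n & 3)   # n % 4 == 1 -> +1, n % 4 == 3 -> -1
            n -= d            # n becomes divisible by 4
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits
```

For example, 7 = -1 + 8 is written as (-1, 0, 0, 1) instead of binary (1, 1, 1), trading a longer string for a higher density of zeros.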

  1. Adaptive Modulation for DFIG and STATCOM With High-Voltage Direct Current Transmission.

    Science.gov (United States)

    Tang, Yufei; He, Haibo; Ni, Zhen; Wen, Jinyu; Huang, Tingwen

    2016-08-01

This paper develops an adaptive modulation approach for power system control based on the approximate/adaptive dynamic programming method, namely, goal representation heuristic dynamic programming (GrHDP). In particular, we focus on the fault recovery problem of a doubly fed induction generator (DFIG)-based wind farm and a static synchronous compensator (STATCOM) with high-voltage direct current (HVDC) transmission. In this design, the online GrHDP-based controller provides three adaptive supplementary control signals to the DFIG controller, STATCOM controller, and HVDC rectifier controller, respectively. The mechanism is to observe the system states and their derivatives and then provide supplementary control to the plant according to the utility function. With the GrHDP design, the controller can adaptively develop an internal goal representation signal according to the observed power system states, thereby achieving more effective learning and modulation. Our control approach is validated on a wind power integrated benchmark system with two areas connected by HVDC transmission lines. Compared with classical direct HDP and proportional-integral control, our GrHDP approach demonstrates improved transient stability under system faults. Moreover, experiments under different system operating conditions with signal transmission delays are also carried out to further verify the effectiveness and robustness of the proposed approach.

  2. Adapting the coping in deliberation (CODE) framework: a multi-method approach in the context of familial ovarian cancer risk management.

    Science.gov (United States)

    Witt, Jana; Elwyn, Glyn; Wood, Fiona; Rogers, Mark T; Menon, Usha; Brain, Kate

    2014-11-01

    To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. We performed a systematic literature search to identify issues important to women during deliberations about RRSO. Three focus groups with patients (most were pre-menopausal and untested for genetic mutations) and 11 interviews with health professionals were conducted to determine which issues mattered in the UK context. Data were used to adapt the generic CODE framework. The literature search yielded 49 relevant studies, which highlighted various issues and coping options important during deliberations, including mutation status, risks of surgery, family obligations, physician recommendation, peer support and reliable information sources. Consultations with UK stakeholders confirmed most of these factors as pertinent influences on deliberations. Questions in the generic framework were adapted to reflect the issues and coping options identified. The generic CODE framework was readily adapted to a specific preference-sensitive medical decision, showing that deliberations and coping are linked during deliberations about RRSO. Adapted versions of the CODE framework may be used to develop tailored decision support methods and materials in order to improve patient-centred care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. Mistranslation: from adaptations to applications.

    Science.gov (United States)

    Hoffman, Kyle S; O'Donoghue, Patrick; Brandl, Christopher J

    2017-11-01

    The conservation of the genetic code indicates that there was a single origin, but like all genetic material, the cell's interpretation of the code is subject to evolutionary pressure. Single nucleotide variations in tRNA sequences can modulate codon assignments by altering codon-anticodon pairing or tRNA charging. Either can increase translation errors and even change the code. The frozen accident hypothesis argued that changes to the code would destabilize the proteome and reduce fitness. In studies of model organisms, mistranslation often acts as an adaptive response. These studies reveal evolutionary conserved mechanisms to maintain proteostasis even during high rates of mistranslation. This review discusses the evolutionary basis of altered genetic codes, how mistranslation is identified, and how deviations to the genetic code are exploited. We revisit early discoveries of genetic code deviations and provide examples of adaptive mistranslation events in nature. Lastly, we highlight innovations in synthetic biology to expand the genetic code. The genetic code is still evolving. Mistranslation increases proteomic diversity that enables cells to survive stress conditions or suppress a deleterious allele. Genetic code variants have been identified by genome and metagenome sequence analyses, suppressor genetics, and biochemical characterization. Understanding the mechanisms of translation and genetic code deviations enables the design of new codes to produce novel proteins. Engineering the translation machinery and expanding the genetic code to incorporate non-canonical amino acids are valuable tools in synthetic biology that are impacting biomedical research. This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments" Guest Editor: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Analytical evaluation of adaptive-modulation-based opportunistic cognitive radio in nakagami-m fading channels

    KAUST Repository

    Chen, Yunfei; Alouini, Mohamed-Slim; Tang, Liang; Khan, Fahdahmed

    2012-01-01

    The performance of adaptive modulation for cognitive radio with opportunistic access is analyzed by considering the effects of spectrum sensing, primary user (PU) traffic, and time delay for Nakagami- m fading channels. Both the adaptive continuous rate scheme and the adaptive discrete rate scheme are considered. Numerical examples are presented to quantify the effects of spectrum sensing, PU traffic, and time delay for different system parameters. © 1967-2012 IEEE.

  5. Analytical evaluation of adaptive-modulation-based opportunistic cognitive radio in nakagami-m fading channels

    KAUST Repository

    Chen, Yunfei

    2012-09-01

    The performance of adaptive modulation for cognitive radio with opportunistic access is analyzed by considering the effects of spectrum sensing, primary user (PU) traffic, and time delay for Nakagami- m fading channels. Both the adaptive continuous rate scheme and the adaptive discrete rate scheme are considered. Numerical examples are presented to quantify the effects of spectrum sensing, PU traffic, and time delay for different system parameters. © 1967-2012 IEEE.

  6. Multiuser Diversity with Adaptive Modulation in Non-Identically Distributed Nakagami Fading Environments

    KAUST Repository

    Rao, Anlei; Alouini, Mohamed-Slim

    2012-01-01

    In this paper, we analyze the performance of adaptive modulation with single-cell multiuser scheduling over independent but not identical distributed (i.n.i.d.) Nakagami fading channels. Closed-form expressions are derived for the average channel

  7. Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning

    International Nuclear Information System (INIS)

    Pin, Francois G.

    2002-01-01

Robotic tasks are typically defined in Task Space (e.g., the 3-D World), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exists a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically "batches of one". Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraint, etc. occurs. The objective of our project is to develop a "generic code" to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of controls, kinematics configuration (e.g., new tools, added module). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is useable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables, can adapt to real time changes in number and

  8. Constructing a two bands optical code-division multiple-access network of bipolar optical access codecs using Walsh-coded liquid crystal modulators

    Science.gov (United States)

    Yen, Chih-Ta; Huang, Jen-Fa; Chih, Ping-En

    2014-08-01

    We propose and experimentally demonstrate a two-band optical code-division multiple-access (OCDMA) network based on bipolar Walsh-coded liquid-crystal modulators (LCMs) driven by green-light and red-light lasers. System performance depends on constructing a decoder that implements a true bipolar correlation using only unipolar signals and intensity detection in each band. We took advantage of the phase-delay characteristics of LCMs to construct a prototype optical coder/decoder (codec). Matched and unmatched Walsh signature codes were evaluated to detect correlations among multiuser data in the access network. Using LCMs, the red and green laser light sources were spectrally encoded and the summed light was complementary-decoded. Favorable contrast between auto- and cross-correlations indicates that binary information symbols can be properly recovered using a balanced photodetector.
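
    The core trick described above — recovering a bipolar Walsh correlation from purely unipolar intensity measurements via balanced detection — can be sketched numerically. This is an illustrative simulation under idealized assumptions (ideal chips, no noise), not the authors' experimental setup:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of 2).
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
codes = hadamard(N)[1:]                         # drop the all-ones row: zero-mean codes
data = np.array([+1, -1, +1, -1, -1, +1, +1])   # one bit per user

# Each user transmits a unipolar (0/1) intensity chip pattern:
# bit +1 lights the chips where its Walsh code is +1; bit -1 lights the complement.
intensity = sum((1 + d * w) / 2 for d, w in zip(data, codes))

def decode(S, w):
    # Balanced detection: correlate the summed intensity with the unipolar code
    # and with its complement, then subtract -> the true bipolar correlation S @ w.
    return S @ ((1 + w) / 2) - S @ ((1 - w) / 2)

recovered = np.sign([decode(intensity, w) for w in codes])
```

    Because the Walsh rows (excluding the all-ones row) are zero-mean and mutually orthogonal, each user's balanced-detector output is exactly ±N/2, so all bits separate cleanly despite the summed multiuser intensity.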

  9. Joint adaptive modulation and diversity combining with feedback error compensation

    KAUST Repository

    Choi, Seyeong; Hong-Chuan, Yang; Alouini, Mohamed-Slim; Qaraqe, Khalid A.

    2009-01-01

    This letter investigates the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of error-free feedback channels. We also propose to utilize adaptive diversity to compensate for the performance degradation due to feedback error. We accurately quantify the performance of the joint AMDC scheme in the presence of feedback error, in terms of the average number of combined paths, the average spectral efficiency, and the average bit error rate. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. It is observed that the proposed compensation strategy can offer considerable error performance improvement with little loss in processing power and spectral efficiency in comparison with the no compensation case. Copyright © 2009 IEEE.
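
    The trade-off the letter quantifies can be illustrated with a minimal Monte Carlo sketch. The switching thresholds and the symmetric feedback-corruption model below are hypothetical placeholders, not the letter's analytical framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical switching thresholds (dB) and bits/symbol per mode
# (BPSK, QPSK, 16-QAM, 64-QAM) -- illustrative values only.
thresholds = np.array([5.0, 10.0, 15.0, 20.0])
bits = np.array([1, 2, 4, 6])

def select_mode(snr_db):
    ok = np.nonzero(snr_db >= thresholds)[0]
    return ok[-1] if ok.size else -1            # -1: outage, no transmission

# Rayleigh fading: the SNR is exponentially distributed (average 10 dB here).
snrs = 10 * np.log10(rng.exponential(scale=10.0, size=20_000))
modes = np.array([select_mode(s) for s in snrs])

p_err = 0.05                                    # probability a feedback message is corrupted
corrupt = rng.random(snrs.size) < p_err
wrong = rng.integers(0, len(bits), size=snrs.size)
used = np.where(corrupt, wrong, modes)          # transmitter obeys (possibly wrong) feedback

# A frame succeeds only if the commanded mode is no faster than the channel supports.
ase_ideal = bits[modes[modes >= 0]].sum() / snrs.size
ok = (used >= 0) & (used <= modes)
ase_with_err = bits[used[ok]].sum() / snrs.size
```

    Corrupted feedback either commands a mode the channel cannot support (the frame is lost) or a slower mode than necessary (bits are wasted), so the average spectral efficiency drops below the error-free case — the degradation the proposed adaptive-combining compensation targets.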

  10. Joint adaptive modulation and diversity combining with feedback error compensation

    KAUST Repository

    Choi, Seyeong

    2009-11-01

    This letter investigates the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of error-free feedback channels. We also propose to utilize adaptive diversity to compensate for the performance degradation due to feedback error. We accurately quantify the performance of the joint AMDC scheme in the presence of feedback error, in terms of the average number of combined paths, the average spectral efficiency, and the average bit error rate. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. It is observed that the proposed compensation strategy can offer considerable error performance improvement with little loss in processing power and spectral efficiency in comparison with the no compensation case. Copyright © 2009 IEEE.

  11. Validation of One-Dimensional Module of MARS-KS1.2 Computer Code By Comparison with the RELAP5/MOD3.3/patch3 Developmental Assessment Results

    International Nuclear Information System (INIS)

    Bae, S. W.; Chung, B. D.

    2010-07-01

    This report records the results of code validation for the one-dimensional module of the MARS-KS thermal-hydraulics analysis code, performed by comparing its results with those of the RELAP5/MOD3.3 computer code. For the validation calculations, simulations of the RELAP5 Code Developmental Assessment Problems, which consist of 22 simulation problems in 3 categories, were selected. The results of the 3 categories of simulations demonstrate that the one-dimensional module of the MARS code and the RELAP5/MOD3.3 code are essentially the same code. This is expected, as the two codes share basically the same set of field equations, constitutive equations, and main thermal-hydraulic models. The result suggests that the high level of code validity of RELAP5/MOD3.3 can be directly applied to the MARS one-dimensional module.

  12. Development and pilot testing of an online module for ethics education based on the Nigerian National Code for Health Research Ethics

    Science.gov (United States)

    2013-01-01

    Background The formulation and implementation of national ethical regulations to protect research participants is fundamental to ethical conduct of research. Ethics education and capacity are inadequate in developing African countries. This study was designed to develop a module for online training in research ethics based on the Nigerian National Code of Health Research Ethics and assess its ease of use and reliability among biomedical researchers in Nigeria. Methodology This was a three-phased evaluation study. Phase one involved development of an online training module based on the Nigerian Code of Health Research Ethics (NCHRE) and uploading it to the Collaborative Institutional Training Initiative (CITI) website while the second phase entailed the evaluation of the module for comprehensibility, readability and ease of use by 45 Nigerian biomedical researchers. The third phase involved modification and re-evaluation of the module by 30 Nigerian biomedical researchers and determination of test-retest reliability of the module using Cronbach’s alpha. Results The online module was easily accessible and comprehensible to 95% of study participants. There were significant differences in the pretest and posttest scores of study participants during the evaluation of the online module (p = 0.001), with correlation coefficients of 0.9 and 0.8 for the pretest and posttest scores, respectively. The module also demonstrated excellent test-retest reliability and internal consistency, as shown by Cronbach’s alpha coefficients of 0.92 and 0.84 for the pretest and posttest, respectively. Conclusion The module based on the Nigerian Code was developed, tested and made available online as a valuable tool for training in culturally and societally relevant ethical principles to orient national and international biomedical researchers working in Nigeria. It would complement other general research ethics and Good Clinical Practice modules. Participants suggested that awareness of the

  13. Development and pilot testing of an online module for ethics education based on the Nigerian National Code for Health Research Ethics

    Directory of Open Access Journals (Sweden)

    Ogunrin Olubunmi A

    2013-01-01

    Full Text Available Abstract Background The formulation and implementation of national ethical regulations to protect research participants is fundamental to ethical conduct of research. Ethics education and capacity are inadequate in developing African countries. This study was designed to develop a module for online training in research ethics based on the Nigerian National Code of Health Research Ethics and assess its ease of use and reliability among biomedical researchers in Nigeria. Methodology This was a three-phased evaluation study. Phase one involved development of an online training module based on the Nigerian Code of Health Research Ethics (NCHRE) and uploading it to the Collaborative Institutional Training Initiative (CITI) website while the second phase entailed the evaluation of the module for comprehensibility, readability and ease of use by 45 Nigerian biomedical researchers. The third phase involved modification and re-evaluation of the module by 30 Nigerian biomedical researchers and determination of test-retest reliability of the module using Cronbach’s alpha. Results The online module was easily accessible and comprehensible to 95% of study participants. There were significant differences in the pretest and posttest scores of study participants during the evaluation of the online module (p = 0.001), with correlation coefficients of 0.9 and 0.8 for the pretest and posttest scores, respectively. The module also demonstrated excellent test-retest reliability and internal consistency, as shown by Cronbach’s alpha coefficients of 0.92 and 0.84 for the pretest and posttest, respectively. Conclusion The module based on the Nigerian Code was developed, tested and made available online as a valuable tool for training in culturally and societally relevant ethical principles to orient national and international biomedical researchers working in Nigeria. It would complement other general research ethics and Good Clinical Practice modules. Participants

  14. Development and pilot testing of an online module for ethics education based on the Nigerian National Code for Health Research Ethics.

    Science.gov (United States)

    Ogunrin, Olubunmi A; Ogundiran, Temidayo O; Adebamowo, Clement

    2013-01-02

    The formulation and implementation of national ethical regulations to protect research participants is fundamental to ethical conduct of research. Ethics education and capacity are inadequate in developing African countries. This study was designed to develop a module for online training in research ethics based on the Nigerian National Code of Health Research Ethics and assess its ease of use and reliability among biomedical researchers in Nigeria. This was a three-phased evaluation study. Phase one involved development of an online training module based on the Nigerian Code of Health Research Ethics (NCHRE) and uploading it to the Collaborative Institutional Training Initiative (CITI) website while the second phase entailed the evaluation of the module for comprehensibility, readability and ease of use by 45 Nigerian biomedical researchers. The third phase involved modification and re-evaluation of the module by 30 Nigerian biomedical researchers and determination of test-retest reliability of the module using Cronbach's alpha. The online module was easily accessible and comprehensible to 95% of study participants. There were significant differences in the pretest and posttest scores of study participants during the evaluation of the online module (p = 0.001), with correlation coefficients of 0.9 and 0.8 for the pretest and posttest scores, respectively. The module also demonstrated excellent test-retest reliability and internal consistency, as shown by Cronbach's alpha coefficients of 0.92 and 0.84 for the pretest and posttest, respectively. The module based on the Nigerian Code was developed, tested and made available online as a valuable tool for training in culturally and societally relevant ethical principles to orient national and international biomedical researchers working in Nigeria. It would complement other general research ethics and Good Clinical Practice modules. Participants suggested that awareness of the online module should be increased through seminars

  15. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Control modules -- Volume 1, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Landers, N.F.; Petrie, L.M.; Knight, J.R. [Oak Ridge National Lab., TN (United States)] [and others]

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the documentation of the data libraries and subroutine libraries.

  16. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Control modules -- Volume 1, Revision 4

    International Nuclear Information System (INIS)

    Landers, N.F.; Petrie, L.M.; Knight, J.R.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. This manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for the functional module documentation, and Volume 3--for the documentation of the data libraries and subroutine libraries.

  17. Information-Dispersion-Entropy-Based Blind Recognition of Binary BCH Codes in Soft Decision Situations

    Directory of Open Access Journals (Sweden)

    Yimeng Zhang

    2013-05-01

    Full Text Available A method for blind recognition of the coding parameters of binary Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed in this paper. We consider an intelligent communication receiver that can blindly recognize the coding parameters of the received data stream. The only prior knowledge is that the stream is encoded using binary BCH codes; the coding parameters themselves are unknown. The problem arises in the context of non-cooperative communications or adaptive coding and modulation (ACM) for cognitive radio networks. The recognition process includes two major procedures: code length estimation and generator polynomial reconstruction. A hard-decision method was proposed in earlier literature. In this paper we propose a recognition approach for soft-decision situations with binary phase-shift keying (BPSK) modulation and additive white Gaussian noise (AWGN) channels. The code length is estimated by maximizing the root information dispersion entropy function, and we then search for the code roots to reconstruct the primitive and generator polynomials. By utilizing the soft output of the channel, the recognition performance is improved, and simulations show the efficiency of the proposed algorithm.
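
    The generator-polynomial reconstruction step can be illustrated with a hard-decision toy example (the paper's soft-decision entropy method is more involved): encode words with the (7,4) cyclic code generated by g(x) = x³ + x + 1, then score every candidate degree-3 generator by how often it divides the observed words.

```python
import random

def gf2_mul(a, b):
    # Multiply GF(2) polynomials represented as ints (bit i <-> x^i).
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_rem(dividend, divisor):
    # Remainder of GF(2) polynomial division.
    dl = divisor.bit_length()
    while dividend.bit_length() >= dl:
        dividend ^= divisor << (dividend.bit_length() - dl)
    return dividend

random.seed(1)
g_true = 0b1011                     # x^3 + x + 1 generates the (7,4) cyclic code
words = [gf2_mul(random.randrange(1, 16), g_true) for _ in range(300)]

# Every cyclic codeword is a multiple of g(x): the true generator divides all
# observed words, while wrong candidates divide only a small fraction of them.
candidates = [0b1001 | (m << 1) for m in range(4)]   # degree-3 polys with constant term 1
scores = {c: sum(gf2_rem(w, c) == 0 for w in words) / len(words) for c in candidates}
best = max(scores, key=scores.get)
```

    In the soft-decision setting of the paper, the same divisibility structure is exploited statistically through channel reliabilities rather than through exact zero remainders.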

  18. Evidence of translation efficiency adaptation of the coding regions of the bacteriophage lambda.

    Science.gov (United States)

    Goz, Eli; Mioduser, Oriah; Diament, Alon; Tuller, Tamir

    2017-08-01

    Deciphering the way gene expression regulatory aspects are encoded in viral genomes is a challenging mission with ramifications for all biomedical disciplines. Here, we aimed to understand how evolution shapes bacteriophage lambda genes by performing a high-resolution analysis of ribosomal profiling data and of the gene-expression-related synonymous/silent information encoded in bacteriophage coding regions. We demonstrated evidence of selection for distinct compositions of synonymous codons in early and late viral genes, related to the adaptation of translation efficiency to different bacteriophage developmental stages. Specifically, we showed that the evolution of viral coding regions is driven, among other factors, by selection for codons with higher decoding rates; during the initial/progressive stages of infection the decoding rates in early/late genes were found to be superior to those in late/early genes, respectively. Moreover, we argued that selection for translation efficiency can be partially explained by adaptation to the Escherichia coli tRNA pool and the fact that this pool can change during the bacteriophage life cycle. An analysis of additional aspects related to the expression of viral genes, such as mRNA folding and more complex/longer regulatory signals in the coding regions, is also reported. The reported conclusions are likely to be relevant to additional viruses as well. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.
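
    One standard ingredient of such codon-composition analyses is Relative Synonymous Codon Usage (RSCU): a codon's observed count divided by the count expected if all synonymous codons for its amino acid were used equally. A toy computation on a made-up fragment, restricted to the glycine family (this is a generic illustration of the metric, not the authors' pipeline):

```python
from collections import Counter

GLY = ("GGT", "GGC", "GGA", "GGG")   # the four synonymous glycine codons (DNA alphabet)

def rscu(seq, family):
    # RSCU = observed count / (total family count / family size).
    # Values > 1 mark preferred codons, < 1 avoided ones.
    codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
    counts = Counter(c for c in codons if c in family)
    total = sum(counts.values())
    return {c: len(family) * counts[c] / total for c in family} if total else {}

toy_gene = "GGTGGTGGTGGCAAAGGA"      # hypothetical fragment: 3x GGT, 1x GGC, 1x GGA
vals = rscu(toy_gene, GLY)
```

    Comparing such usage profiles between early and late genes, weighted by codon decoding rates, is the kind of signal the study extracts at genome scale.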

  19. On network coding and modulation mapping for three-phase bidirectional relaying

    KAUST Repository

    Chang, Ronald Y.; Lin, Sian Jheng; Chung, Wei-Ho

    2015-01-01

    © 2015 IEEE. In this paper, we consider the network coding (NC) enabled three-phase protocol for information exchange between two users in a wireless two-way (bidirectional) relay network. Modulo-based (nonbinary) and XOR-based (binary) NC schemes are considered as information mixture schemes at the relay while all transmissions adopt pulse amplitude modulation (PAM). We first obtain the optimal constellation mapping at the relay that maximizes the decoding performance at the users for each NC scheme. Then, we compare the two NC schemes, each in conjunction with the optimal constellation mapping at the relay, in different conditions. Our results demonstrate that, in the low SNR regime, binary NC outperforms nonbinary NC with 4-PAM, while they have mixed performance with 8-PAM. This observation applies to quadrature amplitude modulation (QAM) composed of two parallel PAMs.
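
    The two mixing rules compared above can be written down directly for 4-PAM symbol labels. Both are invertible at the users (information-lossless), so the performance gap studied in the paper comes entirely from how the mixed labels map onto constellation points. A minimal sketch:

```python
# User symbols a, b are 2-bit labels {0,1,2,3}, each carried by one 4-PAM symbol.

def relay_modulo(a, b):            # nonbinary NC: modulo-4 addition of labels
    return (a + b) % 4

def relay_xor(a, b):               # binary NC: bitwise XOR of the 2-bit labels
    return a ^ b

def user_decode_modulo(mix, own):  # a user subtracts its own label back out...
    return (mix - own) % 4

def user_decode_xor(mix, own):     # ...or XORs it back out
    return mix ^ own

roundtrip_ok = all(
    user_decode_modulo(relay_modulo(a, b), a) == b
    and user_decode_xor(relay_xor(a, b), a) == b
    for a in range(4) for b in range(4)
)
```

    Since both mixtures are bijective in either argument, the decoding-performance difference hinges on which relay label ends up at which PAM amplitude — the constellation-mapping optimization that the paper carries out.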

  20. On network coding and modulation mapping for three-phase bidirectional relaying

    KAUST Repository

    Chang, Ronald Y.

    2015-12-03

    © 2015 IEEE. In this paper, we consider the network coding (NC) enabled three-phase protocol for information exchange between two users in a wireless two-way (bidirectional) relay network. Modulo-based (nonbinary) and XOR-based (binary) NC schemes are considered as information mixture schemes at the relay while all transmissions adopt pulse amplitude modulation (PAM). We first obtain the optimal constellation mapping at the relay that maximizes the decoding performance at the users for each NC scheme. Then, we compare the two NC schemes, each in conjunction with the optimal constellation mapping at the relay, in different conditions. Our results demonstrate that, in the low SNR regime, binary NC outperforms nonbinary NC with 4-PAM, while they have mixed performance with 8-PAM. This observation applies to quadrature amplitude modulation (QAM) composed of two parallel PAMs.

  1. Adaptive Iterative Soft-Input Soft-Output Parallel Decision-Feedback Detectors for Asynchronous Coded DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Zhang Wei

    2005-01-01

    Full Text Available The optimum and many suboptimum iterative soft-input soft-output (SISO) multiuser detectors require a priori information about the multiuser system, such as the users' transmitted signature waveforms, relative delays, as well as the channel impulse response. In this paper, we employ adaptive algorithms in the SISO multiuser detector in order to avoid the need for this a priori information. First, we derive the optimum SISO parallel decision-feedback detector for asynchronous coded DS-CDMA systems. Then, we propose two adaptive versions of this SISO detector, which are based on the normalized least mean square (NLMS) and recursive least squares (RLS) algorithms. Our SISO adaptive detectors effectively exploit the a priori information of coded symbols, whose soft inputs are obtained from a bank of single-user decoders. Furthermore, we consider how to select practical finite feedforward and feedback filter lengths to obtain a good tradeoff between the performance and computational complexity of the receiver.

  2. Improvement and evaluation of debris coolability analysis module in severe accident analysis code SAMPSON using LIVE experiment

    International Nuclear Information System (INIS)

    Wei, Hongyang; Erkan, Nejdet; Okamoto, Koji; Gaus-Liu, Xiaoyang; Miassoedov, Alexei

    2017-01-01

    Highlights: • Debris coolability analysis module in SAMPSON is validated. • Model for heat transfer between melt pool and pressure vessel wall is modified. • Modified debris coolability analysis module is found to give reasonable results. - Abstract: The purpose of this work is to validate the debris coolability analysis (DCA) module in the severe accident analysis code SAMPSON by simulating the first steady stage of the LIVE-L4 test. The DCA module is used for debris cooling in the lower plenum and for predicting the safety margin of present reactor vessels during a severe accident. In the DCA module, the spreading and cooling of molten debris, gap cooling, heating of a three-dimensional reactor vessel, and natural convection heat transfer are all considered. The LIVE experiment is designed to investigate the formation and stability of melt pools in a reactor pressure vessel (RPV). By comparing the simulation results and experimental data in terms of the average melt pool temperature and the heat flux along the vessel wall, a bug is found in the code and the model for the heat transfer between the melt pool and RPV wall is modified. Based on the Asfia–Dhir and Jahn–Reineke correlations, the modified version of the DCA module is found to give reasonable results for the average melt pool temperature, crust thickness in the steady state, and crust growth rate.

  3. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    Science.gov (United States)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humbold University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point tn a new step-size hn+1 = h(tn; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property is carried out under a discontinuous step-size policy that does not allow changing the size of the step if the step-size ratio between two consecutive steps is close to unity. This theory is applied to obtain global error estimations in a few problems that have been solved with
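
    The step-size mechanism under discussion can be sketched with a step-doubling error estimate on classical RK4: under a simple error-per-step control, the global error at the endpoint shrinks with the tolerance δ, which is the proportionality behaviour the paper analyses (illustrative code, not one of the codes studied):

```python
import math

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(f, t0, y0, t_end, tol):
    # Error-per-step control: the local error is estimated by step doubling
    # (one full RK4 step vs. two half steps, Richardson factor 2^4 - 1 = 15).
    t, y, h = t0, y0, (t_end - t0) / 10
    while t < t_end:
        h = min(h, t_end - t)
        big = rk4_step(f, t, y, h)
        small = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(small - big) / 15
        if err <= tol:                      # accept the more accurate two-half-step value
            t, y = t + h, small
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-300)) ** 0.2))
    return y

f = lambda t, y: -y                          # test problem y' = -y, y(0) = 1
errors = {tol: abs(integrate(f, 0.0, 1.0, 1.0, tol) - math.exp(-1.0))
          for tol in (1e-5, 1e-7, 1e-9)}
```

    Tightening δ by two orders of magnitude shrinks the endpoint error by roughly the same factor — the tolerance proportionality that the discontinuous step-size policy studied here may perturb.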

  4. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    Science.gov (United States)

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, those schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10⁻⁹, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.

  5. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    Science.gov (United States)

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
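
    The Huffman step can be made concrete. For a dyadic 9-symbol distribution (chosen for illustration; not the probabilities used in the paper), the Huffman code lengths match the source entropy exactly and satisfy the Kraft equality:

```python
import heapq
from math import log2

def huffman_lengths(probs):
    # Standard Huffman construction; returns one code length per symbol.
    heap = [(p, [i]) for i, p in enumerate(probs)]
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)       # merge the two least probable subtrees;
        p2, s2 = heapq.heappop(heap)       # every symbol inside them gets one bit deeper
        for i in s1 + s2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

# Hypothetical dyadic distribution over 9 symbols (Huffman is exactly optimal here):
probs = [1/4, 1/4, 1/8, 1/8, 1/16, 1/16, 1/16, 1/32, 1/32]
lengths = huffman_lengths(probs)
avg_len = sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * log2(p) for p in probs)
kraft = sum(2 ** -l for l in lengths)
```

    In the signaling scheme, these unequal codeword lengths are what make the 9 constellation points occur with unequal probabilities, shaping the transmitted distribution toward a better constellation figure of merit.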

  6. Advanced thermionic reactor systems design code

    International Nuclear Information System (INIS)

    Lewis, B.R.; Pawlowski, R.A.; Greek, K.J.; Klein, A.C.

    1991-01-01

    An overall systems design code is under development to model an advanced in-core thermionic nuclear reactor system for space applications at power levels of 10 to 50 kWe. The design code is written in an object-oriented programming environment that allows the use of a series of design modules, each of which is responsible for the determination of specific system parameters. The code modules include a neutronics and core criticality module, a core thermal hydraulics module, a thermionic fuel element performance module, a radiation shielding module, a module for waste heat transfer and rejection, and modules for power conditioning and control. The neutronics and core criticality module determines critical core size, core lifetime, and shutdown margins using the criticality calculation capability of the Monte Carlo Neutron and Photon Transport Code System (MCNP). The remaining modules utilize results of the MCNP analysis along with FORTRAN programming to predict the overall system performance.

  7. Specificity of the Human Frequency Following Response for Carrier and Modulation Frequency Assessed Using Adaptation.

    Science.gov (United States)

    Gockel, Hedwig E; Krugliak, Alexandra; Plack, Christopher J; Carlyon, Robert P

    2015-12-01

    The frequency following response (FFR) is a scalp-recorded measure of phase-locked brainstem activity to stimulus-related periodicities. Three experiments investigated the specificity of the FFR for carrier and modulation frequency using adaptation. FFR waveforms evoked by alternating-polarity stimuli were averaged for each polarity and added, to enhance envelope, or subtracted, to enhance temporal fine structure information. The first experiment investigated peristimulus adaptation of the FFR for pure and complex tones as a function of stimulus frequency and fundamental frequency (F0). It showed more adaptation of the FFR in response to sounds with higher frequencies or F0s than to sounds with lower frequency or F0s. The second experiment investigated tuning to modulation rate in the FFR. The FFR to a complex tone with a modulation rate of 213 Hz was not reduced more by an adaptor that had the same modulation rate than by an adaptor with a different modulation rate (90 or 504 Hz), thus providing no evidence that the FFR originates mainly from neurons that respond selectively to the modulation rate of the stimulus. The third experiment investigated tuning to audio frequency in the FFR using pure tones. An adaptor that had the same frequency as the target (213 or 504 Hz) did not generally reduce the FFR to the target more than an adaptor that differed in frequency (by 1.24 octaves). Thus, there was no evidence that the FFR originated mainly from neurons tuned to the frequency of the target. Instead, the results are consistent with the suggestion that the FFR for low-frequency pure tones at medium to high levels mainly originates from neurons tuned to higher frequencies. Implications for the use and interpretation of the FFR are discussed.

  8. Impact of Self-Interference on the Performance of Joint Partial RAKE Receiver and Adaptive Modulation

    KAUST Repository

    Nam, Sung Sik

    2016-11-23

    In this paper, we investigate the impact of self-interference on the performance of a joint partial RAKE (PRAKE) receiver and adaptive modulation over both independent and identically distributed and independent but non-identically distributed Rayleigh fading channels. To better observe the impact of self-interference, our approach starts by considering the signal-to-interference-plus-noise ratio. Specifically, we accurately analyze the outage probability, the average spectral efficiency, and the average bit error rate as performance measures in the presence of self-interference. Several numerical and simulation results are selected to present the performance of the joint PRAKE receiver and adaptive modulation subject to self-interference.
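
    A simplified Monte Carlo sketch of the SINR-based view (assumed model: the paths the partial RAKE does not combine contribute self-interference; an illustration only, not the paper's closed-form analysis):

```python
import numpy as np

rng = np.random.default_rng(42)

L_TOTAL, NOISE = 8, 0.1
# Rayleigh fading: per-path power gains are exponentially distributed.
# Sort each realization in descending order of gain.
gains = np.sort(rng.exponential(scale=1.0, size=(20_000, L_TOTAL)), axis=1)[:, ::-1]

def mean_sinr(n_fingers):
    # The partial RAKE combines only the n strongest paths; in this simplified
    # model the energy of the uncombined paths appears as self-interference.
    signal = gains[:, :n_fingers].sum(axis=1)
    self_int = gains[:, n_fingers:].sum(axis=1)
    return float((signal / (NOISE + self_int)).mean())

sinrs = [mean_sinr(n) for n in (1, 2, 4, 8)]
```

    Each extra finger moves energy from the interference term into the signal term, so the average SINR rises monotonically with the number of combined paths — the processing-power versus performance trade-off the joint scheme balances.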

  9. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the imput to or output from the module remains unchanged

  10. A Simple Differential Modulation Scheme for Quasi-Orthogonal Space-Time Block Codes with Partial Transmit Diversity

    Directory of Open Access Journals (Sweden)

    Lingyang Song

    2007-04-01

    Full Text Available We report a simple differential modulation scheme for quasi-orthogonal space-time block codes. A new class of quasi-orthogonal coding structures that can provide partial transmit diversity is presented for various numbers of transmit antennas. Differential encoding and decoding can be simplified for differential Alamouti-like codes by grouping the signals in the transmitted matrix and decoupling the detection of data symbols, respectively. The new scheme achieves constant amplitude of transmitted signals and avoids signal constellation expansion; in addition, it has a linear signal detector with very low complexity. Simulation results show that these partial-diversity codes can provide very useful results at low SNR for current communication systems. Extension to more than four transmit antennas is also considered.
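
    The differential principle the scheme builds on — detecting from products of consecutive received samples so that no channel estimate is needed — can be shown in its simplest single-antenna DQPSK form. The paper's contribution is extending this to quasi-orthogonal space-time matrices; this sketch shows only the basic mechanism:

```python
import cmath

QPSK = [cmath.exp(1j * cmath.pi * (2 * k + 1) / 4) for k in range(4)]  # unit-modulus alphabet

def diff_encode(symbols, ref=1 + 0j):
    out = [ref]                        # a known reference symbol starts the burst
    for s in symbols:
        out.append(out[-1] * s)        # information rides on the symbol-to-symbol rotation
    return out

def diff_decode(received):
    decoded = []
    for prev, cur in zip(received, received[1:]):
        prod = cur * prev.conjugate()  # = |h|^2 * s_t : the unknown channel cancels
        decoded.append(max(range(4), key=lambda k: (prod * QPSK[k].conjugate()).real))
    return decoded

data = [0, 3, 1, 2, 2, 0]
tx = diff_encode([QPSK[k] for k in data])
h = 0.8 * cmath.exp(1j * 1.234)        # flat channel, unknown to the receiver (no CSI)
rx = [h * x for x in tx]
decoded = diff_decode(rx)
```

    The matrix version replaces each unit-modulus symbol with a (quasi-)unitary code matrix and each conjugate product with a matrix product, which is where the grouping and decoupling simplifications described above come in.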

  11. A New Prime Code for Synchronous Optical Code Division Multiple-Access Networks

    Directory of Open Access Journals (Sweden)

    Huda Saleh Abbas

    2018-01-01

    Full Text Available A new spreading code based on a prime code for synchronous optical code-division multiple-access networks that can be used in monitoring applications has been proposed. The new code is referred to as “extended grouped new modified prime code.” This new code has the ability to support more terminal devices than other prime codes. In addition, it pads subsequences with “0s,” leading to lower power consumption. The proposed code has an improved cross-correlation, resulting in enhanced BER performance. The code construction and parameters are provided. The operating performance, using incoherent on-off keying modulation and incoherent pulse position modulation systems, has been analyzed. The performance of the code was compared with other prime codes. The results demonstrate an improved performance, and a BER floor of 10⁻⁹ was achieved.
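
    To make the correlation properties concrete, here is a sketch of a basic prime code (not the proposed extended grouped code, whose construction is given in the paper): for a prime p, code i places one pulse per length-p block at position (i·j) mod p, giving code weight p and an in-phase cross-correlation of 1 between distinct codes.

```python
def prime_code(p, i):
    """Basic prime code over GF(p): a length p*p binary sequence whose
    j-th block of length p has a single '1' at position (i*j) mod p."""
    seq = [0] * (p * p)
    for j in range(p):
        seq[j * p + (i * j) % p] = 1
    return seq

def in_phase_xcorr(a, b):
    """Zero-shift (synchronous) correlation between two code sequences."""
    return sum(x * y for x, y in zip(a, b))
```

    Since (i − k)·j ≡ 0 (mod p) only at j = 0 for i ≠ k, any two distinct codes collide in exactly one chip.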

  12. An adaptative finite element method for turbulent flow simulations

    International Nuclear Information System (INIS)

    Arnoux-Guisse, F.; Bonnin, O.; Leal de Sousa, L.; Nicolas, G.

    1995-05-01

    After outlining the space and time discretization methods used in the N3S thermal hydraulic code developed at EDF/NHL, we describe the possibilities of the peripheral version, the Adaptative Mesh, which comprises two separate parts: the error indicator computation and the development of a module subdividing elements usable by the solid dynamics code ASTER and the electromagnetism code TRIFOU also developed by R and DD. The error indicators implemented in N3S are described. They consist of a projection indicator quantifying the space error in laminar or turbulent flow calculations and a Navier-Stokes residue indicator calculated on each element. The method for subdivision of triangles into four sub-triangles and tetrahedra into eight sub-tetrahedra is then presented with its advantages and drawbacks. It is illustrated by examples showing the efficiency of the module. The last example concerns the 2-D case of flow behind a backward-facing step. (authors). 9 refs., 5 figs., 1 tab
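
    The triangle subdivision described above can be sketched in a few lines: connecting the edge midpoints yields four similar sub-triangles whose areas sum exactly to the parent's.

```python
def subdivide(tri):
    """Red refinement: split a triangle (three (x, y) vertices) into
    four similar sub-triangles by joining the edge midpoints."""
    a, b, c = tri

    def mid(p, q):
        return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

    mab, mbc, mca = mid(a, b), mid(b, c), mid(c, a)
    # Three corner triangles plus the central (inverted) one.
    return [(a, mab, mca), (mab, b, mbc), (mca, mbc, c), (mab, mbc, mca)]

def area(tri):
    """Unsigned triangle area via the cross product."""
    (ax, ay), (bx, by), (cx, cy) = tri
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2
```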

  13. A dosimetric comparison of two-phase adaptive intensity-modulated radiotherapy for locally advanced nasopharyngeal cancer

    OpenAIRE

    Chitapanarux, Imjai; Chomprasert, Kittisak; Nobnaop, Wannapa; Wanwilairat, Somsak; Tharavichitkul, Ekasit; Jakrabhandu, Somvilai; Onchan, Wimrak; Traisathit, Patrinee; Van Gestel, Dirk

    2015-01-01

    The purpose of this investigation was to evaluate the potential dosimetric benefits of a two-phase adaptive intensity-modulated radiotherapy (IMRT) protocol for patients with locally advanced nasopharyngeal cancer (NPC). A total of 17 patients with locally advanced NPC treated with IMRT had a second computed tomography (CT) scan after 17 fractions in order to apply and continue the treatment with an adapted plan after 20 fractions. To simulate the situation without adaptation, a hybrid plan w...

  14. Supporting Dynamic Adaptive Streaming over HTTP in Wireless Meshed Networks using Random Linear Network Coding

    DEFF Research Database (Denmark)

    Hundebøll, Martin; Pedersen, Morten Videbæk; Roetter, Daniel Enrique Lucani

    2014-01-01

    This work studies the potential and impact of the FRANC network coding protocol for delivering high quality Dynamic Adaptive Streaming over HTTP (DASH) in wireless networks. Although DASH aims to tailor the video quality rate based on the available throughput to the destination, it relies...

  15. Development and validation of the fast doppler broadening module coupled within RMC code

    International Nuclear Information System (INIS)

    Yu Jiankai; Liang Jin'gang; Yu Ganglin; Wang Kan

    2015-01-01

    Using on-the-fly Doppler broadening of temperature-dependent nuclear cross sections is an efficient approach to reducing memory consumption in Monte Carlo reactor physics simulations. RXSP is a nuclear cross-section processing code developed by the REAL team in the Department of Engineering Physics at Tsinghua University, which performs well in Doppler broadening temperature-dependent continuous-energy neutron cross sections. To meet the dual requirements of accuracy and efficiency in Monte Carlo simulations involving many materials and many temperatures, this work enables on-the-fly Doppler broadening of cross sections during neutron transport by coupling the Fast Doppler Broadening module of RXSP into the RMC code, also developed by the REAL team at Tsinghua University. Additionally, the original OpenMP-based parallelism has been successfully converted into an MPI-based framework fully compatible with neutron transport in RMC, which yields a large improvement in parallel efficiency. This work also provides a flexible approach to Monte Carlo full-core depletion calculations with temperature feedback in many isotopes. (author)
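
    As a rough illustration of the on-the-fly idea (not RXSP's actual broadening algorithm), a common shortcut is to interpolate pre-broadened cross sections linearly in √T between two tabulated library temperatures:

```python
import math

def interp_sqrt_t(sigma_lo, t_lo, sigma_hi, t_hi, t):
    """Estimate sigma(T) by linear interpolation in sqrt(T) between two
    pre-broadened library temperatures -- a common on-the-fly shortcut,
    NOT the exact kernel broadening a production code performs."""
    w = (math.sqrt(t) - math.sqrt(t_lo)) / (math.sqrt(t_hi) - math.sqrt(t_lo))
    return (1 - w) * sigma_lo + w * sigma_hi
```

    The interpolation reproduces the tabulated values at the endpoints and smoothly blends in between; the √T abscissa reflects the thermal-velocity scaling of Doppler broadening.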

  16. Coded Shack-Hartmann Wavefront Sensor

    KAUST Repository

    Wang, Congli

    2016-12-01

    Wavefront sensing is an old yet fundamental problem in adaptive optics. Traditional wavefront sensors are limited by time-consuming measurements, complicated and expensive setups, or low theoretically achievable resolution. In this thesis, we introduce an optically encoded and computationally decodable novel approach to the wavefront sensing problem: the Coded Shack-Hartmann. Our proposed Coded Shack-Hartmann wavefront sensor is inexpensive, easy to fabricate and calibrate, highly sensitive, accurate, and offers high resolution. Most importantly, using simple optical flow tracking combined with a phase smoothness prior and modern optimization techniques, the computational part is split, efficient, and parallelized; hence real-time performance has been achieved on a Graphics Processing Unit (GPU) with high accuracy as well. This is validated by experimental results. We also show how the optical flow intensity consistency term can be derived from rigorous scalar diffraction theory with proper approximations. This is the true physical law behind our model. Based on this insight, the Coded Shack-Hartmann can be interpreted as an illumination post-modulated wavefront sensor. This offers a new theoretical approach to wavefront sensor design.
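
    The slope-to-wavefront step underlying any Shack-Hartmann-type sensor can be illustrated in one dimension (a toy zonal reconstruction, far simpler than the thesis's optical-flow formulation):

```python
def reconstruct_1d(slopes, dx):
    """Recover a 1-D wavefront (up to an arbitrary piston term) from
    sampled local slopes by trapezoidal integration -- the zonal idea
    behind Shack-Hartmann reconstruction, reduced to one dimension."""
    phi = [0.0]
    for s0, s1 in zip(slopes, slopes[1:]):
        phi.append(phi[-1] + (s0 + s1) / 2 * dx)
    return phi

# A known quadratic wavefront phi(x) = x^2 has slope 2x, so the
# reconstruction can be checked against the analytic answer.
xs = [i / 100 for i in range(101)]
rec = reconstruct_1d([2 * x for x in xs], dx=1 / 100)
```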

  17. Wind power within European grid codes: Evolution, status and outlook

    DEFF Research Database (Denmark)

    Vrana, Til Kristian; Flynn, Damian; Gomez-Lazaro, Emilio

    2018-01-01

    Grid codes are technical specifications that define the requirements for any facility connected to electricity grids. Wind power plants are increasingly facing system stability support requirements similar to conventional power stations, which is to some extent unavoidable, as the share of wind power in the generation mix is growing. The adaptation process of grid codes for wind power plants is not yet complete, and grid codes are expected to evolve further in the future. ENTSO-E is the umbrella organization for European TSOs, seen by many as a leader in terms of requirements sophistication … is largely based on the definitions and provisions set out by ENTSO-E. The main European grid code requirements are outlined here, including also HVDC connections and DC-connected power park modules. The focus is on requirements that are considered particularly relevant for large wind power plants…

  18. Check and visualization of input geometry data using the geometrical module of the Monte Carlo code MCU: WWER-440 pressure vessel dosimetry benchmarks

    International Nuclear Information System (INIS)

    Gurevich, M.; Zaritsky, S.; Osmera, B.; Mikus, J.

    1997-01-01

    The Monte Carlo method makes it possible to calculate neutron and photon fluxes without any simplification of the 3-D geometry of nuclear power and experimental devices. Hence, every mature Monte Carlo code includes a combinatorial geometry module and tools for geometry description, giving the possibility to describe very complex systems with several hierarchy levels of geometrical objects. Such codes usually also have special modules for visual checking of the geometry input. These geometry capabilities can be used in any case where an accurate 3-D description of a complex geometry becomes a necessity. The description (specification) of benchmark experiments is one such case. An accurate and uniform description of this kind reveals all mistakes and ambiguities in the starting information, which comes in various forms (drawings, reports, etc.). Usually the quality of different parts of the starting information (generally produced by different persons during different stages of the device's elaboration and operation) varies. After applying the above-mentioned modules and tools, the resulting geometry description can be used as a standard for the device, and any type of device figure can be produced automatically. The detailed geometry description can also serve as input for building different calculation models (not only Monte Carlo ones). The application of this method to the description of the WWER-440 mock-ups is presented in the report. The mock-ups were created on the LR-0 reactor (NRI), and the reactor vessel dosimetry benchmarks were developed on the basis of these mock-up experiments. The NCG-8 module of the Russian Monte Carlo code MCU was used; it is a combinatorial, multilingual, universal geometrical module. The MCU code has been certified by the Russian Nuclear Regulatory Body. Almost all figures for the mentioned benchmark specifications were made with the MCU visualization code. The problem of the automatic generation of the

  19. The optimal configuration of photovoltaic module arrays based on adaptive switching controls

    International Nuclear Information System (INIS)

    Chao, Kuei-Hsiang; Lai, Pei-Lun; Liao, Bo-Jyun

    2015-01-01

    Highlights: • We propose a strategy for determining the optimal configuration of a PV array. • The proposed strategy was based on particle swarm optimization (PSO) method. • It can identify the optimal module array connection scheme in the event of shading. • It can also find the optimal connection of a PV array even in module malfunctions. - Abstract: This study proposes a strategy for determining the optimal configuration of photovoltaic (PV) module arrays in shading or malfunction conditions. This strategy was based on particle swarm optimization (PSO). If shading or malfunctions of the photovoltaic module array occur, the module array immediately undergoes adaptive reconfiguration to increase the power output of the PV power generation system. First, the maximal power generated at various irradiation levels and temperatures was recorded during normal array operation. Subsequently, the irradiation level and module temperature, regardless of operating conditions, were used to recall the maximal power previously recorded. This previous maximum was compared with the maximal power value obtained using the maximum power point tracker to assess whether the PV module array was experiencing shading or malfunctions. After determining that the array was experiencing shading or malfunctions, PSO was used to identify the optimal module array connection scheme in abnormal conditions, and connection switches were used to implement optimal array reconfiguration. Finally, experiments were conducted to assess the strategy for identifying the optimal reconfiguration of a PV module array in the event of shading or malfunctions
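
    A generic PSO loop of the kind used for the reconfiguration search might look as follows (illustrative parameters; the paper's fitness function is the PV array power output, replaced here by a simple test function to keep the sketch self-contained):

```python
import random

def pso(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimizer: inertia plus cognitive and
    social pulls toward the personal and global bests (illustrative,
    not the paper's exact reconfiguration algorithm)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]                                  # inertia
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Minimize a simple quadratic as a stand-in for maximizing array power.
best = pso(lambda p: sum(x * x for x in p), dim=2)
```

    For a discrete switch-configuration search, the continuous positions would be mapped to connection states before evaluating the fitness.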

  20. LDPC-coded MIMO optical communication over the atmospheric turbulence channel using Q-ary pulse-position modulation.

    Science.gov (United States)

    Djordjevic, Ivan B

    2007-08-06

    We describe a coded power-efficient transmission scheme based on repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs the Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to the several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement, over uncoded case, is found.
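
    Q-ary PPM itself is easy to sketch: each symbol selects one of Q time slots for the optical pulse, and picking the slot with the largest count recovers it:

```python
def ppm_modulate(symbol, q):
    """Q-ary PPM: place one optical pulse in slot `symbol` out of q slots."""
    frame = [0] * q
    frame[symbol] = 1
    return frame

def ppm_demodulate(slot_counts):
    """Pick the slot with the largest count (maximum-likelihood
    detection for independent, identically distributed slot noise)."""
    return max(range(len(slot_counts)), key=slot_counts.__getitem__)
```

    In the coded system described above, the LDPC decoder would work on soft slot statistics rather than this hard decision; the sketch only shows the modulation format.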

  1. Design and implementation of a scene-dependent dynamically selfadaptable wavefront coding imaging system

    Science.gov (United States)

    Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador

    2012-01-01

    A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality. This trade-off results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits it to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of thus making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, thus the whole system acts as a dynamically selfadaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper. Program summaryProgram title: DynWFC (Dynamic WaveFront Coding) Catalogue identifier: AEKC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 10 483 No. of bytes in distributed program, including test data, etc.: 2 437 713 Distribution format: tar.gz Programming language: Labview 8.5 and NI Vision and MinGW C Compiler Computer: Tested on PC Intel ® Pentium ® Operating system: Tested on Windows XP Classification: 18 Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display of a spatial light modulator

  2. ComboCoding: Combined intra-/inter-flow network coding for TCP over disruptive MANETs

    Directory of Open Access Journals (Sweden)

    Chien-Chia Chen

    2011-07-01

    Full Text Available TCP over wireless networks is challenging due to random losses and ACK interference. Although network coding schemes have been proposed to improve TCP robustness against extreme random losses, the critical problem of DATA–ACK interference still remains. To address this issue, we use inter-flow coding between DATA and ACK to reduce the number of transmissions among nodes. In addition, we also utilize a “pipeline” random linear coding scheme with adaptive redundancy to overcome high packet loss over unreliable links. The resulting coding scheme, ComboCoding, combines intra-flow and inter-flow coding to provide robust TCP transmission in disruptive wireless networks. The main contributions of our scheme are twofold: the efficient combination of random linear coding and XOR coding on bi-directional streams (DATA and ACK), and the novel redundancy control scheme that adapts to time-varying and space-varying link loss. The adaptive ComboCoding was tested on a variable-hop string topology with unstable links and on a multipath MANET with dynamic topology. Simulation results show that TCP with ComboCoding delivers higher throughput than with other coding options in high-loss and mobile scenarios, while introducing minimal overhead in normal operation.
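
    The inter-flow part of such a scheme can be sketched with plain XOR: a relay combines a DATA and an ACK packet into one transmission, and each endpoint removes the packet it already knows (the packet contents below are placeholders):

```python
def xor_encode(data, ack):
    """Inter-flow XOR coding: combine a DATA and an ACK packet into one
    relay transmission (packets zero-padded to equal length)."""
    n = max(len(data), len(ack))
    data, ack = data.ljust(n, b"\x00"), ack.ljust(n, b"\x00")
    return bytes(a ^ b for a, b in zip(data, ack))

def xor_decode(coded, known):
    """Each endpoint XORs out the packet it already holds."""
    known = known.ljust(len(coded), b"\x00")
    return bytes(a ^ b for a, b in zip(coded, known))
```

    One broadcast thus replaces two unicast transmissions, which is exactly how DATA–ACK interference is reduced; the intra-flow random linear coding layer is a separate mechanism not shown here.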

  3. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  4. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  5. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    International Nuclear Information System (INIS)

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes

  6. GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids

    Science.gov (United States)

    Hubber, D. A.; Rosotti, G. P.; Booth, R. A.

    2018-01-01

    GANDALF is a new hydrodynamics and N-body dynamics code designed for investigating planet formation, star formation and star cluster problems. GANDALF is written in C++, parallelized with both OpenMP and MPI, and contains a Python library for analysis and visualization. The code has been written with a fully object-oriented approach to easily allow user-defined implementations of physics modules or other algorithms. The code currently contains implementations of smoothed particle hydrodynamics, meshless finite-volume and collisional N-body schemes, but can easily be adapted to include additional particle schemes. We present in this paper the details of its implementation, results from the test suite, serial and parallel performance results and discuss the planned future development. The code is freely available as an open source project on the code-hosting website github at https://github.com/gandalfcode/gandalf and is available under the GPLv2 license.

  7. Tritium transport calculations for the IFMIF Tritium Release Test Module

    Energy Technology Data Exchange (ETDEWEB)

    Freund, Jana, E-mail: jana.freund@kit.edu; Arbeiter, Frederik; Abou-Sena, Ali; Franza, Fabrizio; Kondo, Keitaro

    2014-10-15

    Highlights: • Delivery of material data for the tritium balance in the IFMIF Tritium Release Test Module. • Description of the topological models in TMAP and the adapted fusion-devoted Tritium Permeation Code (FUS-TPC). • Computation of the release of tritium from the breeder solid material into the purge gas. • Computation of the loss of tritium over the capsule wall, rig hull, container wall and purge gas return line. - Abstract: The IFMIF Tritium Release Test Module (TRTM) is projected to measure online the tritium release from breeder ceramics and beryllium pebble beds under high-energy neutron irradiation. Tritium produced in the pebble bed of TRTM is swept out continuously by a purge gas flow, but it can also permeate into the module's metal structures and can be lost by permeation to the environment. Corresponding analyses of the tritium inventory are performed to support IFMIF plant safety studies and experiment planning. This paper describes the necessary elements for the calculation of tritium transport in the Tritium Release Test Module as follows: (i) the applied equations for the tritium balance, (ii) material data from the literature and (iii) the topological models and the computation of five different cases, namely the release of tritium from the breeder solid material into the purge gas and the loss of tritium over the capsule wall, rig hull, container wall and purge gas return line in detail. The problem of tritium transport in the TRTM has been studied and analyzed with the Tritium Migration Analysis Program (TMAP) and the adapted fusion-devoted Tritium Permeation Code (FUS-TPC). TMAP has been developed at INEEL and now exists in Version 7. The FUS-TPC code was written in MATLAB with the original purpose of studying tritium transport in the Helium Cooled Lead Lithium (HCLL) blanket and, in a later version, the Helium Cooled Pebble Bed (HCPB) blanket [6] (Franza, 2012). This code has been further modified to be applicable to the TRTM. Results from the

  8. Tritium transport calculations for the IFMIF Tritium Release Test Module

    International Nuclear Information System (INIS)

    Freund, Jana; Arbeiter, Frederik; Abou-Sena, Ali; Franza, Fabrizio; Kondo, Keitaro

    2014-01-01

    Highlights: • Delivery of material data for the tritium balance in the IFMIF Tritium Release Test Module. • Description of the topological models in TMAP and the adapted fusion-devoted Tritium Permeation Code (FUS-TPC). • Computation of the release of tritium from the breeder solid material into the purge gas. • Computation of the loss of tritium over the capsule wall, rig hull, container wall and purge gas return line. - Abstract: The IFMIF Tritium Release Test Module (TRTM) is projected to measure online the tritium release from breeder ceramics and beryllium pebble beds under high-energy neutron irradiation. Tritium produced in the pebble bed of TRTM is swept out continuously by a purge gas flow, but it can also permeate into the module's metal structures and can be lost by permeation to the environment. Corresponding analyses of the tritium inventory are performed to support IFMIF plant safety studies and experiment planning. This paper describes the necessary elements for the calculation of tritium transport in the Tritium Release Test Module as follows: (i) the applied equations for the tritium balance, (ii) material data from the literature and (iii) the topological models and the computation of five different cases, namely the release of tritium from the breeder solid material into the purge gas and the loss of tritium over the capsule wall, rig hull, container wall and purge gas return line in detail. The problem of tritium transport in the TRTM has been studied and analyzed with the Tritium Migration Analysis Program (TMAP) and the adapted fusion-devoted Tritium Permeation Code (FUS-TPC). TMAP has been developed at INEEL and now exists in Version 7. The FUS-TPC code was written in MATLAB with the original purpose of studying tritium transport in the Helium Cooled Lead Lithium (HCLL) blanket and, in a later version, the Helium Cooled Pebble Bed (HCPB) blanket [6] (Franza, 2012). This code has been further modified to be applicable to the TRTM. Results from the
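
    One of the wall-loss terms balanced in such calculations is diffusion-limited permeation, classically J = Φ(T)·(√p_up − √p_down)/d with an Arrhenius permeability Φ. A sketch with placeholder material constants (not IFMIF design data):

```python
import math

def permeation_flux(phi0, e_act, temp, p_up, p_down, thickness):
    """Steady-state diffusion-limited permeation through a metal wall
    (Richardson's law): J = Phi(T) * (sqrt(p_up) - sqrt(p_down)) / d,
    with Arrhenius permeability Phi(T) = phi0 * exp(-E_act / (R*T)).
    All constants below are illustrative placeholders."""
    r_gas = 8.314  # J/(mol*K)
    perm = phi0 * math.exp(-e_act / (r_gas * temp))
    return perm * (math.sqrt(p_up) - math.sqrt(p_down)) / thickness
```

    The flux scales inversely with wall thickness and rises steeply with temperature, which is why the capsule wall, rig hull and container wall contributions must each be evaluated at their own operating conditions.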

  9. Space Time – Track Circuits with Trellis Code Modulation

    Directory of Open Access Journals (Sweden)

    Marius Enulescu

    2017-07-01

    Full Text Available Track circuits are very important equipment in railway transportation systems. Today they are used to send vital information to the running train while simultaneously checking the integrity of the rail. Current track circuits have a weakness: the signals carrying vital information and the return traction current share the same transmission medium, the running rails. This can produce serious disturbances in train circulation, especially during rush hours. To improve data transmission to the train's on-board equipment, the implementation of new track circuits using new communication technology was studied. This technology, used in mobile and satellite communications, applies the principle of diversity coding in both time and space through the use of multiple transmission points for the track circuit telegram sent to the train. Since that implementation did not fully satisfy the intended purpose, other modern communication principles, such as 8PSK signal modulation and encoding using Trellis Coded Modulation, were developed. This new track circuit aims to solve the problems that appear in the current operation of track circuits and, in theory, manages to transmit vital information to the train's on-board equipment without being affected by disturbances from electric traction systems.
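
    The 8PSK mapping mentioned can be sketched directly; with a binary-reflected Gray labeling, neighbouring constellation points differ in a single bit, the property that TCM set-partitioning builds on:

```python
import cmath
import math

def psk8_point(v):
    """Gray-coded 8PSK: the 3-bit value v sits at phase index k where
    v = k ^ (k >> 1), so neighbouring points differ in exactly one bit."""
    k = next(i for i in range(8) if (i ^ (i >> 1)) == v)
    return cmath.exp(2j * math.pi * k / 8)

def psk8_detect(z):
    """Nearest-phase detection (ML for AWGN), back to the 3-bit value."""
    k = round((cmath.phase(z) % (2 * math.pi)) / (math.pi / 4)) % 8
    return k ^ (k >> 1)
```

    A full TCM system would add a convolutional encoder selecting subsets of this constellation; only the uncoded mapping is shown here.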

  10. EDITAR: a module for reaction rate editing and cross-section averaging within the AUS neutronics code system

    International Nuclear Information System (INIS)

    Robinson, G.S.

    1986-03-01

    The EDITAR module of the AUS neutronics code system edits one- and two-dimensional flux data pools produced by other AUS modules to form reaction rates for materials and their constituent nuclides, and to average cross sections over space and energy. The module includes a B_L flux calculation for application to cell leakage. The STATUS data pool of the AUS system is used to enable the 'unsmearing' of fluxes and nuclide editing with minimal user input. The module distinguishes between neutron and photon groups, and printed reaction rates are formed accordingly. Bilinear weighting may be used to obtain material reactivity worths and to average cross sections. Bilinear weighting is at present restricted to diffusion theory leakage estimates made using mesh-average fluxes
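
    In its simplest (flux-weighted, one-group) form, the cross-section averaging such a module performs is:

```python
def collapse(sigma, flux):
    """Flux-weighted condensation of fine-group cross sections to one
    group: sigma_g = sum(sigma_i * phi_i) / sum(phi_i).  This preserves
    the total reaction rate for the given flux spectrum."""
    return sum(s * f for s, f in zip(sigma, flux)) / sum(flux)
```

    Bilinear weighting, as used by EDITAR for reactivity worths, would additionally weight by an adjoint flux; the sketch shows only the ordinary flux-weighted case.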

  11. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for deducing similarity. Most of them consider only the lexical token sequence extracted from source code. In our view, a low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach that relies on low-level representation. As a case study, we focus on the .NET programming languages, with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
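
    The local-alignment similarity measure can be sketched with the classic Smith-Waterman recurrence over token sequences (fixed scores here; the paper's Adaptive Local Alignment adapts them, and the CIL-like tokens below are illustrative):

```python
def local_alignment(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two token sequences:
    the best-scoring pair of contiguous-ish subsequences, with scores
    clamped at zero so unrelated regions do not penalize the result."""
    h = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best
```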

  12. Information rates of probabilistically shaped coded modulation for a multi-span fiber-optic communication system with 64QAM

    Science.gov (United States)

    Fehenberger, Tobias

    2018-02-01

    This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated, which are signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increases the AIR by up to 0.4 bit per 4D-symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D-symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
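
    Probabilistic shaping typically draws the QAM amplitudes from a Maxwell-Boltzmann family, P(x) ∝ exp(−λ|x|²): λ = 0 recovers uniform input, while λ > 0 trades source entropy (rate) for lower average symbol energy. A sketch on the 8-ASK component of square 64QAM:

```python
import math

def maxwell_boltzmann(points, lam):
    """Probabilities P(x) proportional to exp(-lam * |x|^2) over the
    given constellation points."""
    w = [math.exp(-lam * abs(x) ** 2) for x in points]
    z = sum(w)
    return [v / z for v in w]

def entropy_bits(p):
    """Source entropy in bits per symbol."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# 8-ASK amplitudes: the in-phase component of square 64QAM.
amps = [-7, -5, -3, -1, 1, 3, 5, 7]
uniform = maxwell_boltzmann(amps, 0.0)   # lam = 0: uniform, 3 bit/symbol
shaped = maxwell_boltzmann(amps, 0.05)   # lam > 0: favours inner amplitudes
```

    The shaped distribution has lower entropy and lower mean energy than the uniform one, which (at fixed transmit power) is the origin of the shaping gain studied above; the value λ = 0.05 is an arbitrary illustration, not an optimized operating point.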

  13. ETF system code: composition and applications

    International Nuclear Information System (INIS)

    Reid, R.L.; Wu, K.F.

    1980-01-01

    A computer code has been developed for application to ETF tokamak system and conceptual design studies. The code determines cost, performance, configuration, and technology requirements as a function of tokamak parameters. The ETF code is structured in a modular fashion in order to allow independent modeling of each major tokamak component. The primary benefit of modularization is that it allows updating of a component module, such as the TF coil module, without disturbing the remainder of the system code as long as the input/output to the modules remains unchanged. The modules may be run independently to perform specific design studies, such as determining the effect of allowable strain on TF coil structural requirements, or the modules may be executed together as a system to determine global effects, such as defining the impact of aspect ratio on the entire tokamak system

  14. An optimized cosine-modulated nonuniform filter bank design for subband coding of ECG signal

    Directory of Open Access Journals (Sweden)

    A. Kumar

    2015-07-01

    Full Text Available A simple iterative technique for the design of nonuniform cosine modulated filter banks (CMFBs) is presented in this paper. The proposed technique employs a single parameter for optimization. The nonuniform cosine modulated filter banks are derived by merging the adjacent filters of uniform cosine modulated filter banks. The prototype filter is designed with the aid of different adjustable window functions, such as Kaiser, Cosh and Exponential, and by using the constrained equiripple finite impulse response (FIR) digital filter design technique. In this method, either the cutoff frequency or the passband edge frequency is varied in order to adjust the filter coefficients so that the reconstruction error is minimized toward zero. The performance and effectiveness of the proposed method in terms of peak reconstruction error (PRE), aliasing distortion (AD), computational (CPU) time, and number of iterations (NOI) are shown through numerical examples and comparative studies. Finally, the technique is exploited for the subband coding of electrocardiogram (ECG) and speech signals.
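The single-parameter iteration described above can be sketched as follows: a Kaiser-windowed sinc prototype whose cutoff is adjusted by bisection until the magnitude response at ω = π/2M reaches 1/√2 of the DC gain, a common near-perfect-reconstruction condition for an M-band CMFB. The filter length, window, β and target condition are illustrative assumptions, not the paper's design values.

```python
import numpy as np

def prototype(cutoff, n_taps=97, beta=8.0):
    """Kaiser-windowed sinc lowpass; cutoff is a fraction of the sample rate."""
    n = np.arange(n_taps) - (n_taps - 1) / 2
    h = cutoff * np.sinc(cutoff * n)           # ideal lowpass impulse response
    return h * np.kaiser(n_taps, beta)

def magnitude_at(h, w):
    """|H(e^{jw})| at a single frequency w (radians/sample)."""
    n = np.arange(len(h))
    return np.abs(np.sum(h * np.exp(-1j * w * n)))

def design_cmfb_prototype(m_bands=8, tol=1e-6):
    """Bisect the cutoff until |H(pi/2M)| = |H(0)| / sqrt(2)."""
    w0 = np.pi / (2 * m_bands)
    target = 2 ** -0.5
    lo, hi = 1e-3, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        h = prototype(mid)
        # response at w0 too low -> widen the passband, else narrow it
        if magnitude_at(h, w0) / magnitude_at(h, 0.0) < target:
            lo = mid
        else:
            hi = mid
    return prototype((lo + hi) / 2)

h = design_cmfb_prototype()
print("designed", len(h), "taps")
```

The single optimized parameter here is the cutoff; swapping the Kaiser window for Cosh or Exponential windows would follow the same loop.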

  15. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  16. Impact of Self-Interference on the Performance of Joint Partial RAKE Receiver and Adaptive Modulation

    KAUST Repository

    Nam, Sung Sik; Choi, Yungho; Alouini, Mohamed-Slim; Choi, Seyeong

    2016-01-01

    In this paper, we investigate the impact of self-interference on the performance of a joint partial RAKE (PRAKE) receiver and adaptive modulation over both independent and identically distributed and independent but non-identically distributed

  17. Context adaptive coding of bi-level images

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2008-01-01

    With the advent of sequential arithmetic coding, the focus of highly efficient lossless data compression is placed on modelling the data. Rissanen's Algorithm Context provided an elegant solution to universal coding with optimal convergence rate. Context based arithmetic coding laid the grounds f...

  18. Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths

    Science.gov (United States)

    Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.

    2018-04-01

    We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.
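A multipole acceptance criterion of the simplest geometric kind can be sketched as follows: a cell of particles is replaced by its monopole (total mass at the centre of mass) only if it subtends an angle below an opening threshold θ, i.e. size/distance < θ. The particle set, θ and geometry are illustrative; the paper's approximate partial error MAC is more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)
cell = rng.uniform(0.0, 1.0, size=(64, 3))     # particle positions in a unit cell
mass = np.full(64, 1.0 / 64)                   # equal masses summing to 1
target = np.array([10.0, 0.0, 0.0])            # distant evaluation point

# direct-sum potential (G = 1), the reference answer
direct = -np.sum(mass / np.linalg.norm(cell - target, axis=1))

# monopole approximation, accepted only if the MAC is satisfied
com = np.average(cell, axis=0, weights=mass)   # centre of mass of the cell
size = 1.0                                     # cell edge length
dist = np.linalg.norm(com - target)
theta = 0.5                                    # opening angle threshold
assert size / dist < theta                     # MAC satisfied: use the monopole
monopole = -mass.sum() / dist

rel_err = abs((monopole - direct) / direct)
print(f"relative error of monopole approximation: {rel_err:.2e}")
```

In a full tree walk this test is applied recursively: cells failing the MAC are opened and their children tested, which is what bounds the force error at low cost.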

  19. Revision and Validation of a Culturally-Adapted Online Instructional Module Using Edmundson's CAP Model: A DBR Study

    Science.gov (United States)

    Tapanes, Marie A.

    2011-01-01

    In the present study, the Cultural Adaptation Process Model was applied to an online module to include adaptations responsive to the online students' culturally-influenced learning styles and preferences. The purpose was to provide the online learners with a variety of course material presentations, where the e-learners had the opportunity to…

  20. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the high-efficiency video coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method of rate-distortion cost calculation is proposed to reduce the computational complexity in the mode decision of SAO. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filters architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filters architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meet the real-time requirement of the in-loop filters for 8 K × 4 K video format at 132 fps.

  1. Benchmark studies of BOUT++ code and TPSMBI code on neutral transport during SMBI

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y.H. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); University of Science and Technology of China, Hefei 230026 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Z.H., E-mail: zhwang@swip.ac.cn [Southwestern Institute of Physics, Chengdu 610041 (China); Guo, W., E-mail: wfguo@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China); Ren, Q.L. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Sun, A.P.; Xu, M.; Wang, A.K. [Southwestern Institute of Physics, Chengdu 610041 (China); Xiang, N. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China)

    2017-06-09

    SMBI (supersonic molecule beam injection) plays an important role in tokamak plasma fuelling, density control and ELM mitigation in magnetic confinement plasma physics, and has been widely used in many tokamaks. The trans-neut module of the BOUT++ code is the only large-scale parallel 3D fluid code used to simulate the SMBI fueling process, while the TPSMBI (transport of supersonic molecule beam injection) code is a recently developed 1D fluid code for SMBI. In order to find a method to increase SMBI fueling efficiency in H-mode plasma, especially for ITER, it is important to first verify the codes. A benchmark study between the trans-neut module of the BOUT++ code and the TPSMBI code on the radial transport dynamics of neutrals during SMBI has been successfully carried out for the first time, in both slab and cylindrical coordinates. The simulation results from the trans-neut module of the BOUT++ code and the TPSMBI code agree very well with each other. Different upwind schemes have been compared for handling the sharp gradient front region during the inward propagation of SMBI, for the sake of code stability. The influence of the WENO3 (weighted essentially non-oscillatory) and third-order upwind schemes on the benchmark results is also discussed. - Highlights: • A 1D model of SMBI has been developed. • Benchmarks of the BOUT++ and TPSMBI codes have been completed for the first time. • The influence of the WENO3 and third-order upwind schemes on the benchmark results is also discussed.

  2. Coding and decoding for code division multiple user communication systems

    Science.gov (United States)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.
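The paper's algorithm exploits each user's distinctive signal pattern to separate it from the composite. As a simplified stand-in, the sketch below separates code-division users by correlating the composite with each user's orthogonal Walsh code; orthogonal codes are an assumption made here for brevity, whereas the paper treats frequency-hopped patterns.

```python
import numpy as np

def walsh(n):
    """n x n Walsh-Hadamard matrix (n a power of two); rows are orthogonal codes."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

codes = walsh(8)                       # one spreading code per user
bits = np.array([1, -1, -1, 1, 1, 1, -1, 1.0])  # one data bit per user
composite = bits @ codes               # channel: superposition of all 8 users

# receiver: correlate the composite with each user's code and slice
recovered = np.sign(composite @ codes.T / 8)
print(recovered)
```

Because the rows are mutually orthogonal, each correlation cancels the other users exactly; with non-orthogonal patterns (as in the paper) the residual interference must be controlled instead.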

  3. Epigenetic codes programming class switch recombination

    Directory of Open Access Journals (Sweden)

    Bharat eVaidyanathan

    2015-09-01

    Full Text Available Class switch recombination imparts B cells with a fitness-associated adaptive advantage during a humoral immune response by using a precision-tailored DNA excision and ligation process to swap the default constant region gene of the antibody with a new one that has unique effector functions. This secondary diversification of the antibody repertoire is a hallmark of the adaptability of B cells when confronted with environmental and pathogenic challenges. Given that the nucleotide sequence of genes during class switching remains unchanged (genetic constraints), it is logical and necessary, therefore, to integrate the adaptability of B cells to an epigenetic state, which is dynamic and can be heritably modulated before, after or even during an antibody-dependent immune response. Epigenetic regulation encompasses heritable changes that affect function (phenotype) without altering the sequence information embedded in a gene, and includes histone, DNA and RNA modifications. Here, we review current literature on how B cells use an epigenetic code language as a means to ensure antibody plasticity in light of pathogenic insults.

  4. Control code for laboratory adaptive optics teaching system

    Science.gov (United States)

    Jin, Moonseob; Luder, Ryan; Sanchez, Lucas; Hart, Michael

    2017-09-01

    By sensing and compensating wavefront aberration, adaptive optics (AO) systems have proven themselves crucial in large astronomical telescopes, retinal imaging, and holographic coherent imaging. Commercial AO systems for laboratory use are now available in the market. One such is the ThorLabs AO kit built around a Boston Micromachines deformable mirror. However, there are limitations in applying these systems to research and pedagogical projects since the software is written with limited flexibility. In this paper, we describe a MATLAB-based software suite to interface with the ThorLabs AO kit by using the MATLAB Engine API and Visual Studio. The software is designed to offer complete access to the wavefront sensor data, through the various levels of processing, to the command signals to the deformable mirror and fast steering mirror. In this way, through a MATLAB GUI, an operator can experiment with every aspect of the AO system's functioning. This is particularly valuable for tests of new control algorithms as well as to support student engagement in an academic environment. We plan to make the code freely available to the community.

  5. Efficacy analysis of LDPC coded APSK modulated differential space-time-frequency coded for wireless body area network using MB-pulsed OFDM UWB technology.

    Science.gov (United States)

    Manimegalai, C T; Gauni, Sabitha; Kalimuthu, K

    2017-12-04

    Wireless body area network (WBAN) is a breakthrough technology in healthcare areas such as hospitals and telemedicine. The human body is a complex mixture of different tissues, and the propagation of electromagnetic signals is expected to differ in each of these tissues. This forms the basis for the WBAN, which is different from other environments. In this paper, knowledge of the Ultra Wide Band (UWB) channel is exploited in the WBAN (IEEE 802.15.6) system. Measurements of parameters in the frequency range 3.1-10.6 GHz are taken. The proposed system transmits data at up to 480 Mbps by using LDPC-coded APSK-modulated differential space-time-frequency coded MB-OFDM to increase throughput and power efficiency.

  6. Adaptation of OCA-P, a probabilistic fracture-mechanics code, to a personal computer

    International Nuclear Information System (INIS)

    Ball, D.G.; Cheverton, R.D.

    1985-01-01

    The OCA-P probabilistic fracture-mechanics code can now be executed on a personal computer with 512 kilobytes of memory, a math coprocessor, and a hard disk. A user's guide for this particular adaptation has been prepared, and additional importance-sampling techniques for OCA-P have been developed that allow sampling of only the tails of selected distributions. Features have also been added to OCA-P that permit RTNDT to be used as an "independent" variable in the calculation of P
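Tail-only importance sampling of the kind added to OCA-P can be sketched for a generic small tail probability: sample from a distribution shifted into the tail and reweight by the likelihood ratio. The normal model, threshold and sample count below are illustrative, unrelated to OCA-P's actual distributions.

```python
import numpy as np

def tail_probability_is(t, n=100_000, seed=0):
    """Estimate P(X > t) for standard normal X by mean-shifted importance sampling."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=t, scale=1.0, size=n)     # proposal centred on the tail
    # likelihood ratio phi(x) / phi(x - t) for the mean shift
    w = np.exp(-x ** 2 / 2) / np.exp(-(x - t) ** 2 / 2)
    return np.mean(w * (x > t))

est = tail_probability_is(4.0)
print(f"P(X > 4) ~ {est:.3e}")   # exact value is about 3.17e-5
```

Plain Monte Carlo would need on the order of 10^7 samples to see even a few tail events here; the shifted proposal places roughly half its samples past the threshold, which is why tail sampling pays off for rare failure probabilities.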

  7. The Analysis and the Performance Simulation of the Capacity of Bit-interleaved Coded Modulation System

    Directory of Open Access Journals (Sweden)

    Hongwei ZHAO

    2014-09-01

    Full Text Available In this paper, the capacity of the BICM system over AWGN channels is first analyzed; the curves of BICM capacity versus SNR are also obtained by Monte-Carlo simulations and compared with the curves of the CM capacity. Based on the analysis results, we simulate the error performance of the BICM system with LDPC codes. Simulation results show that the capacity of the BICM system with LDPC codes is strongly influenced by the mapping method. Given a certain modulation method, the BICM system can obtain about 2-3 dB of gain with Gray mapping compared with non-Gray mapping. Meanwhile, the simulation results also demonstrate the correctness of the theoretical analysis.
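A Monte-Carlo estimate of BICM capacity of the kind described can be sketched for Gray-mapped QPSK, where the two bit channels decouple into independent BPSK channels, so the BICM capacity is twice the per-bit mutual information. The SNR points and sample count are illustrative.

```python
import numpy as np

def bicm_capacity_qpsk(snr_db, n=200_000, seed=0):
    """Monte-Carlo BICM capacity (bits/symbol) of Gray-mapped QPSK over AWGN."""
    rng = np.random.default_rng(seed)
    n0 = 10 ** (-snr_db / 10)              # noise PSD for unit symbol energy Es = 1
    a = 1 / np.sqrt(2)                     # per-dimension QPSK amplitude
    b = rng.choice([-1.0, 1.0], size=n)    # transmitted bit, mapped to a sign
    y = a * b + rng.normal(scale=np.sqrt(n0 / 2), size=n)
    llr = 4 * a * y / n0                   # exact BPSK log-likelihood ratio
    # I(b; y) = 1 - E[log2(1 + exp(-b * LLR))], estimated by the sample mean
    i_bit = 1 - np.mean(np.logaddexp(0.0, -b * llr)) / np.log(2)
    return 2 * i_bit                       # two Gray-labelled bit channels

for snr_db in (0, 5, 10):
    print(snr_db, "dB:", round(bicm_capacity_qpsk(snr_db), 3))
```

For QPSK with Gray labelling the BICM and CM capacities coincide; the mapping-dependent gap the paper measures appears for larger constellations such as 16QAM.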

  8. Coded aperture imaging: the modulation transfer function for uniformly redundant arrays

    International Nuclear Information System (INIS)

    Fenimore, E.E.

    1980-01-01

    Coded aperture imaging uses many pinholes to increase the SNR for intrinsically weak sources when the radiation can be neither reflected nor refracted. Effectively, the signal is multiplexed onto an image and then decoded, often by a computer, to form a reconstructed image. We derive the modulation transfer function (MTF) of such a system employing uniformly redundant arrays (URA). We show that the MTF of a URA system is virtually the same as the MTF of an individual pinhole regardless of the shape or size of the pinhole. Thus, only the location of the pinholes is important for optimum multiplexing and decoding. The shape and size of the pinholes can then be selected based on other criteria. For example, one can generate self-supporting patterns, useful for energies typically encountered in the imaging of laser-driven compressions or in soft x-ray astronomy. Such patterns contain holes that are all the same size, easing the etching or plating fabrication efforts for the apertures. A new reconstruction method is introduced called delta decoding. It improves the resolution capabilities of a coded aperture system by mitigating a blur often introduced during the reconstruction step
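The flat-sidelobe correlation property underlying URA decoding can be illustrated in one dimension with a maximal-length (m-)sequence from a small linear-feedback shift register: correlating the {0,1} aperture pattern with a ±1 decoding pattern yields a delta function with zero sidelobes, so a multiplexed recording decodes back to a point source. The LFSR taps and length are illustrative.

```python
import numpy as np

def m_sequence(taps=(3, 1), length=7):
    """0/1 m-sequence from a 3-bit LFSR; taps (3, 1) realise x^3 + x + 1."""
    state = [1, 0, 0]
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(out)

a = m_sequence()                 # aperture pattern (1 = pinhole open)
g = 2 * a - 1                    # +-1 decoding pattern
# cyclic cross-correlation of aperture and decoding patterns
corr = np.array([np.sum(np.roll(a, k) * g) for k in range(len(a))])
print("pattern:", a, " correlation:", corr)
```

The single peak with identically zero sidelobes is the 1D analogue of the URA property that lets many pinholes be multiplexed without correlation artifacts.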

  9. Glucose modulates food-related salience coding of midbrain neurons in humans.

    Science.gov (United States)

    Ulrich, Martin; Endres, Felix; Kölle, Markus; Adolph, Oliver; Widenhorn-Müller, Katharina; Grön, Georg

    2016-12-01

    Although early rat studies demonstrated that administration of glucose diminishes dopaminergic midbrain activity, evidence in humans has been lacking so far. In the present functional magnetic resonance imaging study, glucose was intravenously infused in healthy human male participants while seeing images depicting low-caloric food (LC), high-caloric food (HC), and non-food (NF) during a food/NF discrimination task. Analysis of brain activation focused on the ventral tegmental area (VTA) as the origin of the mesolimbic system involved in salience coding. Under unmodulated fasting baseline conditions, VTA activation was greater during HC compared with LC food cues. Subsequent to infusion of glucose, this difference in VTA activation as a function of caloric load leveled off and even reversed. In a control group not receiving glucose, VTA activation during HC relative to LC cues remained stable throughout the course of the experiment. Similar treatment-specific patterns of brain activation were observed for the hypothalamus. The present findings show for the first time in humans that glucose infusion modulates salience coding mediated by the VTA. Hum Brain Mapp 37:4376-4384, 2016. © 2016 Wiley Periodicals, Inc.

  10. Assessing the Role of Place and Timing Cues in Coding Frequency and Amplitude Modulation as a Function of Age.

    Science.gov (United States)

    Whiteford, Kelly L; Kreft, Heather A; Oxenham, Andrew J

    2017-08-01

    Natural sounds can be characterized by their fluctuations in amplitude and frequency. Ageing may affect sensitivity to some forms of fluctuations more than others. The present study used individual differences across a wide age range (20-79 years) to test the hypothesis that slow-rate, low-carrier frequency modulation (FM) is coded by phase-locked auditory-nerve responses to temporal fine structure (TFS), whereas fast-rate FM is coded via rate-place (tonotopic) cues, based on amplitude modulation (AM) of the temporal envelope after cochlear filtering. Using a low (500 Hz) carrier frequency, diotic FM and AM detection thresholds were measured at slow (1 Hz) and fast (20 Hz) rates in 85 listeners. Frequency selectivity and TFS coding were assessed using forward masking patterns and interaural phase disparity tasks (slow dichotic FM), respectively. Comparable interaural level disparity tasks (slow and fast dichotic AM and fast dichotic FM) were measured to control for effects of binaural processing not specifically related to TFS coding. Thresholds in FM and AM tasks were correlated, even across tasks thought to use separate peripheral codes. Age was correlated with slow and fast FM thresholds in both diotic and dichotic conditions. The relationship between age and AM thresholds was generally not significant. Once accounting for AM sensitivity, only diotic slow-rate FM thresholds remained significantly correlated with age. Overall, results indicate stronger effects of age on FM than AM. However, because of similar effects for both slow and fast FM when not accounting for AM sensitivity, the effects cannot be unambiguously ascribed to TFS coding.

  11. Pain adaptability in individuals with chronic musculoskeletal pain is not associated with conditioned pain modulation

    DEFF Research Database (Denmark)

    Wan, Dawn Wong Lit; Arendt-Nielsen, Lars; Wang, Kelun

    2018-01-01

    (MSK). CPTs at 2°C and 7°C were used to assess the status of pain adaptability in participants with either chronic non-specific low back pain or knee osteoarthritis. The participants' potency of conditioned pain modulation (CPM) and local inhibition were measured. The strengths of pain adaptability at both CPTs were highly correlated. PA and PNA did not differ in their demographics, pain thresholds from thermal and pressure stimuli, or potency of local inhibition or CPM. PA reached their maximum pain faster than PNA (t41=-2.76, p... days whereas PNA did not (F (6,246) = 3.01, p = 0.01). The dichotomy of pain adaptability exists in MSK patients. Consistent with the healthy human study, the strength of pain adaptability and potency of CPM are not related. Pain adaptability could be another form of endogenous pain inhibition which...

  12. Image sensor system with bio-inspired efficient coding and adaptation.

    Science.gov (United States)

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
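Two of the three strategies (the logarithmic transform and the local average subtraction) can be sketched in software; a 3×3 box mean stands in here for the resistive network, and all parameters are illustrative rather than the values used on the FPGA.

```python
import numpy as np

def log_transform(img, eps=1.0):
    """Compress a wide intensity range, as the logarithmic APS response does."""
    return np.log(img + eps)

def local_average_subtract(img):
    """Subtract the 3x3 local mean (edge-padded), mimicking the resistive network."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = img[i, j] - padded[i:i + 3, j:j + 3].mean()
    return out

# a scene spanning a 1000:1 illumination range is compressed before contrast coding
scene = np.outer(np.ones(4), np.array([1.0, 10.0, 100.0, 1000.0]))
coded = local_average_subtract(log_transform(scene))
print(coded.round(2))
```

After the two stages the output encodes local contrast on a narrow range, which is what lets a low-precision readout cope with large illumination changes.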

  13. Effect of Interleaved FEC Code on Wavelet Based MC-CDMA System with Alamouti STBC in Different Modulation Schemes

    OpenAIRE

    Shams, Rifat Ara; Kabir, M. Hasnat; Ullah, Sheikh Enayet

    2012-01-01

    In this paper, the impact of Forward Error Correction (FEC) code namely Trellis code with interleaver on the performance of wavelet based MC-CDMA wireless communication system with the implementation of Alamouti antenna diversity scheme has been investigated in terms of Bit Error Rate (BER) as a function of Signal-to-Noise Ratio (SNR) per bit. Simulation of the system under proposed study has been done in M-ary modulation schemes (MPSK, MQAM and DPSK) over AWGN and Rayleigh fading channel inc...

  14. Ultrasound imaging using coded signals

    DEFF Research Database (Denmark)

    Misaridis, Athanasios

    Modulated (or coded) excitation signals can potentially improve the quality and increase the frame rate in medical ultrasound scanners. The aim of this dissertation is to investigate systematically the applicability of modulated signals in medical ultrasound imaging and to suggest appropriate methods for coded imaging, with the goal of making better anatomic and flow images and three-dimensional images. In the first stage, it investigates techniques for doing high-resolution coded imaging with improved signal-to-noise ratio compared to conventional imaging. Subsequently it investigates how coded excitation can be used for increasing the frame rate. The work includes both simulated results using Field II, and experimental results based on measurements on phantoms as well as clinical images. Initially a mathematical foundation of signal modulation is given. Pulse compression based...
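The pulse-compression principle behind coded excitation can be sketched with a linear FM chirp and a matched filter: a long low-amplitude pulse is compressed on receive into a short spike at the echo delay. The sampling rate, bandwidth and duration below are illustrative, not the dissertation's values.

```python
import numpy as np

fs = 40e6                          # sample rate (Hz)
t = np.arange(0, 20e-6, 1 / fs)    # 20 us transmit pulse -> 800 samples
f0, f1 = 2e6, 8e6                  # swept band of the chirp (Hz)
k = (f1 - f0) / t[-1]              # linear FM sweep rate
chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

# received echo: the chirp delayed by 300 samples inside a quiet trace
echo = np.concatenate([np.zeros(300), chirp, np.zeros(300)])

# matched filter = correlation with the transmitted waveform
matched = np.correlate(echo, chirp, mode="valid")

peak = int(np.argmax(np.abs(matched)))
print("echo delay recovered at sample", peak)
```

The compressed peak carries the full pulse energy while axial resolution is set by the swept bandwidth, which is how coded excitation raises SNR without raising peak pressure.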

  15. CAPACITY BUILDING FOR CLIMATE CHANGE ADAPTATION: MODULES FOR AGRICULTURAL EXTENSION CURRICULUM DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    B.O. Ogunbameru

    2013-02-01

    Full Text Available Basically, climate change refers to any change in climate over time, generally caused by natural variability and/or human activities. It has a devastating impact, particularly on agriculture and, by extrapolation, on farmers and the national economy. The frontline agricultural extension workers are expected to be among the principal stakeholders teaching farmers how to cope with climate change. Consequently, there is a need to develop an appropriate teaching package for the training of frontline agricultural extension workers, based on the myriad adaptation strategies and practices available in the literature. This paper synthesizes the rationale for capacity building in climate change and the adaptation or coping strategies. The modules (train-the-trainer) for teaching agricultural extension workers and farmers are documented in the paper.

  16. Development and validation of a fuel performance analysis code

    International Nuclear Information System (INIS)

    Majalee, Aaditya V.; Chaturvedi, S.

    2015-01-01

    CAD has been developing a computer code, 'FRAVIZ', for calculating the steady-state thermomechanical behaviour of nuclear reactor fuel rods. It contains four major modules: a Thermal module, a Fission Gas Release module, a Material Properties module and a Mechanical module. All four modules are coupled, and feedback from each module is passed to the others to obtain a self-consistent evolution in time. The computer code has been checked against two FUMEX benchmarks. Modelling fuel performance in the Advanced Heavy Water Reactor would require additional inputs related to the fuel and some modifications to the code. (author)

  17. Adaptation and selective information transmission in the cricket auditory neuron AN2.

    Directory of Open Access Journals (Sweden)

    Klaus Wimmer

    Full Text Available Sensory systems adapt their neural code to changes in the sensory environment, often on multiple time scales. Here, we report a new form of adaptation in a first-order auditory interneuron (AN2) of crickets. We characterize the response of the AN2 neuron to amplitude-modulated sound stimuli and find that adaptation shifts the stimulus-response curves toward higher stimulus intensities, with a time constant of 1.5 s for adaptation and recovery. The spike responses were thus reduced for low-intensity sounds. We then address the question whether adaptation leads to an improvement of the signal's representation and compare the experimental results with the predictions of two competing hypotheses: infomax, which predicts that information conveyed about the entire signal range should be maximized, and selective coding, which predicts that "foreground" signals should be enhanced while "background" signals should be selectively suppressed. We test how adaptation changes the input-response curve when presenting signals with two or three peaks in their amplitude distributions, for which selective coding and infomax predict conflicting changes. By means of Bayesian data analysis, we quantify the shifts of the measured response curves and also find a slight reduction of their slopes. These decreases in slopes are smaller, and the absolute response thresholds are higher than those predicted by infomax. Most remarkably, and in contrast to the infomax principle, adaptation actually reduces the amount of encoded information when considering the whole range of input signals. The response curve changes are also not consistent with the selective coding hypothesis, because the amount of information conveyed about the loudest part of the signal does not increase as predicted but remains nearly constant. Less information is transmitted about signals with lower intensity.

  18. ETR/ITER systems code

    Energy Technology Data Exchange (ETDEWEB)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L. (ed.)

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  19. ETR/ITER systems code

    International Nuclear Information System (INIS)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs

  20. Research on verification and validation strategy of detonation fluid dynamics code of LAD2D

    Science.gov (United States)

    Wang, R. L.; Liang, X.; Liu, X. Z.

    2017-07-01

    Verification and validation (V&V) is an important approach to software quality assurance for codes in complex engineering applications, and a reasonable and efficient V&V strategy can achieve twice the result with half the effort. This article introduces LAD2D (Lagrangian adaptive hydrodynamics code in 2D space), self-developed detonation CFD software with an elastic-plastic structure model. The V&V strategy of this detonation CFD code is presented on the foundation of the V&V methodology for scientific software. The basic framework of module verification and function validation is proposed, composing the detonation fluid dynamics model V&V strategy of LAD2D.

  1. Novel security enhancement technique against eavesdropper for OCDMA system using 2-D modulation format with code switching scheme

    Science.gov (United States)

    Singh, Simranjit; Kaur, Ramandeep; Singh, Amanvir; Kaler, R. S.

    2015-03-01

In this paper, security of the spectrally encoded optical code division multiple access (OCDMA) system is enhanced by using a 2-D (orthogonal) modulation technique. This is an effective approach for simultaneously improving the system capacity and security. The results also show that the hybrid modulation technique is a better option for enhancing data confidentiality at higher data rates with minimum bandwidth utilization in a multiuser environment. Further, the performance of the proposed system is compared with current state-of-the-art OCDMA schemes.

  2. MOSRA-SRAC. Lattice calculation module of the modular code system for nuclear reactor analyses MOSRA

    International Nuclear Information System (INIS)

    Okumura, Keisuke

    2015-10-01

MOSRA-SRAC is the lattice calculation module of the Modular code System for nuclear Reactor Analyses (MOSRA). This module performs neutron transport calculations for various types of fuel elements, including those of existing light water reactors, research reactors, etc., based on the collision probability method with a set of 200-group cross-sections generated from the Japanese Evaluated Nuclear Data Library JENDL-4.0. It also has a function for isotope generation and depletion calculations covering up to 234 nuclides in each fuel material in the lattice. In these ways, MOSRA-SRAC prepares the burn-up-dependent effective microscopic and macroscopic cross-section data to be used in core calculations. A CD-ROM is attached as an appendix. (J.P.N.)

  3. Convolutional coding techniques for data protection

    Science.gov (United States)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
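As a concrete illustration of the fundamentals discussed, here is a minimal rate-1/2 convolutional encoder, using the classic constraint-length-3 generators (7 and 5 in octal). This is a generic textbook code, not one taken from the report.

```python
# Minimal rate-1/2 convolutional encoder sketch (constraint length 3,
# generator polynomials 7 and 5 in octal); illustrative only.

def conv_encode(bits, g1=0b111, g2=0b101):
    """Encode a list of bits; emits two output bits per input bit."""
    state = 0                      # two-bit shift register
    out = []
    for b in bits:
        reg = (b << 2) | state     # current input bit + register contents
        out.append(bin(reg & g1).count("1") % 2)  # parity w.r.t. g1
        out.append(bin(reg & g2).count("1") % 2)  # parity w.r.t. g2
        state = reg >> 1           # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```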

  4. Code-modulated visual evoked potentials using fast stimulus presentation and spatiotemporal beamformer decoding.

    Science.gov (United States)

    Wittevrongel, Benjamin; Van Wolputte, Elia; Van Hulle, Marc M

    2017-11-08

When encoding visual targets using various lagged versions of a pseudorandom binary sequence of luminance changes, the EEG signal recorded over the viewer's occipital pole exhibits so-called code-modulated visual evoked potentials (cVEPs), the phase lags of which can be tied to these targets. The cVEP paradigm has enjoyed interest in the brain-computer interfacing (BCI) community for its reported high information transfer rates (ITR, in bits/min). In this study, we introduce a novel decoding algorithm based on spatiotemporal beamforming and show that this algorithm is able to accurately identify the gazed target. Especially for a small number of repetitions of the coding sequence, our beamforming approach significantly outperforms an optimised support vector machine (SVM)-based classifier, which is considered state-of-the-art in cVEP-based BCI. In addition to the traditional 60 Hz stimulus presentation rate for the coding sequence, we also explore the 120 Hz rate, and show that the latter enables faster communication, with a maximal median ITR of 172.87 bits/min. Finally, we also report on a transition effect in the EEG signal following the onset of the stimulus sequence, and recommend excluding the first 150 ms of the trials from decoding when relying on a single presentation of the stimulus sequence.
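ITR figures like the 172.87 bits/min quoted above are conventionally computed with Wolpaw's formula from the number of targets, the classification accuracy, and the time per selection. A sketch in Python; the 32-target, 1.7 s example values are assumptions for illustration, not the paper's actual configuration.

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_s):
    """Wolpaw information transfer rate in bits/min (the paper's exact
    ITR definition may differ; this is the common convention)."""
    n, p = n_targets, accuracy
    bits = math.log2(n)            # bits per selection at perfect accuracy
    if 0 < p < 1:                  # penalty for imperfect classification
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_s   # selections per minute * bits each

# Hypothetical example: 32 targets, perfect accuracy, 1.7 s per selection
print(round(itr_bits_per_min(32, 1.0, 1.7), 1))  # -> 176.5
```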

  5. Validation of CESAR Thermal-hydraulic Module of ASTEC V1.2 Code on BETHSY Experiments

    Science.gov (United States)

    Tregoures, Nicolas; Bandini, Giacomino; Foucher, Laurent; Fleurot, Joëlle; Meloni, Paride

The ASTEC V1 system code is being jointly developed by the French Institut de Radioprotection et Sûreté Nucléaire (IRSN) and the German Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) to address severe accident sequences in a nuclear power plant. Thermal-hydraulics in the primary and secondary systems is addressed by the CESAR module. The aim of this paper is to present the validation of the CESAR module of the ASTEC V1.2 version on the basis of well-instrumented and qualified integral experiments carried out in the BETHSY facility (CEA, France), which simulates a French 900 MWe PWR. Three tests have been thoroughly investigated with CESAR: the loss-of-coolant 9.1b test (OECD ISP N° 27), the loss-of-feedwater 5.2e test, and the multiple steam generator tube rupture 4.3b test. In the present paper, the results of the code for the three analyzed tests are compared with the experimental data. The thermal-hydraulic behavior of the BETHSY facility during the transient phase is well reproduced by CESAR: the occurrence of major events and the time evolution of the main thermal-hydraulic parameters of both the primary and secondary circuits are well predicted.

  6. List Decoding of Algebraic Codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde

    We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods to a problem solvable by module...... Hermitian codes using Guruswami–Sudan or Power decoding faster than previously known, and we show how to Wu list decode binary Goppa codes....... to solve such using module minimisation, or using our new Demand–Driven algorithm which is also based on module minimisation. The decoding paradigms are all derived and analysed in a self-contained manner, often in new ways or examined in greater depth than previously. Among a number of new results, we...

  7. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart comparing the performance of several frame synchronizer algorithms against some good codes, and LDPC decoder tests at ESTL. Also reviewed are a study on Coding, Modulation, and Link Protocol (CMLP) and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed, and a chart summarizing the three proposed coding systems is presented.
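As background on what an LDPC decoder verifies: a vector c is a valid codeword exactly when its syndrome H·c (mod 2) is all-zero. A toy illustration with a small hypothetical parity-check matrix, not one of the workgroup's recommended codes.

```python
import numpy as np

# Toy illustration of the LDPC decoding invariant: a vector c is a valid
# codeword exactly when its syndrome H @ c (mod 2) is all-zero. H is a
# small hypothetical parity-check matrix, not a real space-link code.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(c):
    """Parity checks of c under H, computed over GF(2)."""
    return H.dot(c) % 2

codeword = np.array([1, 0, 1, 1, 1, 0])              # satisfies all checks
corrupted = codeword ^ np.array([1, 0, 0, 0, 0, 0])  # flip the first bit

print(syndrome(codeword), syndrome(corrupted))  # -> [0 0 0] [1 0 1]
```

A nonzero syndrome localizes which checks fail, which is the information an iterative LDPC decoder propagates to correct errors.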

  8. CVD-associated non-coding RNA, ANRIL, modulates expression of atherogenic pathways in VSMC

    International Nuclear Information System (INIS)

    Congrains, Ada; Kamide, Kei; Katsuya, Tomohiro; Yasuda, Osamu; Oguro, Ryousuke; Yamamoto, Koichi; Ohishi, Mitsuru; Rakugi, Hiromi

    2012-01-01

Highlights: ► ANRIL maps to the strongest susceptibility locus for cardiovascular disease. ► Silencing of ANRIL leads to altered expression of tissue remodeling-related genes. ► The effects of ANRIL on gene expression are splicing-variant specific. ► ANRIL affects progression of cardiovascular disease by regulating proliferation and apoptosis pathways. -- Abstract: ANRIL is a newly discovered non-coding RNA lying in the strongest genetic susceptibility locus for cardiovascular disease (CVD) in the chromosome 9p21 region. Genome-wide association studies have linked polymorphisms in this locus with CVD and several other major diseases such as diabetes and cancer. The role of this non-coding RNA in atherosclerosis progression is still poorly understood. In this study, we investigated the implication of ANRIL in the modulation of gene sets directly involved in atherosclerosis. We designed and tested siRNA sequences to selectively target two exons (exon 1 and exon 19) of the transcript and successfully knocked down expression of ANRIL in human aortic vascular smooth muscle cells (HuAoVSMC). We used a pathway-focused RT-PCR array to profile gene expression changes caused by ANRIL knockdown. Notably, the genes affected by each of the siRNAs were different, suggesting that different splicing variants of ANRIL might have distinct roles in cell physiology. Our results suggest that ANRIL splicing variants play a role in coordinating tissue remodeling by modulating the expression of genes involved in cell proliferation, apoptosis, extracellular matrix remodeling and the inflammatory response, ultimately impacting the risk of cardiovascular disease and other pathologies.

  9. A HYDROCHEMICAL HYBRID CODE FOR ASTROPHYSICAL PROBLEMS. I. CODE VERIFICATION AND BENCHMARKS FOR A PHOTON-DOMINATED REGION (PDR)

    International Nuclear Information System (INIS)

    Motoyama, Kazutaka; Morata, Oscar; Hasegawa, Tatsuhiko; Shang, Hsien; Krasnopolsky, Ruben

    2015-01-01

A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves nonequilibrium chemistry and the change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H2 are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is presented based on the PDR benchmark.

  10. Development of computer code in PNC, 3

    International Nuclear Information System (INIS)

    Ohtaki, Akira; Ohira, Hiroaki

    1990-01-01

Super-COPD, a code composed of integrated calculation modules, has been developed by improving COPD in order to evaluate various dynamics of LMFBR plants. The code incorporates all the models of COPD, together with their advanced versions, in a modular structure, making it possible to simulate the system dynamics of an LMFBR plant with any configuration of components. (author)

  11. Phase-coded microwave signal generation based on a single electro-optical modulator and its application in accurate distance measurement.

    Science.gov (United States)

    Zhang, Fangzheng; Ge, Xiaozhong; Gao, Bindong; Pan, Shilong

    2015-08-24

A novel scheme for photonic generation of a phase-coded microwave signal is proposed, and its application in one-dimensional distance measurement is demonstrated. The proposed signal generator has a simple and compact structure based on a single dual-polarization modulator. Besides, the generated phase-coded signal is stable and free from DC and low-frequency backgrounds. An experiment is carried out: a 2 Gb/s phase-coded signal at 20 GHz is successfully generated, and the recovered phase information agrees well with the input 13-bit Barker code. To further investigate the performance of the proposed signal generator, its application in one-dimensional distance measurement is demonstrated, with a measurement error of less than 1.7 centimeters within a measurement range of ~2 meters. The experimental results verify the feasibility of the proposed phase-coded microwave signal generator and provide strong evidence to support its practical applications.
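The 13-bit Barker code mentioned above is attractive for ranging because its aperiodic autocorrelation has a peak of 13 with sidelobes no larger than 1. A quick check in Python; the BPSK mapping of code bits to ±1 baseband amplitudes is the usual convention, assumed here.

```python
import numpy as np

# The 13-bit Barker code referenced in the abstract; BPSK phase coding maps
# code bits to carrier phases 0/pi, i.e. +1/-1 baseband amplitudes.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Aperiodic autocorrelation: a peak of 13 with all sidelobes at most 1 in
# magnitude, which is why Barker-coded pulses suit pulse-compression ranging.
acf = np.correlate(barker13, barker13, mode="full")
peak = int(acf.max())
sidelobe = int(np.abs(acf[np.arange(acf.size) != barker13.size - 1]).max())
print(peak, sidelobe)  # -> 13 1
```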

  12. Accident consequence assessment code development

    International Nuclear Information System (INIS)

    Homma, T.; Togawa, O.

    1991-01-01

This paper describes OSCAAR, a new computer code system developed for off-site consequence assessment of a potential nuclear accident. OSCAAR consists of several modules with modeling capabilities in atmospheric transport, foodchain transport, dosimetry, emergency response, and radiological health effects. The major modules of the consequence assessment code are described, highlighting the validation and verification of the models. (author)

  13. User Instructions for the Systems Assessment Capability, Rev. 0, Computer Codes Volume 2: Impact Modules

    International Nuclear Information System (INIS)

    Eslinger, Paul W.; Arimescu, Carmen; Kanyid, Beverly A.; Miley, Terri B.

    2001-01-01

One activity of the Department of Energy's Groundwater/Vadose Zone Integration Project is an assessment of cumulative impacts from Hanford Site wastes on the subsurface environment and the Columbia River. Through the application of a system assessment capability (SAC), decisions for each cleanup and disposal action will be able to take into account the composite effect of other cleanup and disposal actions. The SAC has developed a suite of computer programs to simulate the migration of contaminants (analytes) present on the Hanford Site and to assess the potential impacts of the analytes, including dose to humans, socio-cultural impacts, economic impacts, and ecological impacts. The general approach to handling uncertainty in the SAC computer codes is a Monte Carlo approach. Conceptually, one generates a value for every stochastic parameter in the code (the entire sequence of modules from inventory through transport and impacts) and then executes the simulation, obtaining an output value, or result. This document provides user instructions for the SAC codes that generate human, ecological, economic, and cultural impacts
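The Monte Carlo approach described above (draw a value for every stochastic parameter, execute the simulation, record the result) can be sketched as follows. The `toy_transport_model` and its input distributions are hypothetical placeholders, not SAC modules.

```python
import random

# Schematic Monte Carlo loop in the style described: sample each stochastic
# parameter, run the (stand-in) simulation chain, keep the output value.

def toy_transport_model(release_rate, travel_time):
    # Stand-in for the inventory -> transport -> impact module sequence
    return release_rate * travel_time

random.seed(1)                     # reproducible realizations
results = []
for _ in range(1000):
    release_rate = random.lognormvariate(0.0, 0.5)   # sampled input 1
    travel_time = random.uniform(10.0, 30.0)         # sampled input 2
    results.append(toy_transport_model(release_rate, travel_time))

results.sort()                     # the output distribution, for percentiles
print(round(results[len(results) // 2], 2))   # median result
```

In a realization-based assessment like this, percentiles of `results` (rather than a single deterministic run) characterize the uncertainty in the predicted impact.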

  14. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.; Bucholz, J.A.; Hermann, O.W.; Fraley, S.K. [Oak Ridge National Lab., TN (United States)

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.

  15. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    International Nuclear Information System (INIS)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.; Bucholz, J.A.; Hermann, O.W.; Fraley, S.K.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries

  16. CRISPR/Cas and Cmr modules, mobility and evolution of adaptive immune systems

    DEFF Research Database (Denmark)

    Shah, Shiraz Ali; Garrett, Roger Antony

    2011-01-01

    CRISPR/Cas and CRISPR/Cmr immune machineries of archaea and bacteria provide an adaptive and effective defence mechanism directed specifically against viruses and plasmids. Present data suggest that both CRISPR/Cas and Cmr modules can behave like integral genetic elements. They tend to be located...... in the more variable regions of chromosomes and are displaced by genome shuffling mechanisms including transposition. CRISPR loci may be broken up and dispersed in chromosomes by transposons with the potential for creating genetic novelty. Both CRISPR/Cas and Cmr modules appear to exchange readily between...... the significant barriers imposed by their differing conjugative, transcriptional and translational mechanisms. There are parallels between the CRISPR crRNAs and eukaryal siRNAs, most notably to germ cell piRNAs which are directed, with the help of effector proteins, to silence or destroy transposons...

  17. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F9--F16 -- Volume 2, Part 2, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E. [Oak Ridge National Lab., TN (United States)

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR.

  18. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F9--F16 -- Volume 2, Part 2, Revision 4

    International Nuclear Information System (INIS)

    West, J.T.; Hoffman, T.J.; Emmett, M.B.; Childs, K.W.; Petrie, L.M.; Landers, N.F.; Bryan, C.B.; Giles, G.E.

    1995-04-01

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation, Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries. This volume discusses the following functional modules: MORSE-SGC; HEATING 7.2; KENO V.a; JUNEBUG-II; HEATPLOT-S; REGPLOT 6; PLORIGEN; and OCULAR

  19. Second-order statistics of colour codes modulate transformations that effectuate varying degrees of scene invariance and illumination invariance.

    Science.gov (United States)

    Mausfeld, Rainer; Andres, Johannes

    2002-01-01

    We argue, from an ethology-inspired perspective, that the internal concepts 'surface colours' and 'illumination colours' are part of the data format of two different representational primitives. Thus, the internal concept of 'colour' is not a unitary one but rather refers to two different types of 'data structure', each with its own proprietary types of parameters and relations. The relation of these representational structures is modulated by a class of parameterised transformations whose effects are mirrored in the idealised computational achievements of illumination invariance of colour codes, on the one hand, and scene invariance, on the other hand. Because the same characteristics of a light array reaching the eye can be physically produced in many different ways, the visual system, then, has to make an 'inference' whether a chromatic deviation of the space-averaged colour codes from the neutral point is due to a 'non-normal', ie chromatic, illumination or due to an imbalanced spectral reflectance composition. We provide evidence that the visual system uses second-order statistics of chromatic codes of a single view of a scene in order to modulate corresponding transformations. In our experiments we used centre surround configurations with inhomogeneous surrounds given by a random structure of overlapping circles, referred to as Seurat configurations. Each family of surrounds has a fixed space-average of colour codes, but differs with respect to the covariance matrix of colour codes of pixels that defines the chromatic variance along some chromatic axis and the covariance between luminance and chromatic channels. We found that dominant wavelengths of red-green equilibrium settings of the infield exhibited a stable and strong dependence on the chromatic variance of the surround. High variances resulted in a tendency towards 'scene invariance', low variances in a tendency towards 'illumination invariance' of the infield.

  20. Motion-adaptive intraframe transform coding of video signals

    NARCIS (Netherlands)

    With, de P.H.N.

    1989-01-01

    Spatial transform coding has been widely applied for image compression because of its high coding efficiency. However, in many intraframe systems, in which every TV frame is independently processed, coding of moving objects in the case of interlaced input signals is not addressed. In this paper, we

  1. Genome-wide occupancy profile of mediator and the Srb8-11 module reveals interactions with coding regions

    DEFF Research Database (Denmark)

    Zhu, Xuefeng; Wirén, Marianna; Sinha, Indranil

    2006-01-01

    Mediator exists in a free form containing the Med12, Med13, CDK8, and CycC subunits (the Srb8-11 module) and a smaller form, which lacks these four subunits and associates with RNA polymerase II (Pol II), forming a holoenzyme. We use chromatin immunoprecipitation (ChIP) and DNA microarrays...... to investigate genome-wide localization of Mediator and the Srb8-11 module in fission yeast. Mediator and the Srb8-11 module display similar binding patterns, and interactions with promoters and upstream activating sequences correlate with increased transcription activity. Unexpectedly, Mediator also interacts...... with the downstream coding region of many genes. These interactions display a negative bias for positions closer to the 5' ends of open reading frames (ORFs) and appear functionally important, because downregulation of transcription in a temperature-sensitive med17 mutant strain correlates with increased Mediator...

  2. Alertness modulates conflict adaptation and feature integration in an opposite way.

    Directory of Open Access Journals (Sweden)

    Peiduo Liu

Previous studies show that the congruency sequence effect can result from both the conflict adaptation effect (CAE) and the feature integration effect, which can be observed as the repetition priming effect (RPE) and feature overlap effect (FOE) depending on different experimental conditions. Evidence from neuroimaging studies suggests that a close correlation exists between the neural mechanisms of alertness-related modulations and the congruency sequence effect. However, little is known about whether and how alertness mediates the congruency sequence effect. In Experiment 1, the Attentional Networks Test (ANT) and a modified flanker task were used to evaluate whether the alertness of the attentional functions had a correlation with the CAE and RPE. In Experiment 2, the ANT and another modified flanker task were used to investigate whether alertness of the attentional functions correlates with the CAE and FOE. In Experiment 1, through correlative analysis, we found a significant positive correlation between alertness and the CAE, and a negative correlation between alertness and the RPE. Moreover, a significant negative correlation existed between the CAE and RPE. In Experiment 2, we found a marginally significant negative correlation between the CAE and the RPE, but the correlations between alertness and the FOE, and between the CAE and the FOE, were not significant. These results suggest that alertness can modulate conflict adaptation and feature integration in an opposite way. Participants in the high alerting level group may tend to use a top-down cognitive processing strategy, whereas participants in the low alerting level group tend to use a bottom-up processing strategy.

  3. Alertness Modulates Conflict Adaptation and Feature Integration in an Opposite Way

    Science.gov (United States)

    Chen, Jia; Huang, Xiting; Chen, Antao

    2013-01-01

Previous studies show that the congruency sequence effect can result from both the conflict adaptation effect (CAE) and feature integration effect which can be observed as the repetition priming effect (RPE) and feature overlap effect (FOE) depending on different experimental conditions. Evidence from neuroimaging studies suggests that a close correlation exists between the neural mechanisms of alertness-related modulations and the congruency sequence effect. However, little is known about whether and how alertness mediates the congruency sequence effect. In Experiment 1, the Attentional Networks Test (ANT) and a modified flanker task were used to evaluate whether the alertness of the attentional functions had a correlation with the CAE and RPE. In Experiment 2, the ANT and another modified flanker task were used to investigate whether alertness of the attentional functions correlate with the CAE and FOE. In Experiment 1, through the correlative analysis, we found a significant positive correlation between alertness and the CAE, and a negative correlation between the alertness and the RPE. Moreover, a significant negative correlation existed between CAE and RPE. In Experiment 2, we found a marginally significant negative correlation between the CAE and the RPE, but the correlation between alertness and FOE, CAE and FOE was not significant. These results suggest that alertness can modulate conflict adaptation and feature integration in an opposite way. Participants at the high alerting level group may tend to use the top-down cognitive processing strategy, whereas participants at the low alerting level group tend to use the bottom-up processing strategy. PMID:24250824

  4. A Dual-Responsive Nanocomposite toward Climate-Adaptable Solar Modulation for Energy-Saving Smart Windows.

    Science.gov (United States)

    Lee, Heng Yeong; Cai, Yufeng; Bi, Shuguang; Liang, Yen Nan; Song, Yujie; Hu, Xiao Matthew

    2017-02-22

    In this work, a novel fully autonomous photothermotropic material made by hybridization of the poly(N-isopropylacrylamide) (PNIPAM) hydrogel and antimony-tin oxide (ATO) is presented. In this photothermotropic system, the near-infrared (NIR)-absorbing ATO acts as nanoheater to induce the optical switching of the hydrogel. Such a new passive smart window is characterized by excellent NIR shielding, a photothermally activated switching mechanism, enhanced response speed, and solar modulation ability. Systems with 0, 5, 10, and 15 atom % Sb-doped ATO in PNIPAM were investigated, and it was found that a PNIPAM/ATO nanocomposite is able to be photothermally activated. The 10 atom % Sb-doped PNIPAM/ATO exhibits the best response speed and solar modulation ability. Different film thicknesses and ATO contents will affect the response rate and solar modulation ability. Structural stability tests at 15 cycles under continuous exposure to solar irradiation at 1 sun intensity demonstrated the performance stability of such a photothermotropic system. We conclude that such a novel photothermotropic hybrid can be used as a new generation of autonomous passive smart windows for climate-adaptable solar modulation.

  5. An adaptive mode-driven spatiotemporal motion vector prediction for wavelet video coding

    Science.gov (United States)

    Zhao, Fan; Liu, Guizhong; Qi, Yong

    2010-07-01

The three-dimensional subband/wavelet codecs use 5/3 filters rather than Haar filters for motion-compensated temporal filtering (MCTF) to improve the coding gain. In order to curb the increased motion vector rate, an adaptive motion-mode-driven spatiotemporal motion vector prediction (AMDST-MVP) scheme is proposed. First, using the direction histograms of the four motion vector fields resulting from the initial spatial motion vector prediction (S-MVP), the motion mode of the current GOP is determined according to whether fast or complex motion exists in the GOP. The GOP-level MVP scheme is then chosen as either S-MVP or AMDST-MVP, where AMDST-MVP is the combination of S-MVP and temporal MVP (T-MVP). If the latter is adopted, the motion vector difference (MVD) between the neighboring MV fields and the MV that S-MVP yields for the current block is used to decide whether the MV of the co-located block in the previous frame is used to predict the current block. Experimental results show that AMDST-MVP not only improves coding efficiency but also reduces computational complexity.
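As a rough illustration of spatiotemporal MV prediction of the kind discussed, the sketch below forms a spatial median predictor from neighboring blocks and falls back to the temporal (co-located) predictor only when the two are close. The threshold and selection policy are illustrative assumptions, not the actual AMDST-MVP rule.

```python
# Illustrative switching between spatial and temporal motion-vector
# predictors; threshold and policy are hypothetical, not the paper's rule.

def spatial_median_mv(left, top, top_right):
    """Component-wise median of the three neighboring motion vectors."""
    return tuple(sorted(c)[1] for c in zip(left, top, top_right))

def select_predictor(left, top, top_right, colocated, thresh=2):
    """Use the co-located (temporal) MV only if it is near the spatial median."""
    s = spatial_median_mv(left, top, top_right)
    mvd = abs(s[0] - colocated[0]) + abs(s[1] - colocated[1])  # L1 distance
    return colocated if mvd <= thresh else s

print(select_predictor((2, 0), (3, 1), (2, 2), (3, 0)))  # -> (3, 0)
```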

  6. Development of the module inspection system for new standardized radiation monitoring modules

    International Nuclear Information System (INIS)

    Furukawa, Masami; Shimizu, Kazuaki; Hiruta, Toshihito; Mizugaki, Toshio; Ohi, Yoshihiro; Chida, Tooru.

    1994-10-01

    This report describes the module inspection system used for maintenance checks of monitoring modules conforming to the new monitoring standard, as well as the results of the verification of the modules. The module inspection system is a computer-based automatic measurement system. It can perform functional and characteristic examinations of the monitoring modules, calibration with a radiation source, and inspection reporting. In the verification of the monitoring modules, three major items were tested: adaptability to the new monitoring standard, the module functions, and the individual characteristics. All items met the new monitoring standard. (author)

  7. Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes

    Science.gov (United States)

    Su, Hualing; He, Yucheng; Zhou, Lin

    2017-08-01

    In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperation system combining rate-compatible low-density parity-check (RC-LDPC) codes with a multi-relay selection protocol is proposed. Traditional relay selection protocols consider only the channel state information (CSI) of the source-relay and relay-destination links. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. Furthermore, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.
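
    The extra use of inter-relay CSI can be illustrated with a toy selection routine. The bottleneck (min-SNR) criterion, the single threshold, and the data layout are simplifying assumptions for illustration, not the protocol's exact rules.

```python
def select_relays(sr_csi, rd_csi, rr_csi, threshold):
    # sr_csi[r]: source->relay SNR, rd_csi[r]: relay->destination SNR,
    # rr_csi[(r1, r2)]: inter-relay SNR (stored once per unordered pair).
    selected = set()
    for r in sr_csi:
        # Classic criterion: the bottleneck of the two-hop path.
        if min(sr_csi[r], rd_csi[r]) >= threshold:
            selected.add(r)
    # Extra chance of collaboration: a relay that hears an already
    # selected relay well enough can still forward towards the destination.
    for r in sr_csi:
        if r in selected:
            continue
        for s in selected:
            link = rr_csi.get((r, s), rr_csi.get((s, r), 0.0))
            if min(link, rd_csi[r]) >= threshold:
                selected.add(r)
                break
    return selected
```

    In the second pass a relay with a weak source link but a strong inter-relay link is recruited, which is exactly the additional cooperation opportunity the protocol exploits.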

  8. CVD-associated non-coding RNA, ANRIL, modulates expression of atherogenic pathways in VSMC

    Energy Technology Data Exchange (ETDEWEB)

    Congrains, Ada; Kamide, Kei [Department of Geriatric Medicine and Nephrology, Osaka University Graduate School of Medicine (Japan); Katsuya, Tomohiro [Clinical Gene Therapy, Osaka University Graduate School of Medicine (Japan); Yasuda, Osamu [Department of Cardiovascular Clinical and Translational Research, Kumamoto University Hospital (Japan); Oguro, Ryousuke; Yamamoto, Koichi [Department of Geriatric Medicine and Nephrology, Osaka University Graduate School of Medicine (Japan); Ohishi, Mitsuru, E-mail: ohishi@geriat.med.osaka-u.ac.jp [Department of Geriatric Medicine and Nephrology, Osaka University Graduate School of Medicine (Japan); Rakugi, Hiromi [Department of Geriatric Medicine and Nephrology, Osaka University Graduate School of Medicine (Japan)

    2012-03-23

    Highlights: • ANRIL maps to the strongest susceptibility locus for cardiovascular disease. • Silencing of ANRIL leads to altered expression of tissue remodeling-related genes. • The effects of ANRIL on gene expression are splicing-variant specific. • ANRIL affects progression of cardiovascular disease by regulating proliferation and apoptosis pathways. -- Abstract: ANRIL is a newly discovered non-coding RNA lying in the strongest genetic susceptibility locus for cardiovascular disease (CVD), the chromosome 9p21 region. Genome-wide association studies have linked polymorphisms in this locus with CVD and several other major diseases such as diabetes and cancer. The role of this non-coding RNA in atherosclerosis progression is still poorly understood. In this study, we investigated the implication of ANRIL in the modulation of gene sets directly involved in atherosclerosis. We designed and tested siRNA sequences to selectively target two exons (exon 1 and exon 19) of the transcript and successfully knocked down expression of ANRIL in human aortic vascular smooth muscle cells (HuAoVSMC). We used a pathway-focused RT-PCR array to profile gene expression changes caused by ANRIL knockdown. Notably, the genes affected by each of the siRNAs were different, suggesting that different splicing variants of ANRIL might have distinct roles in cell physiology. Our results suggest that ANRIL splicing variants play a role in coordinating tissue remodeling by modulating the expression of genes involved in cell proliferation, apoptosis, extracellular matrix remodeling and the inflammatory response, ultimately impacting the risk of cardiovascular disease and other pathologies.

  9. Radio over fiber link with adaptive order n‐QAM optical phase modulated OFDM and digital coherent detection

    DEFF Research Database (Denmark)

    Arlunno, Valeria; Borkowski, Robert; Guerrero Gonzalez, Neil

    2011-01-01

    Successful digital coherent demodulation of asynchronous optical phase-modulated adaptive-order QAM (4, 16, and 64) orthogonal frequency division multiplexing signals is achieved by a single reconfigurable digital receiver after 78 km of transmission over deployed optical fiber....

  10. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  11. LDPC-PPM Coding Scheme for Optical Communication

    Science.gov (United States)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (a^) of the coded data, which is then processed by a decoder to obtain an estimate (u^) of the original data.
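
    The inner modulation code, the mapping of coded bits to PPM symbols, is simple enough to sketch directly. In this illustrative sketch each group of log2(M) bits selects one of M pulse slots, with the symbol written as a one-hot slot vector (a representation choice of this example, not mandated by the scheme).

```python
def bits_to_ppm(bits, M):
    # Map groups of log2(M) coded bits to M-slot PPM symbols:
    # exactly one pulsed slot per symbol interval.
    k = M.bit_length() - 1
    assert 1 << k == M, "M must be a power of two"
    assert len(bits) % k == 0, "bit stream must fill whole symbols"
    symbols = []
    for i in range(0, len(bits), k):
        slot = int(''.join(str(b) for b in bits[i:i + k]), 2)
        symbols.append([1 if s == slot else 0 for s in range(M)])
    return symbols

def ppm_to_bits(symbols):
    # Inverse mapping: recover the slot index, then its binary form.
    M = len(symbols[0])
    k = M.bit_length() - 1
    bits = []
    for sym in symbols:
        slot = sym.index(1)
        bits.extend(int(b) for b in format(slot, f'0{k}b'))
    return bits
```

    In the full system the demodulator would produce soft slot likelihoods under the Poisson channel model rather than hard one-hot decisions; the hard mapping above only shows the bit-to-symbol correspondence.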

  12. Enhanced attention amplifies face adaptation.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby

    2011-08-15

    Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources.

  13. Multicarrier Spread Spectrum Modulation Schemes and Efficient FFT Algorithms for Cognitive Radio Systems

    Directory of Open Access Journals (Sweden)

    Mohandass Sundararajan

    2014-07-01

    Full Text Available Spread spectrum (SS) and multicarrier modulation (MCM) techniques are recognized as potential candidates for the design of underlay and interweave cognitive radio (CR) systems, respectively. Direct Sequence Code Division Multiple Access (DS-CDMA) is a spread spectrum technique generally used in underlay CR systems. Orthogonal Frequency Division Multiplexing (OFDM) is the basic MCM technique, primarily used in interweave CR systems. There are other MCM schemes derived from the OFDM technique, like Non-Contiguous OFDM, Spread OFDM, and OFDM-OQAM, which are more suitable for CR systems. Multicarrier Spread Spectrum Modulation (MCSSM) schemes like MC-CDMA, MC-DS-CDMA and SS-MC-CDMA combine DS-CDMA and OFDM techniques in order to improve the CR system performance and adaptability. This article gives a detailed survey of the various spread spectrum and multicarrier modulation schemes proposed in the literature. The Fast Fourier Transform (FFT) plays a vital role in all the multicarrier modulation techniques. The FFT part of the modem can be used for spectrum sensing. The performance of the FFT operator plays a crucial role in the overall performance of the system. Since the cognitive radio is an adaptive system, the FFT operator must also be adaptive to various input/output sizes, in order to save energy and execution time. This article also surveys the various efficient FFT algorithms proposed in the literature that are suitable for CR systems.
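
    Since every MCM scheme above is built on the FFT, a compact reference implementation helps make the discussion concrete. This is the textbook radix-2 decimation-in-time recursion, not one of the adaptive FFT operators surveyed in the article.

```python
import cmath

def fft(x):
    # Radix-2 decimation-in-time FFT; input length must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    assert n % 2 == 0, "length must be a power of two"
    even = fft(x[0::2])   # DFT of even-indexed samples
    odd = fft(x[1::2])    # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Twiddle factor combines the two half-size DFTs (butterfly).
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out
```

    An adaptive FFT operator for CR would select the transform size (and possibly a pruned variant) at run time; the recursion above is the common core such designs reduce to.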

  14. Interface requirements to couple thermal-hydraulic codes to severe accident codes: ATHLET-CD

    Energy Technology Data Exchange (ETDEWEB)

    Trambauer, K. [GRS, Garching (Germany)

    1997-07-01

    The system code ATHLET-CD is being developed by GRS in cooperation with IKE and IPSN. Its field of application comprises the whole spectrum of leaks and large breaks, as well as operational and abnormal transients for LWRs and VVERs. At present the analyses cover the in-vessel thermal-hydraulics, the early phases of core degradation, as well as fission product and aerosol release from the core and their transport in the reactor coolant system. The aim of the code development is to extend the simulation of core degradation up to failure of the reactor pressure vessel and to cover all physically reasonable accident sequences for western and eastern LWRs including RBMKs. The ATHLET-CD structure is highly modular in order to include a manifold spectrum of models and to offer an optimum basis for further development. The code consists of four general modules to describe the reactor coolant system thermal-hydraulics, the core degradation, the fission product core release, and fission product and aerosol transport. Each general module consists of several basic modules which correspond to the process to be simulated or to its specific purpose. Besides the code structure based on the physical modelling, the code follows four strictly separated steps during the course of a calculation: (1) input of structure, geometrical data, initial and boundary conditions, (2) initialization of derived quantities, (3) steady-state calculation or input of restart data, and (4) transient calculation. In this paper, the transient solution method is briefly presented and the coupling methods are discussed. Three aspects have to be considered for the coupling of different modules in one code system. The first is the conservation of mass and energy in the different subsystems, namely the fluid, the structures, and the fission products and aerosols. The second is the convergence of the numerical solution and the stability of the calculation. The third aspect relates to code performance and running time.

  15. Z₂-double cyclic codes

    OpenAIRE

    Borges, J.

    2014-01-01

    A binary linear code C is a Z2-double cyclic code if the set of coordinates can be partitioned into two subsets such that any simultaneous cyclic shift of the coordinates of both subsets leaves the code invariant. These codes can be identified as submodules of the Z2[x]-module Z2[x]/(x^r − 1) × Z2[x]/(x^s − 1). We determine the structure of Z2-double cyclic codes, giving the generator polynomials of these codes. The related polynomial representation of Z2-double cyclic codes and their duals, and the relation...
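
    The defining closure property is easy to state in code. A minimal sketch, assuming codewords are given as binary lists whose first r coordinates form one subset and whose remaining s coordinates form the other:

```python
def double_cyclic_shift(codeword, r, s):
    # Shift the first r coordinates and the last s coordinates
    # cyclically and simultaneously, as in the definition.
    a, b = codeword[:r], codeword[r:r + s]
    return [a[-1]] + a[:-1] + [b[-1]] + b[:-1]

def is_double_cyclic(code, r, s):
    # A set of codewords is Z2-double cyclic when it is closed
    # under the simultaneous shift of both blocks.
    code_set = {tuple(c) for c in code}
    return all(tuple(double_cyclic_shift(c, r, s)) in code_set
               for c in code)
```

    For r = s = 2 the code {0000, 1100, 0011, 1111} is closed under the double shift, while the single word 1000 is not, matching the definition above.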

  16. Unequal Protection of Video Streaming through Adaptive Modulation with a Trizone Buffer over Bluetooth Enhanced Data Rate

    Directory of Open Access Journals (Sweden)

    Razavi Rouzbeh

    2008-01-01

    Full Text Available Bluetooth enhanced data rate wireless channel can support higher-quality video streams compared to previous versions of Bluetooth. Packet loss when transmitting compressed data has an effect on the delivered video quality that endures over multiple frames. To reduce the impact of radio frequency noise and interference, this paper proposes adaptive modulation based on content type at the video frame level and content importance at the macroblock level. Because the bit rate of protected data is reduced, the paper proposes buffer management to reduce the risk of buffer overflow. A trizone buffer is introduced, with a varying unequal protection policy in each zone. Application of this policy together with adaptive modulation results in up to 4 dB improvement in objective video quality compared to fixed rate scheme for an additive white Gaussian noise channel and around 10 dB for a Gilbert-Elliott channel. The paper also reports a consistent improvement in video quality over a scheme that adapts to channel conditions by varying the data rate without accounting for the video frame packet type or buffer congestion.

  17. Unequal Protection of Video Streaming through Adaptive Modulation with a Trizone Buffer over Bluetooth Enhanced Data Rate

    Directory of Open Access Journals (Sweden)

    Rouzbeh Razavi

    2007-12-01

    Full Text Available Bluetooth enhanced data rate wireless channel can support higher-quality video streams compared to previous versions of Bluetooth. Packet loss when transmitting compressed data has an effect on the delivered video quality that endures over multiple frames. To reduce the impact of radio frequency noise and interference, this paper proposes adaptive modulation based on content type at the video frame level and content importance at the macroblock level. Because the bit rate of protected data is reduced, the paper proposes buffer management to reduce the risk of buffer overflow. A trizone buffer is introduced, with a varying unequal protection policy in each zone. Application of this policy together with adaptive modulation results in up to 4 dB improvement in objective video quality compared to fixed rate scheme for an additive white Gaussian noise channel and around 10 dB for a Gilbert-Elliott channel. The paper also reports a consistent improvement in video quality over a scheme that adapts to channel conditions by varying the data rate without accounting for the video frame packet type or buffer congestion.
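
    The idea of a zone- and content-dependent modulation policy can be sketched as a small decision table. Everything below is a hypothetical illustration: the zone names, the frame-type ranking and the modulation assignments are invented for this sketch, not the paper's actual policy tables; only the three Bluetooth EDR modulations themselves (GFSK, pi/4-DQPSK, 8DPSK) are real.

```python
# Hypothetical trizone policy: names and table entries are assumptions.
GUARD, NORMAL, DANGER = "guard", "normal", "danger"

def pick_modulation(zone, frame_type):
    # Bluetooth offers GFSK (1 Mb/s, most robust), pi/4-DQPSK (2 Mb/s)
    # and 8DPSK (3 Mb/s, least robust). Important frame types (I, P)
    # keep stronger protection; protection is relaxed as the buffer
    # zone approaches overflow, trading robustness for throughput.
    important = frame_type in ("I", "P")
    if zone == GUARD:  # buffer safe: protect everything
        return "GFSK"
    if zone == NORMAL:
        return "GFSK" if important else "pi/4-DQPSK"
    return "pi/4-DQPSK" if important else "8DPSK"  # danger: favour rate
```

    The point of the sketch is the shape of the policy: modulation robustness decreases monotonically both with decreasing frame importance and with increasing buffer congestion.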

  18. HELIAS module development for systems codes

    Energy Technology Data Exchange (ETDEWEB)

    Warmer, F., E-mail: Felix.Warmer@ipp.mpg.de; Beidler, C.D.; Dinklage, A.; Egorov, K.; Feng, Y.; Geiger, J.; Schauer, F.; Turkin, Y.; Wolf, R.; Xanthopoulos, P.

    2015-02-15

    In order to study and design next-step fusion devices such as DEMO, comprehensive systems codes are commonly employed. In this work HELIAS-specific models are proposed which are designed to be compatible with systems codes. The subsequently developed models include: a geometry model based on Fourier coefficients which can represent the complex 3-D plasma shape, a basic island divertor model which assumes diffusive cross-field transport and high radiation at the X-point, and a coil model which combines scaling aspects based on the HELIAS 5-B reactor design in combination with analytic inductance and field calculations. In addition, stellarator-specific plasma transport is discussed. A strategy is proposed which employs a predictive confinement time scaling derived from 1-D neoclassical and 3-D turbulence simulations. This paper reports on the progress of the development of the stellarator-specific models while an implementation and verification study within an existing systems code will be presented in a separate work. This approach is investigated to ultimately allow one to conduct stellarator system studies, develop design points of HELIAS burning plasma devices, and to facilitate a direct comparison between tokamak and stellarator DEMO and power plant designs.
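
    The Fourier-coefficient geometry model can be illustrated with a minimal sketch. The stellarator-symmetric, VMEC-like convention below (R as a cosine series, Z as a sine series in poloidal angle theta and toroidal angle phi) and all coefficient values are assumptions for illustration, not the HELIAS module's actual parameterization.

```python
import math

def flux_surface_point(RBC, ZBS, theta, phi, nfp=5):
    # Reconstruct one boundary point from Fourier coefficients keyed by
    # poloidal/toroidal mode numbers (m, n), assuming
    #   R = sum RBC[m, n] * cos(m*theta - n*nfp*phi)
    #   Z = sum ZBS[m, n] * sin(m*theta - n*nfp*phi)
    # with nfp the number of toroidal field periods.
    R = sum(c * math.cos(m * theta - n * nfp * phi)
            for (m, n), c in RBC.items())
    Z = sum(c * math.sin(m * theta - n * nfp * phi)
            for (m, n), c in ZBS.items())
    return R, Z
```

    With only the (0,0) and (1,0) modes this degenerates to a circular-cross-section torus; additional (m, n) terms reproduce the complex 3-D shaping the systems-code geometry model must represent.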

  19. Service Time Analysis for Secondary Packet Transmission with Adaptive Modulation

    KAUST Repository

    Wang, Wen-Jing; Usman, Muneer; Yang, Hong-Chuan; Alouini, Mohamed-Slim

    2017-01-01

    Cognitive radio communications can opportunistically access underutilized spectrum for emerging wireless applications. With an interweave cognitive implementation, the secondary user transmits only if the primary user does not occupy the channel, and waits for transmission otherwise. Therefore, secondary packet transmission involves both transmission time and waiting time. The resulting extended delivery time (EDT) is critical to the throughput analysis of the secondary system. In this paper, we study the EDT of secondary packet transmission with adaptive modulation under an interweave implementation to facilitate the delay and throughput analysis of such a cognitive radio system. In particular, we propose an analytical framework to derive the probability density functions of the EDT considering random-length transmission and waiting slots. We also present selected numerical results to illustrate the mathematical formulations and to verify our analytical approach.

  20. Service Time Analysis for Secondary Packet Transmission with Adaptive Modulation

    KAUST Repository

    Wang, Wen-Jing

    2017-05-12

    Cognitive radio communications can opportunistically access underutilized spectrum for emerging wireless applications. With an interweave cognitive implementation, the secondary user transmits only if the primary user does not occupy the channel, and waits for transmission otherwise. Therefore, secondary packet transmission involves both transmission time and waiting time. The resulting extended delivery time (EDT) is critical to the throughput analysis of the secondary system. In this paper, we study the EDT of secondary packet transmission with adaptive modulation under an interweave implementation to facilitate the delay and throughput analysis of such a cognitive radio system. In particular, we propose an analytical framework to derive the probability density functions of the EDT considering random-length transmission and waiting slots. We also present selected numerical results to illustrate the mathematical formulations and to verify our analytical approach.
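
    The interplay of waiting and transmission slots behind the EDT can be mimicked with a small Monte Carlo sketch. The i.i.d. Bernoulli busy/idle slot model and all parameter values are assumptions for illustration, not the channel model analysed in the paper.

```python
import random

def simulate_edt(pkt_slots, p_busy, rng):
    # One packet delivery: every slot counts towards the EDT, but only
    # idle slots (primary user absent) advance the secondary transmission.
    edt, remaining = 0, pkt_slots
    while remaining > 0:
        edt += 1
        if rng.random() >= p_busy:   # primary absent: secondary transmits
            remaining -= 1
    return edt

def edt_samples(n, pkt_slots, p_busy, seed=1):
    # Empirical EDT distribution; a histogram of these samples
    # approximates the probability density function of the EDT.
    rng = random.Random(seed)
    return [simulate_edt(pkt_slots, p_busy, rng) for _ in range(n)]
```

    Under this toy model the EDT is negative-binomially distributed with mean pkt_slots / (1 - p_busy), so e.g. five transmission slots on a half-busy channel cost ten slots on average.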

  1. Comparison of 'system thermal-hydraulics-3 dimensional reactor kinetics' coupled calculations using the MARS 1D and 3D modules and the MASTER code

    International Nuclear Information System (INIS)

    Jung, J. J.; Joo, H. K.; Lee, W. J.; Ji, S. K.; Jung, B. D.

    2002-01-01

    KAERI has developed the coupled 'system thermal-hydraulics - 3 dimensional reactor kinetics' code MARS/MASTER since 1998. However, there is a limitation in the existing MARS/MASTER code: to perform coupled calculations with MARS/MASTER, one has to use the hydrodynamic model and the heat structure model of the MARS '3D module'. In some transients, the reactor kinetics behavior is strongly multi-dimensional, but the core thermal-hydraulic behavior remains one-dimensional. For efficient analysis of such transients, we coupled the MARS 1D module with MASTER. The new feature has been assessed by simulations of the 'OECD NEA Main Steam Line Break (MSLB) benchmark exercise III'

  2. Research on pre-processing of QR Code

    Science.gov (United States)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR code encodes many kinds of information thanks to its advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size, highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR code, this paper investigates pre-processing methods for QR code (Quick Response Code) and presents algorithms and results of image pre-processing for QR code recognition. The conventional approach is improved by modifying Sauvola's adaptive thresholding method for text. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
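
    Sauvola's adaptive threshold, the starting point the paper modifies, is compact enough to sketch in pure Python. The formula T = m * (1 + k * (s / R - 1)) with local window mean m and standard deviation s is the standard method; the window size and parameter values below are conventional defaults, not the paper's tuned settings.

```python
def sauvola_binarize(img, w=3, k=0.2, R=128.0):
    # img: grayscale image as a list of rows of 0..255 values.
    # For each pixel, compute mean m and std s over a w x w window
    # (clipped at the borders) and threshold at T = m*(1 + k*(s/R - 1)).
    h, wd = len(img), len(img[0])
    half = w // 2
    out = [[0] * wd for _ in range(h)]
    for y in range(h):
        for x in range(wd):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - half), min(h, y + half + 1))
                    for xx in range(max(0, x - half), min(wd, x + half + 1))]
            m = sum(vals) / len(vals)
            s = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
            T = m * (1 + k * (s / R - 1))
            out[y][x] = 1 if img[y][x] > T else 0
    return out
```

    Because the threshold tracks the local mean, dark QR modules are separated from the background even when illumination varies across the image, which is what makes the method attractive for complex backgrounds.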

  3. Predictive coding of music--brain responses to rhythmic incongruity.

    Science.gov (United States)

    Vuust, Peter; Ostergaard, Leif; Pallesen, Karen Johanne; Bailey, Christopher; Roepstorff, Andreas

    2009-01-01

    During the last decades, models of music processing in the brain have mainly discussed the specificity of brain modules involved in processing different musical components. We argue that predictive coding offers an explanatory framework for functional integration in musical processing. Further, we provide empirical evidence for such a network in the analysis of event-related MEG-components to rhythmic incongruence in the context of strong metric anticipation. This is seen in a mismatch negativity (MMNm) and a subsequent P3am component, which have the properties of an error term and a subsequent evaluation in a predictive coding framework. There were both quantitative and qualitative differences in the evoked responses in expert jazz musicians compared with rhythmically unskilled non-musicians. We propose that these differences trace a functional adaptation and/or a genetic pre-disposition in experts which allows for a more precise rhythmic prediction.

  4. Adaptation of computer code ALMOD 3.4 for safety analyses of Westinghouse type NPPs and calculation of main feedwater loss

    International Nuclear Information System (INIS)

    Kordis, I.; Jerele, A.; Brajak, F.

    1986-01-01

    The paper presents the theoretical foundations of the ALMOD 3.4 code and the modifications made to adapt the model to a Westinghouse-type NPP. Test cases verifying the functioning of the added modules were run, and a loss-of-main-feedwater (FW) transient at nominal power was analysed. (author)

  5. On the implementation of new technology modules for fusion reactor systems codes

    International Nuclear Information System (INIS)

    Franza, F.; Boccaccini, L.V.; Fisher, U.; Gade, P.V.; Heller, R.

    2015-01-01

    Highlights: • At KIT new technology modules for systems codes are under development. • A new algorithm for the definition of the main reactor components is defined. • A new blanket model based on 1D neutronics analysis is described. • A new TF coil stress model based on 3D electromagnetic analysis is described. • The models were successfully benchmarked against more detailed models. - Abstract: In the frame of the pre-conceptual design of the next-generation fusion power plant (DEMO), systems codes have been used for nearly 20 years. In such computational tools the main reactor components (e.g. plasma, blanket, magnets, etc.) are integrated into a single computational algorithm and simulated by means of rather simplified mathematical models (e.g. steady-state and zero-dimensional models). The systems code tries to identify the main design parameters (e.g. major radius, net electrical power, toroidal field) such that the reactor's requirements and constraints are simultaneously satisfied. In fusion applications, requirements and constraints can be of either a physics or a technology kind. Concerning the latter category, at Karlsruhe Institute of Technology a new modelling activity has recently been launched, aiming to develop improved models for the main technology areas, such as neutronics, thermal-hydraulics, electromagnetics, structural mechanics, the fuel cycle and vacuum systems. These activities started with the development of: (1) a geometry model for the definition of poloidal profiles of the main reactor components, (2) a blanket model based on neutronics analyses, and (3) a toroidal field coil model based on electromagnetic analysis, focusing first on stress calculations. The objective of this paper is therefore to give a short outline of these models.

  6. Variable Rate, Adaptive Transform Tree Coding Of Images

    Science.gov (United States)

    Pearlman, William A.

    1988-10-01

    A tree code, asymptotically optimal for stationary Gaussian sources and squared error distortion [2], is used to encode transforms of image sub-blocks. The variance spectrum of each sub-block is estimated and specified uniquely by a set of one-dimensional auto-regressive parameters. The expected distortion is set to a constant for each block and the rate is allowed to vary to meet the given level of distortion. Since the spectrum and rate are different for every block, the code tree differs for every block. Coding simulations for target block distortion of 15 and average block rate of 0.99 bits per pel (bpp) show that very good results can be obtained at high search intensities at the expense of high computational complexity. The results at the higher search intensities outperform a parallel simulation with quantization replacing tree coding. Comparative coding simulations also show that the reproduced image with variable block rate and average rate of 0.99 bpp has 2.5 dB less distortion than a similarly reproduced image with a constant block rate equal to 1.0 bpp.
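
    The fixed-distortion, variable-rate idea follows the Gaussian rate-distortion relation: a spectral component with variance sigma_i^2 spends r_i = max(0, 0.5*log2(sigma_i^2 / D)) bits against a constant distortion target D, so the rate per block varies with its estimated spectrum. A minimal sketch of that allocation (an illustration of the principle, not the paper's tree-coding procedure):

```python
import math

def block_rates(variances, D):
    # Reverse water-filling for Gaussian components under squared error:
    # components with variance below the distortion target get zero rate.
    return [max(0.0, 0.5 * math.log2(v / D)) for v in variances]
```

    Blocks with a flat, low-variance spectrum thus receive almost no bits, while high-activity blocks absorb proportionally more, which is why the average rate (0.99 bpp in the experiments) outperforms a constant 1.0 bpp per-block budget.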

  7. Cooperative and Adaptive Network Coding for Gradient Based Routing in Wireless Sensor Networks with Multiple Sinks

    Directory of Open Access Journals (Sweden)

    M. E. Migabo

    2017-01-01

    Full Text Available Despite its low computational cost, the Gradient Based Routing (GBR) broadcast of interest messages in Wireless Sensor Networks (WSNs) causes significant packet duplication and unnecessary packet transmissions. This results in energy wastage, traffic load imbalance, high network traffic, and low throughput. Thanks to the emergence of fast and powerful processors, the development of efficient network coding strategies is expected to enable efficient packet aggregation and reduce packet retransmissions. For multiple-sink WSNs, the challenge consists of efficiently selecting a suitable network coding scheme. This article proposes a Cooperative and Adaptive Network Coding for GBR (CoAdNC-GBR) technique which considers the network density, dynamically defined by the average number of neighbouring nodes, to efficiently aggregate interest messages. The aggregation is performed by means of linear combinations with random coefficients from a finite Galois field of variable size GF(2^s) at each node, and the decoding is performed by means of Gaussian elimination. The obtained results reveal that, by exploiting the cooperation of the multiple sinks, CoAdNC-GBR not only improves the transmission reliability of links and lowers the number of transmissions and the propagation latency, but also enhances the energy efficiency of the network when compared to the GBR network coding (GBR-NC) techniques.
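
    The encode/decode pair at the heart of such a scheme can be sketched for the simplest field size, GF(2), where a random linear combination is just an XOR of the selected packets and decoding is Gaussian elimination on the coefficient vectors. This is the s = 1 special case of the variable-size GF(2^s) coding described above, with packets modelled as plain integers for brevity.

```python
import random

def encode(packets, rng):
    # One coded packet: a random GF(2) combination (XOR) of the originals,
    # sent together with its coefficient vector.
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[rng.randrange(len(packets))] = 1  # avoid the zero vector
    payload = 0
    for c, p in zip(coeffs, packets):
        if c:
            payload ^= p
    return coeffs, payload

def decode(coded, n):
    # Gauss-Jordan elimination over GF(2) on augmented rows [coeffs | payload].
    rows = [list(c) + [p] for c, p in coded]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None  # rank deficient: more coded packets needed
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][n] for i in range(n)]
```

    A sink recovers all n original packets once it has collected any n coded packets with linearly independent coefficient vectors, regardless of which neighbours produced them; that is what enables the multi-sink cooperation.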

  8. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    Science.gov (United States)

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate

  9. Development of a fuel depletion sensitivity calculation module for multi-cell problems in a deterministic reactor physics code system CBZ

    International Nuclear Information System (INIS)

    Chiba, Go; Kawamoto, Yosuke; Narabayashi, Tadashi

    2016-01-01

    Highlights: • A new functionality for fuel depletion sensitivity calculations is developed in the code system CBZ. • It is based on the generalized perturbation theory for fuel depletion problems. • The theory with a multi-layer depletion step division scheme is described. • Numerical techniques employed in the actual implementation are also provided. - Abstract: A new functionality for fuel depletion sensitivity calculations is developed as one module in the deterministic reactor physics code system CBZ. It is based on the generalized perturbation theory for fuel depletion problems. The theory for fuel depletion problems with a multi-layer depletion step division scheme is described in detail. Numerical techniques employed in the actual implementation are also provided. Verification calculations are carried out for a 3 × 3 multi-cell problem consisting of two different types of fuel pins. It is shown that the sensitivities of nuclide number densities after fuel depletion with respect to the nuclear data calculated by the new module agree well with reference sensitivities calculated by direct numerical differentiation. To demonstrate the usefulness of the new module, fuel depletion sensitivities in different multi-cell arrangements are compared and non-negligible differences are observed.
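
    The "direct numerical differentiation" used as the verification reference can be illustrated on the simplest possible depletion problem, a single nuclide with dN/dt = -lambda*N. This toy sketch (not the CBZ module, which handles coupled depletion chains via generalized perturbation theory) computes the relative sensitivity S = (lambda/N) dN/dlambda by central finite difference, for which the analytic answer is -lambda*t.

```python
import math

def number_density(n0, lam, t):
    # Single-nuclide depletion: dN/dt = -lam * N  =>  N(t) = N0 * exp(-lam*t)
    return n0 * math.exp(-lam * t)

def sensitivity_direct(n0, lam, t, rel_step=1e-6):
    # Relative sensitivity of the end-of-step number density to the
    # depletion constant, by central finite difference on lam.
    d = lam * rel_step
    dn = (number_density(n0, lam + d, t)
          - number_density(n0, lam - d, t)) / (2 * d)
    return lam * dn / number_density(n0, lam, t)
```

    The perturbation-theory module computes the same quantity with a single adjoint-type calculation per response, whereas the direct approach above needs one perturbed depletion run per nuclear-data parameter, which is why it serves only as a reference.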

  10. WHITE DWARF MERGERS ON ADAPTIVE MESHES. I. METHODOLOGY AND CODE VERIFICATION

    Energy Technology Data Exchange (ETDEWEB)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY, 11794-3800 (United States); Almgren, Ann S.; Zhang, Weiqun [Center for Computational Sciences and Engineering, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-03-10

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling the gravitational and rotational forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected across the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  11. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    Science.gov (United States)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-08-12

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold-adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations and permits secret sharing among an arbitrary, but no less than threshold-value, number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants, and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
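The classical building block named in the abstract, generating shares with Lagrange interpolation polynomials, is (t, n) threshold secret sharing in the style of Shamir. A minimal sketch over a prime field follows; the m-bonacci encoding, OAM pump, and Huffman-Fibonacci-tree coding of the actual scheme are omitted.

```python
# (t, n) threshold secret sharing via Lagrange interpolation: any t of the
# n shares reconstruct the secret, fewer reveal nothing. Illustrative only.
import random

PRIME = 2**61 - 1  # all arithmetic over a prime field

def make_shares(secret, t, n):
    """Evaluate a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 shares suffice
```

The three-argument `pow(den, -1, PRIME)` modular inverse requires Python 3.8 or later.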

  12. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    Science.gov (United States)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, incurring no loss of compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.
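The "correlation statistic" the decoder needs is commonly a Laplacian model of the WZ-SI residual, whose scale parameter has a simple maximum-likelihood estimate: the mean absolute residual. The sketch below is this naive offline pre-estimation baseline, not the paper's EP-based on-the-fly refinement, and the pixel values are made up.

```python
# Pre-estimation of the Laplacian correlation model between a Wyner-Ziv
# frame and its side information: the ML estimate of the scale b is the
# mean absolute residual. OTF schemes (like the paper's EP approach)
# would refine this belief iteratively during decoding instead.
def laplacian_scale(wz_pixels, si_pixels):
    """ML estimate of the Laplacian scale b from frame residuals."""
    residuals = [w - s for w, s in zip(wz_pixels, si_pixels)]
    return sum(abs(r) for r in residuals) / len(residuals)

wz = [120, 130, 128, 90, 200, 64]   # hypothetical WZ-frame pixels
si = [118, 133, 126, 95, 198, 60]   # hypothetical side-information pixels
b = laplacian_scale(wz, si)         # mean absolute residual
```

A smaller b means the side information is a better predictor, so fewer syndrome bits are needed.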

  13. Adaptive distributed video coding with correlation estimation using expectation propagation

    Science.gov (United States)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, incurring no loss of compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.

  14. Deep-space and near-Earth optical communications by coded orbital angular momentum (OAM) modulation.

    Science.gov (United States)

    Djordjevic, Ivan B

    2011-07-18

    In order to achieve the multi-gigabit transmission rates (projected for 2020) needed for interplanetary communications, pulse-position modulation (PPM), typically used in deep-space applications, requires a large number of time slots, which imposes stringent requirements on system design and implementation. As an alternative that satisfies the high-bandwidth demands of future interplanetary communications while keeping system cost and power consumption reasonably low, in this paper we describe the use of orbital angular momentum (OAM) as an additional degree of freedom. The OAM is associated with the azimuthal phase of the complex electric field. Because OAM eigenstates are orthogonal, they can be used as basis functions for N-dimensional signaling. OAM modulation and multiplexing can, therefore, be used, in combination with other degrees of freedom, to meet the high-bandwidth requirements of future deep-space and near-Earth optical communications. The main challenge for OAM deep-space communication is the link between a spacecraft probe and the Earth station, because in the presence of atmospheric turbulence the orthogonality between OAM states is no longer preserved. We show that, in combination with LDPC codes, OAM-based modulation schemes can operate even in the strong atmospheric turbulence regime. In addition, the spectral efficiency of the proposed scheme is N^2/log2(N) times better than that of PPM.
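The closing efficiency claim can be checked numerically under one plausible reading: N-slot PPM delivers log2(N) bits per N slot durations, while N-dimensional OAM signaling delivers one bit per orthogonal eigenstate per slot, i.e. N bits per slot, so the ratio is N^2/log2(N). The exact signaling assumptions behind the abstract's figure are not stated, so treat this as an interpretation.

```python
# Spectral-efficiency ratio of N-dimensional OAM signaling over N-slot PPM,
# assuming one bit per orthogonal OAM eigenstate per symbol slot.
import math

def gain(N):
    ppm_eff = math.log2(N) / N   # bits per slot for N-slot PPM
    oam_eff = N                  # bits per slot for N orthogonal OAM states
    return oam_eff / ppm_eff     # equals N**2 / log2(N)

assert math.isclose(gain(16), 16**2 / math.log2(16))  # 64x for N = 16
```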

  15. Pain Adaptability in Individuals With Chronic Musculoskeletal Pain Is Not Associated With Conditioned Pain Modulation.

    Science.gov (United States)

    Wan, Dawn Wong Lit; Arendt-Nielsen, Lars; Wang, Kelun; Xue, Charlie Changli; Wang, Yanyi; Zheng, Zhen

    2018-03-27

    Healthy humans can be divided into pain adaptive (PA) and pain nonadaptive (PNA) groups; PA showed a greater decrease in pain rating during a cold pressor test (CPT) than PNA. This study examined whether the dichotomy of pain adaptability exists in individuals with chronic musculoskeletal pain. CPTs at 2°C and 7°C were used to assess the status of pain adaptability in participants with either chronic nonspecific low back pain or knee osteoarthritis. The participants' potency of conditioned pain modulation (CPM) and local inhibition were measured. The strengths of pain adaptability at both CPTs were highly correlated. PA and PNA did not differ in their demographic characteristics, pain thresholds from thermal and pressure stimuli, or potency of local inhibition or CPM. PA reached their maximum pain faster than PNA (t41 = -2.76). The dichotomy of pain adaptability exists in musculoskeletal pain patients. Consistent with the healthy human study, the strength of pain adaptability and potency of CPM are not related. Pain adaptability could be another form of endogenous pain inhibition whose clinical implication is yet to be understood. The dichotomy of pain adaptability was identified in healthy humans. The current study confirms that this dichotomy also exists in individuals with chronic musculoskeletal pain, and could be reliably assessed with CPTs at 2°C and 7°C. Similar to the healthy human study, pain adaptability is not associated with CPM, and may reflect the temporal aspect of pain inhibition. Copyright © 2018 The American Pain Society. Published by Elsevier Inc. All rights reserved.

  16. Feedback of mechanical effectiveness induces adaptations in motor modules during cycling

    Science.gov (United States)

    De Marchis, Cristiano; Schmid, Maurizio; Bibbo, Daniele; Castronovo, Anna Margherita; D'Alessio, Tommaso; Conforto, Silvia

    2013-01-01

    Recent studies have reported evidence that the motor system may rely on a modular organization, even if this behavior has yet to be confirmed during motor adaptation. The aim of the present study is to investigate the modular motor control mechanisms underlying the execution of pedaling by untrained subjects in different biomechanical conditions. We use the muscle synergies framework to characterize the muscle coordination of 11 subjects pedaling under two different conditions. The first consists of a pedaling exercise with a strategy freely chosen by the subjects (Preferred Pedaling Technique, PPT), while the second constrains the gesture by means of a real-time visual feedback of mechanical effectiveness (Effective Pedaling Technique, EPT). Pedal forces, recorded using a pair of instrumented pedals, were used to calculate the Index of Effectiveness (IE). EMG signals were recorded from eight muscles of the dominant leg, and Non-negative Matrix Factorization (NMF) was applied for the extraction of muscle synergies. All the synergy vectors, extracted cycle by cycle for each subject, were pooled across subjects and conditions and underwent a 2-dimensional Sammon's non-linear mapping. Seven representative clusters were identified on the Sammon's projection, and the corresponding eight-dimensional synergy vectors were used to reconstruct the repertoire of muscle activation for all subjects and all pedaling conditions (VAF > 0.8 for each individual muscle pattern). Only 5 of the 7 identified modules were used by the subjects during the PPT pedaling condition, while 2 additional modules were found to be specific to the EPT pedaling condition. The temporal recruitment of three identified modules was highly correlated with IE. The structure of the identified modules was found to be similar to that extracted in other studies of human walking, partly confirming the existence of shared and task-specific muscle synergies, and providing further evidence on the modularity
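The synergy-extraction step lends itself to a compact sketch: NMF with the classic Lee-Seung multiplicative updates, followed by the variance-accounted-for check the abstract uses as an acceptance criterion (VAF > 0.8). The data here are a noiseless synthetic rank-2 mixture, not the study's EMG envelopes.

```python
# Factor a nonnegative "EMG envelope" matrix E (muscles x time) into
# synergy vectors W and activations H, then verify the reconstruction VAF.
import numpy as np

def nmf(E, k, iters=500, seed=0):
    """Lee-Seung multiplicative-update NMF: E ~= W @ H, all nonnegative."""
    rng = np.random.default_rng(seed)
    m, t = E.shape
    W = rng.random((m, k)) + 1e-9
    H = rng.random((k, t)) + 1e-9
    for _ in range(iters):
        H *= (W.T @ E) / (W.T @ W @ H + 1e-9)
        W *= (E @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

rng = np.random.default_rng(1)
true_W = rng.random((8, 2))       # 8 muscles, 2 underlying synergies
true_H = rng.random((2, 100))     # activations over 100 time samples
E = true_W @ true_H               # noiseless synthetic envelopes
W, H = nmf(E, k=2)
vaf = 1 - np.sum((E - W @ H) ** 2) / np.sum(E ** 2)
assert vaf > 0.8                  # the abstract's acceptance threshold
```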

  17. Performance Study of Monte Carlo Codes on Xeon Phi Coprocessors — Testing MCNP 6.1 and Profiling ARCHER Geometry Module on the FS7ONNi Problem

    Science.gov (United States)

    Liu, Tianyu; Wolfe, Noah; Lin, Hui; Zieb, Kris; Ji, Wei; Caracappa, Peter; Carothers, Christopher; Xu, X. George

    2017-09-01

    This paper contains two parts revolving around Monte Carlo transport simulation on Intel Many Integrated Core coprocessors (MIC, also known as Xeon Phi). (1) MCNP 6.1 was recompiled into multithreading (OpenMP) and multiprocessing (MPI) forms, respectively, without modification to the source code. The new codes were tested on a 60-core 5110P MIC. The test case was FS7ONNi, a radiation shielding problem used in MCNP's verification and validation suite. It was observed that both codes became slower on the MIC than on a 6-core X5650 CPU, by a factor of 4 for the MPI code and, abnormally, 20 for the OpenMP code, and both exhibited limited capability of strong scaling. (2) We have recently added a Constructive Solid Geometry (CSG) module to our ARCHER code to provide better support for geometry modelling in radiation shielding simulation. The functions of this module are frequently called in the particle random walk process. To identify the performance bottleneck we developed a CSG proxy application and profiled the code using the geometry data from FS7ONNi. The profiling data showed that the code was primarily memory-latency bound on the MIC. This study suggests that despite low initial porting effort, Monte Carlo codes do not naturally lend themselves to the MIC platform, just as they do not to GPUs, and that the memory latency problem needs to be addressed in order to achieve a decent performance gain.

  18. Functional Diets Modulate lncRNA-Coding RNAs and Gene Interactions in the Intestine of Rainbow Trout Oncorhynchus mykiss.

    Science.gov (United States)

    Núñez-Acuña, Gustavo; Détrée, Camille; Gallardo-Escárate, Cristian; Gonçalves, Ana Teresa

    2017-06-01

    The advent of functional genomics has sparked interest in inferring the function of non-coding regions from the transcriptome in non-model species. However, numerous biological processes remain understudied from this perspective, including intestinal immunity in farmed fish. The aim of this study was to infer long non-coding RNA (lncRNA) expression profiles in rainbow trout (Oncorhynchus mykiss) fed for 30 days with functional diets based on pre- and probiotics. For this, whole-transcriptome sequencing was conducted with Illumina technology, and lncRNAs were mined to evaluate transcriptional activity in conjunction with known protein sequences. To detect differentially expressed transcripts, 880 novel and 9067 previously described O. mykiss lncRNAs were used. Expression levels and genome co-localization correlations with coding genes were also analyzed. Significant differences in gene expression were primarily found with the probiotic diet, which had a twofold downregulation of lncRNAs compared to the other treatments. Notable differences by diet were also evidenced among the coding genes of distinct metabolic processes. In contrast, genome co-localization of lncRNAs with coding genes was similar for all diets. This study contributes novel knowledge regarding lncRNAs in fish, suggesting key roles in salmonids fed in-feed additives with the capacity to modulate intestinal homeostasis and host health.

  19. On the implementation of new technology modules for fusion reactor systems codes

    Energy Technology Data Exchange (ETDEWEB)

    Franza, F., E-mail: fabrizio.franza@kit.edu [Institute of Neutron Physics and Reactor Technology, Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen, 76344 (Germany); Boccaccini, L.V.; Fisher, U. [Institute of Neutron Physics and Reactor Technology, Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen, 76344 (Germany); Gade, P.V.; Heller, R. [Institute for Technical Physics, Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen, 76344 (Germany)

    2015-10-15

    Highlights: • At KIT, new technology modules for systems codes are under development. • A new algorithm for the definition of the main reactor components is defined. • A new blanket model based on 1D neutronics analysis is described. • A new TF coil stress model based on 3D electromagnetic analysis is described. • The models were successfully benchmarked against more detailed models. - Abstract: In the frame of the pre-conceptual design of the next-generation fusion power plant (DEMO), systems codes have been in use for nearly 20 years. In such computational tools the main reactor components (e.g. plasma, blanket, magnets, etc.) are integrated into a single computational algorithm and simulated by means of rather simplified mathematical models (e.g. steady-state and zero-dimensional models). The systems code tries to identify the main design parameters (e.g. major radius, net electrical power, toroidal field) such that the reactor's requirements and constraints are simultaneously satisfied. In fusion applications, requirements and constraints can be either of a physics or a technology kind. Concerning the latter category, a new modelling activity has recently been launched at Karlsruhe Institute of Technology, aiming to develop improved models for the main technology areas, such as neutronics, thermal-hydraulics, electromagnetics, structural mechanics, fuel cycle and vacuum systems. These activities started by developing: (1) a geometry model for the definition of poloidal profiles of the main reactor components, (2) a blanket model based on neutronics analyses and (3) a toroidal field coil model based on electromagnetic analysis, firstly focusing on stress calculations. The objective of this paper is therefore to give a short outline of these models.

  20. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  1. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes.

    Science.gov (United States)

    Chien, Tsair-Wei; Lin, Weir-Sen

    2016-03-02

    The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program based on the Rasch partial credit model to simulate 1000 patients' true scores following a standard normal distribution. The CAT was compared to two other scenarios, answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access.
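The item-reduction mechanism behind CAT can be sketched with a minimal Rasch (one-parameter logistic) loop: administer the item most informative at the current ability estimate, then update the estimate. The difficulties, the simulated response, and the crude update step below are illustrative, not the authors' Excel implementation (which used the partial credit model).

```python
# Minimal Rasch-based adaptive testing step: item information p*(1-p) is
# maximized where item difficulty matches the current ability estimate,
# so CAT asks far fewer items than fixed-form surveys.
import math

def p_correct(theta, b):
    """Rasch model: probability of endorsing an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    p = p_correct(theta, b)
    return p * (1 - p)

def next_item(theta, remaining):
    """Pick the most informative item at the current ability estimate."""
    return max(remaining, key=lambda b: item_information(theta, b))

difficulties = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]  # made-up item bank
theta = 0.0
chosen = next_item(theta, difficulties)    # most informative at theta = 0

# one adaptive step: administer, score, nudge the ability estimate
response = 1                               # simulated endorsement
theta += response - p_correct(theta, chosen)
```

A real CAT would iterate until the standard error of `theta` falls below a stopping threshold.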

  2. Coding conventions and principles for a National Land-Change Modeling Framework

    Science.gov (United States)

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  3. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

    Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high-speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising high-speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes plays an important role in exploiting the capabilities of a parallel processing system. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. The first category, such as codes used for harmonic analysis and mechanistic fuel performance codes, does not require parallelisation of individual modules. The second category, such as conventional FEM codes, requires parallelisation of individual modules. In this category, parallelisation of the equation solution module poses major difficulties. Different solution schemes, such as the domain decomposition method (DDM), a parallel active column solver and the substructuring method, are currently used on parallel processing systems. Two codes, FAIR and TABS, belonging to each of these categories have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab
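The domain decomposition method (DDM) mentioned for the equation-solution module can be illustrated on a toy problem: an alternating (Schwarz) iteration over two overlapping subdomains of a 1-D Poisson equation, where each subdomain solve is independent and could run on its own processor. This is a sketch of the general idea only, not the FAIR/TABS implementation.

```python
# Two-subdomain overlapping Schwarz iteration for u'' = -1, u(0)=u(1)=0,
# whose exact solution is u(x) = x(1-x)/2. Each subdomain is relaxed with
# Gauss-Seidel sweeps while its interface values are held fixed.

def gauss_seidel_sweep(u, f, h, lo, hi, iters=200):
    """Relax interior points of u on nodes (lo, hi), boundaries fixed."""
    for _ in range(iters):
        for i in range(lo + 1, hi):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f)

n = 40
h = 1.0 / n
u = [0.0] * (n + 1)
for _ in range(50):                        # outer Schwarz iterations
    gauss_seidel_sweep(u, 1.0, h, 0, 25)   # subdomain 1: nodes 0..25
    gauss_seidel_sweep(u, 1.0, h, 15, n)   # subdomain 2: nodes 15..40

assert abs(u[20] - 0.125) < 5e-3           # u(0.5) = 0.125 analytically
```

The overlap (nodes 15..25) is what lets information pass between subdomains; without it the iteration would not converge.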

  4. Non-binary coded modulation for FMF-based coherent optical transport networks

    Science.gov (United States)

    Lin, Changyu

    The Internet has fundamentally changed the way of modern communication. Current trends indicate that high-capacity demands are not going to saturate anytime soon. From Shannon's theory, we know that information capacity is a logarithmic function of the signal-to-noise ratio (SNR), but a linear function of the number of dimensions. Ideally, we could increase capacity by increasing the launch power; however, the nonlinear characteristics of silica optical fibers impose a constraint on the maximum achievable optical signal-to-noise ratio (OSNR). There thus exists a nonlinear capacity limit on the standard single-mode fiber (SSMF). In order to satisfy never-ending capacity demands, there are several attempts to employ additional degrees of freedom in the transmission system, such as few-mode fibers (FMFs), which can dramatically improve the spectral efficiency. On the other hand, for given physical links and network equipment, an effective solution to relax the OSNR requirement is based on forward error correction (FEC), in response to the demand for high-speed, reliable transmission. In this dissertation, we first discuss a model of the FMF with nonlinear effects considered. Secondly, we simulate an FMF-based OFDM system with various compensation and modulation schemes. Thirdly, we propose tandem-turbo-product non-binary byte-interleaved coded modulation (BICM) for next-generation high-speed optical transmission systems. Fourthly, we study the Q factor and mutual information as thresholds in the BICM scheme. Lastly, an experimental study of the limits of nonlinearity compensation with digital signal processing has been conducted.

  5. Input-dependent frequency modulation of cortical gamma oscillations shapes spatial synchronization and enables phase coding.

    Science.gov (United States)

    Lowet, Eric; Roberts, Mark; Hadjipapas, Avgis; Peter, Alina; van der Eerden, Jan; De Weerd, Peter

    2015-02-01

    Fine-scale temporal organization of cortical activity in the gamma range (∼25-80Hz) may play a significant role in information processing, for example by neural grouping ('binding') and phase coding. Recent experimental studies have shown that the precise frequency of gamma oscillations varies with input drive (e.g. visual contrast) and that it can differ among nearby cortical locations. This has challenged theories assuming widespread gamma synchronization at a fixed common frequency. In the present study, we investigated which principles govern gamma synchronization in the presence of input-dependent frequency modulations and whether they are detrimental for meaningful input-dependent gamma-mediated temporal organization. To this aim, we constructed a biophysically realistic excitatory-inhibitory network able to express different oscillation frequencies at nearby spatial locations. Similarly to cortical networks, the model was topographically organized with spatially local connectivity and spatially-varying input drive. We analyzed gamma synchronization with respect to phase-locking, phase-relations and frequency differences, and quantified the stimulus-related information represented by gamma phase and frequency. By stepwise simplification of our models, we found that the gamma-mediated temporal organization could be reduced to basic synchronization principles of weakly coupled oscillators, where input drive determines the intrinsic (natural) frequency of oscillators. The gamma phase-locking, the precise phase relation and the emergent (measurable) frequencies were determined by two principal factors: the detuning (intrinsic frequency difference, i.e. local input difference) and the coupling strength. In addition to frequency coding, gamma phase contained complementary stimulus information. Crucially, the phase code reflected input differences, but not the absolute input level. 
This property of relative input-to-phase conversion, contrasting with latency codes
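The reduction to "basic synchronization principles of weakly coupled oscillators" can be made concrete with two Kuramoto oscillators: the phase difference phi obeys dphi/dt = detuning - 2*K*sin(phi), so the pair phase-locks at phi = asin(detuning/(2K)) whenever |detuning| <= 2K. Both the locking condition and the locked phase relation are set by detuning and coupling strength, exactly the two factors the abstract identifies. This is a generic sketch, not the authors' biophysical network model.

```python
# Phase difference of two weakly coupled Kuramoto oscillators: input drive
# sets the intrinsic frequencies (hence the detuning), coupling strength K
# determines whether and at what offset the phases lock.
import math

def locked_phase(delta_omega, K, steps=20000, dt=0.001):
    """Euler-integrate dphi/dt = delta_omega - 2*K*sin(phi)."""
    phi = 0.0
    for _ in range(steps):
        phi += dt * (delta_omega - 2.0 * K * math.sin(phi))
    return phi

phi = locked_phase(delta_omega=2.0, K=2.0)      # inside the locking range
assert math.isclose(phi, math.asin(2.0 / (2 * 2.0)), rel_tol=1e-3)
```

Outside the locking range (|detuning| > 2K) the phase difference drifts instead of settling, which is why nearby cortical sites with large input differences can fail to synchronize.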

  6. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks.

    Science.gov (United States)

    Yu, Shidi; Liu, Xiao; Liu, Anfeng; Xiong, Naixue; Cai, Zhiping; Wang, Tian

    2018-05-10

    Due to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after their program codes are updated. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty-cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but radius enlarging is only conducted in areas with an energy surplus, and energy consumption in the hot-spots can instead be reduced, since some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn't affect the network lifetime, to nodes at different distances from the code source, then provides an algorithm to construct a broadcast backbone. 
In the end, a comprehensive performance analysis and simulation result shows that the proposed
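The radius-assignment idea can be sketched with a toy 1-D model: nodes outside the energy-constrained hot-spot near the sink get a broadcast radius that grows with their energy surplus, which shortens code dissemination to the network edge. The energy model, hot-spot boundary, and scaling below are invented for illustration; the paper's actual assignment algorithm differs.

```python
# Toy adaptive-broadcast-radius model: enlarging the radius only where
# surplus energy exists cuts the hop count of code dissemination versus
# a network-wide fixed radius (the MTB assumption).
def assign_radius(distance_to_sink, network_radius, base_radius=1.0):
    """Keep the base radius in the hot-spot, scale up to 2x at the edge."""
    hotspot = 0.3 * network_radius            # assumed hot-spot boundary
    if distance_to_sink <= hotspot:
        return base_radius                    # no surplus energy here
    surplus = (distance_to_sink - hotspot) / (network_radius - hotspot)
    return base_radius * (1.0 + surplus)

def hops_to_edge(network_radius, radius_fn):
    """Count broadcast hops from the sink-side source out to the edge."""
    pos, hops = 0.0, 0
    while pos < network_radius:
        pos += radius_fn(pos, network_radius)
        hops += 1
    return hops

fixed = hops_to_edge(10.0, lambda d, R: 1.0)  # fixed-radius baseline
adaptive = hops_to_edge(10.0, assign_radius)
assert adaptive < fixed                       # fewer hops, hence less delay
```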

  7. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shidi Yu

    2018-05-01

    Full Text Available Due to the Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after their program codes are updated. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty-cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more residual energy, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay; (2) since the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts; (3) the larger radius in the ABRCD scheme increases the energy consumption of some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can actually be reduced because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption is nearly balanced and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which does not affect the network lifetime, to nodes at different distances from the code source, and then provides an algorithm to construct a broadcast backbone. 
In the end, a comprehensive performance analysis and simulation result shows that
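    The energy-aware radius assignment idea above can be sketched in a few lines. This is a hypothetical illustration, not the ABRCD algorithm itself: the function name, the surplus-proportional scaling, and the cap are all invented here; the paper's scheme additionally guarantees the assignment does not shorten network lifetime.

    ```python
    # Hypothetical sketch: enlarge a node's broadcast radius only where it has
    # energy left over relative to the network's bottleneck node, capped so the
    # enlargement stays bounded.  Names and scaling rule are illustrative.
    def assign_radii(base_radius, energies, bottleneck_energy, max_scale=2.0):
        """Scale each node's radius by its energy surplus over the bottleneck."""
        radii = []
        for e in energies:
            surplus = max(e - bottleneck_energy, 0.0) / bottleneck_energy
            radii.append(base_radius * min(1.0 + surplus, max_scale))
        return radii

    # Nodes at the bottleneck keep the base radius; energy-rich nodes broadcast farther.
    radii = assign_radii(10.0, [5.0, 10.0, 20.0], bottleneck_energy=5.0)
    ```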

  8. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can serve as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
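    To make the notion of a "local code" concrete, here is a minimal Hamming(7,4) encoder and single-error syndrome decoder, the classic textbook construction of one of the component codes the abstract names. It illustrates only the local code itself, not the paper's GLDPC structure or the MAP/Ashikhmin-Lytsin decoding.

    ```python
    # Hamming(7,4): codeword positions 1..7 hold p1 p2 d0 p3 d1 d2 d3,
    # where parity bit p_k covers the positions whose k-th binary digit is 1.
    def encode(d):  # d: 4 data bits
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]

    def decode(c):  # corrects one flipped bit, returns the 4 data bits
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3  # 1-based error position, 0 if clean
        if syndrome:
            c = c[:]
            c[syndrome - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]

    c = encode([1, 0, 1, 1])
    c[4] ^= 1            # inject a single bit error
    data = decode(c)     # the error is located and corrected
    ```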

  9. Content Adaptive Lagrange Multiplier Selection for Rate-Distortion Optimization in 3-D Wavelet-Based Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2018-03-01

    Full Text Available Rate-distortion optimization (RDO) plays an essential role in substantially enhancing coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all possible coding modes, it aims to select the one with the best trade-off between bitrate and compression distortion. Specifically, this trade-off is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, the underlying formulation is not applicable to 3-D wavelet-based SVC, where explicit values of the quantization step are not available, and it takes no account of the content features of the input signal. In this paper, an efficient content adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method, which takes into account the mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm enables more satisfactory video quality with negligible additional computational complexity.
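    The mode-decision loop the abstract builds on is standard: each candidate mode is scored by the Lagrangian cost J = D + λR and the cheapest wins. The sketch below shows that generic selection step only; the paper's contribution is how λ itself is derived for 3-D wavelet SVC, which is not reproduced here, and the mode names and numbers are illustrative.

    ```python
    # Generic rate-distortion optimized mode decision: minimize J = D + lambda * R.
    def rdo_mode_decision(candidates, lam):
        """candidates: list of (mode, distortion, rate); returns the best mode."""
        return min(candidates, key=lambda m: m[1] + lam * m[2])[0]

    # Illustrative candidates: (mode name, distortion, rate in bits).
    modes = [("intra", 100.0, 2.0), ("inter", 40.0, 10.0), ("skip", 250.0, 0.1)]
    best = rdo_mode_decision(modes, lam=5.0)
    ```

    A small λ favors low distortion (here "inter"); a large λ penalizes rate and pushes the decision toward cheap modes such as "skip".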

  10. MIRANDA - a module based on multiregion resonance theory for generating cross sections within the AUS neutronics code system

    International Nuclear Information System (INIS)

    Robinson, G.S.

    1985-12-01

    MIRANDA is the cross-section generation module of the AUS neutronics code system, used to prepare multigroup cross-section data pertinent to a particular study from a general purpose multigroup library of cross sections. Libraries have been prepared from ENDF/B which are suitable for thermal and fast fission reactors and for fusion blanket studies. The libraries include temperature dependent data and resonance cross sections represented by subgroup parameters, and may contain photon as well as neutron data. The MIRANDA module includes a multiregion resonance calculation in slab, cylinder or cluster geometry, a homogeneous B_L flux solution, and a group condensation facility. This report documents the modifications to an earlier version of MIRANDA and provides a complete user's manual.
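    The group condensation facility mentioned above performs the standard flux-weighted collapse: a coarse-group cross section is the flux-weighted average of the fine-group cross sections it spans, σ_G = Σ_g φ_g σ_g / Σ_g φ_g. A minimal sketch (the data and group structure below are made up, not AUS library values):

    ```python
    # Flux-weighted multigroup condensation: collapse fine groups into coarse
    # groups, preserving reaction rates for the given weighting flux.
    def condense(sigma, flux, groups):
        """groups: list of lists of fine-group indices per coarse group."""
        out = []
        for G in groups:
            phi = sum(flux[g] for g in G)                    # total flux in G
            out.append(sum(flux[g] * sigma[g] for g in G) / phi)
        return out

    # Three fine groups collapsed to two coarse groups (illustrative numbers).
    coarse = condense(sigma=[2.0, 4.0, 10.0], flux=[3.0, 1.0, 1.0],
                      groups=[[0, 1], [2]])
    ```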

  11. Adapting Canada's northern infrastructure to climate change: the role of codes and standards

    International Nuclear Information System (INIS)

    Steenhof, P.

    2009-01-01

    This report provides the results of a research project that investigated the use of codes and standards in terms of their potential for fostering adaptation to the future impacts of climate change on built infrastructure in Canada's north. This involved a literature review, undertaking key informant interviews, and a workshop where key stakeholders came together to dialogue on the challenges facing built infrastructure in the north as a result of climate change and the role of codes and standards to help mitigate climate change risk. In this article, attention is given to the topic area of climate data and information requirements related to climate and climate change. This was an important focal area that was identified through this broader research effort since adequate data is essential in allowing codes and standards to meet their ultimate policy objective. A number of priorities have been identified specific to data and information needs in the context of the research topic investigated: There is a need to include northerners in developing the climate and permafrost data required for codes and standards so that these reflect the unique geographical, economic, and cultural realities and variability of the north; Efforts should be undertaken to realign climate design values so that they reflect both present and future risks; There is a need for better information on the rate and extent of permafrost degradation in the north; and, There is a need to improve monitoring of the rate of climate change in the Arctic. (author)

  12. Modulation transfer function estimation of optical lens system by adaptive neuro-fuzzy methodology

    Science.gov (United States)

    Petković, Dalibor; Shamshirband, Shahaboddin; Pavlović, Nenad T.; Anuar, Nor Badrul; Kiah, Miss Laiha Mat

    2014-07-01

    The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components; it is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, an adaptive neuro-fuzzy inference system (ANFIS) estimator is designed and adapted to estimate the MTF value of the actual optical system. The neural network in ANFIS adjusts the parameters of the membership functions in the fuzzy logic of the fuzzy inference system. The backpropagation learning algorithm is used for training this network. This intelligent estimator is implemented using Matlab/Simulink and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method.
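    The inference machinery ANFIS tunes can be sketched in its simplest (zero-order Sugeno) form: Gaussian membership functions give each rule a firing strength, and the output is the normalized weighted average of the rule consequents. The parameters below are invented for illustration; in ANFIS they would be the quantities adjusted by backpropagation, not hand-picked.

    ```python
    import math

    # Minimal zero-order Sugeno fuzzy inference of the kind ANFIS trains:
    # rule weight = Gaussian membership of the input, output = weighted average.
    def gauss(x, center, sigma):
        return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

    def fuzzy_infer(x, rules):
        """rules: list of (center, sigma, consequent); returns crisp output."""
        w = [gauss(x, c, s) for c, s, _ in rules]
        return sum(wi * r[2] for wi, r in zip(w, rules)) / sum(w)

    # Two illustrative rules, e.g. "MTF is high at low spatial frequency".
    rules = [(0.0, 1.0, 0.9), (1.0, 1.0, 0.1)]
    ```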

  13. Fuel performance analysis code 'FAIR'

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1994-01-01

    For modelling the fuel rod behaviour of water cooled reactors under severe power maneuvering and high burnups, a mechanistic fuel performance analysis code FAIR has been developed. The code incorporates a finite element based thermomechanical module, a physically based fission gas release module and relevant models for fuel related phenomena, such as pellet cracking, densification and swelling, radial flux redistribution across the pellet due to the build up of plutonium near the pellet surface, and pellet clad mechanical interaction/stress corrosion cracking (PCMI/SCC) failure of the sheath. The code follows the established principles of fuel rod analysis programmes, such as coupling of thermal and mechanical solutions along with the fission gas release calculations, analysing different axial segments of the fuel rod simultaneously, and providing means for local analyses such as clad ridging analysis. The modular nature of the code offers flexibility in making modifications to the code for modelling MOX fuels and thorium based fuels. For performing analysis of fuel rods subjected to very long power histories within a reasonable amount of time, the code has been parallelised and commissioned on the ANUPAM parallel processing system developed at Bhabha Atomic Research Centre (BARC). (author). 37 refs

  14. Adaptive Mesh Refinement in CTH

    International Nuclear Information System (INIS)

    Crawford, David

    1999-01-01

    This paper reports progress on implementing a new capability of adaptive mesh refinement in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three improvement in memory and performance over comparable resolution non-adaptive calculations has been demonstrated for a number of problems.
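    The "2:1 manner" above refers to the standard balance constraint in block-based AMR: adjacent blocks may differ by at most one refinement level. A minimal one-dimensional sketch of enforcing that constraint after blocks have been flagged for refinement (the data and the propagation loop are illustrative, not CTH's implementation):

    ```python
    # 2:1 balance for block-based AMR (1-D toy): if a neighbor is more than one
    # level finer, raise the coarse block's level; repeat until stable.
    def enforce_2to1(levels):
        """levels: refinement level per block; returns a balanced copy."""
        levels = levels[:]
        changed = True
        while changed:
            changed = False
            for i in range(len(levels) - 1):
                if levels[i + 1] - levels[i] > 1:
                    levels[i] = levels[i + 1] - 1
                    changed = True
                elif levels[i] - levels[i + 1] > 1:
                    levels[i + 1] = levels[i] - 1
                    changed = True
        return levels

    # A single deeply refined block forces its neighbors up one level each.
    balanced = enforce_2to1([0, 0, 3, 0])
    ```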

  15. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    Science.gov (United States)

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but it has not been clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal that code is. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum to be easily determined, even in the high dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal fitness landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not a localized deep minimum of the huge fitness landscape.
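    Fitness sharing, the technique the paper adds to its evolutionary algorithm, is the classic Goldberg-Richardson scheme: an individual's raw fitness is divided by its niche count, so crowded regions of the landscape are penalized and the population spreads across niches. A minimal sketch (the distance matrix and parameters below are illustrative):

    ```python
    # Goldberg-Richardson fitness sharing: f'_i = f_i / sum_j sh(d_ij), where
    # sh(d) = 1 - (d / sigma_share)^alpha for d < sigma_share, else 0.
    def shared_fitness(fitness, dist, sigma_share, alpha=1.0):
        """fitness: raw fitnesses; dist[i][j]: distance between individuals."""
        n = len(fitness)
        out = []
        for i in range(n):
            niche = 0.0
            for j in range(n):
                d = dist[i][j]
                if d < sigma_share:
                    niche += 1.0 - (d / sigma_share) ** alpha
            out.append(fitness[i] / niche)
        return out

    # Two identical individuals share a niche; the isolated one keeps full fitness.
    dist = [[0, 0, 5], [0, 0, 5], [5, 5, 0]]
    fs = shared_fitness([10.0, 10.0, 10.0], dist, sigma_share=1.0)
    ```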

  16. Development of a Burnup Module DECBURN Based on the Krylov Subspace Method

    Energy Technology Data Exchange (ETDEWEB)

    Cho, J. Y.; Kim, K. S.; Shim, H. J.; Song, J. S

    2008-05-15

    This report describes the development of a burnup module, DECBURN, that is essential for reactor analysis and assembly homogenization codes to trace the fuel composition change during core burnup. The developed burnup module solves the burnup equation by the matrix exponential method based on the Krylov subspace method. The final solution of the matrix exponential is obtained by the matrix scaling and squaring method. The development of the DECBURN module covered the following: (1) the Krylov subspace method for the burnup equation, (2) manufacturing of the DECBURN module, (3) library structure setup and library manufacturing, (4) examination of the DECBURN module, and (5) implementation in the DeCART code and verification. The DECBURN library includes the decay constants, one-group cross sections and the fission yields. Examination of the DECBURN module is performed with a driver program, and the results of the DECBURN module are compared with those of the ORIGEN program. Also, the DECBURN module implemented in the DeCART code is applied to the LWR depletion benchmark and an OPR-1000 pin cell problem, and the solutions are compared with the HELIOS code to verify the computational soundness and accuracy. In this process, the criticality calculation method and the predictor-corrector scheme are introduced into the DeCART code as functions of a homogenization code. The examination by the driver program shows that the DECBURN module produces exactly the same solution as the ORIGEN program. The DeCART code equipped with the DECBURN module produces solutions compatible with the other codes for the LWR depletion benchmark. Also, the multiplication factors of the DeCART code for the OPR-1000 pin cell problem agree with the HELIOS code within 100 pcm over the whole burnup range. The multiplication factors with the criticality calculation are also compatible with the HELIOS code. These results mean that the developed DECBURN module works soundly and produces an accurate solution
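    The scaling-and-squaring step named above can be sketched directly: scale the burnup matrix by 2^k, take a truncated Taylor series of the small exponential, then square k times. The toy below applies it to a two-nuclide decay chain dN/dt = AN and can be checked against the analytic Bateman solution; the real module works on the full burnup matrix with a Krylov projection, which is not shown.

    ```python
    # exp(A) by scaling and squaring with a truncated Taylor series (2x2 toy).
    def mat_mul(A, B):
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def expm(A, squarings=20, terms=12):
        n = len(A)
        s = 2.0 ** squarings
        B = [[a / s for a in row] for row in A]           # scale: A / 2^k
        E = [[float(i == j) for j in range(n)] for i in range(n)]
        T = [row[:] for row in E]
        for k in range(1, terms):                          # Taylor: sum B^k / k!
            T = [[t / k for t in row] for row in mat_mul(T, B)]
            E = [[e + t for e, t in zip(er, tr)] for er, tr in zip(E, T)]
        for _ in range(squarings):                         # square back up
            E = mat_mul(E, E)
        return E

    # Decay chain N1 -> N2 with constants l1, l2; N(t) = exp(A t) N(0), t = 1.
    l1, l2 = 1.0, 0.5
    N = expm([[-l1, 0.0], [l1, -l2]])
    ```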

  17. PASC-1, Petten AMPX-II/SCALE-3 Code System for Reactor Neutronics Calculation

    International Nuclear Information System (INIS)

    Yaoqing, W.; Oppe, J.; Haas, J.B.M. de; Gruppelaar, H.; Slobben, J.

    1995-01-01

    1 - Description of program or function: The Petten AMPX-II/SCALE-3 Code System PASC-1 is a reactor neutronics calculation programme system consisting of well known IBM-oriented codes, that have been translated into FORTRAN-77, for calculations on a CDC-CYBER computer. Thus, the portability of these codes has been increased. In this system, some AMPX-II and SCALE-3 modules, the one-dimensional transport code ANISN and the 1 to 3-dimensional diffusion code CITATION are linked together on the CDC-CYBER/855 computer. The new cell code XSDRNPM-S and the old XSDRN code are included in the system. Starting from an AMPX fine group library up to CITATION, calculations can be performed for each individual module. Existing AMPX master interface format libraries, such as CSRL-IV, JEF-1, IRI and SCALE-45, and the old XSDRN-formatted libraries such as the COBB library can be used for the calculations. The code system contains the following modules and codes at present: AIM, AJAX, MALOCS, NITAWL-S, REVERT-I, ICE-2, CONVERT, JUAN, OCTAGN, XSDRNPM-S, XSDRN, ANISN and CITATION. The system will be extended with other SCALE modules and transport codes. 2 - Method of solution: The PASC-1 system is based on AMPX-II/SCALE-3 modules. Except for some SCALE-3 modules taken from the SCALIAS package, the original AMPX-II modules were IBM versions written in FORTRAN IV. These modules have been translated into CDC FORTRAN V. In order to test these modules and link them with some codes, some of the sample problem calculations have been performed for the whole PASC-1 system. During these calculations, some FORTRAN-77 errors were found in MALOCS, REVERT, CONVERT and some subroutines of SUBLIB (FORTRAN-77 subroutine library). These errors have been corrected. Because many corrections were made for the REVERT module, it is renamed as REVERT-I (improved version of REVERT). After these corrections, the whole system is running on a CDC-CYBER Computer (NOS-BE operating system). 
3 - Restrictions on the

  18. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  19. Fuel rod modelling during transients: The TOUTATIS code

    International Nuclear Information System (INIS)

    Bentejac, F.; Bourreau, S.; Brochard, J.; Hourdequin, N.; Lansiart, S.

    2001-01-01

    The TOUTATIS code is devoted to the simulation of local PCI phenomena, in conjunction with the METEOR code for the global behaviour of the fuel rod. More specifically, the TOUTATIS objective is to evaluate the mechanical constraints on the cladding during a power transient and thus predict its behaviour in terms of stress corrosion cracking. Based upon the finite element computation code CASTEM 2000, TOUTATIS is a set of modules written in a macro language. The aim of this paper is to present both code modules: the two-dimensional axisymmetric module, which models a single one-block pellet, and the three-dimensional module, which models a radially fragmented pellet. Having shown the boundary conditions and the algorithms used, the application will be illustrated by: a short presentation of the performance of the two-dimensional axisymmetric modelling as well as its limits; and the enhancement due to three-dimensional modelling, displayed through sensitivity studies on the geometry, in this case the pellet height/diameter ratio. Finally, we will show the ease of development inherent to the CASTEM 2000 system by describing the process of enhancing the model with the possibility of axial (horizontal) cracking of the pellet. In conclusion, the future improvements planned for the code are described. (author)

  20. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The rapid computation of supercomputers is based on vector processing. The authors investigated the adaptability to vector computation of about 40 typical atomic energy codes over the past six years. Based on the results of this investigation, the adaptability of atomic energy codes to the vector computation capability of supercomputers, problems regarding their utilization, and future prospects are explained. The adaptability of individual calculation codes to vector computation depends largely on the algorithms and program structures used in the codes. The speedup obtained with pipeline vector systems, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speedup factors for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)
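    Why the observed speedups spread from 1.5 to 9.0 is captured by an Amdahl-type bound: if only a fraction f of a code's work vectorizes, the scalar remainder limits the overall gain no matter how fast the vector pipeline is. The numbers below are hypothetical, not the report's measurements.

    ```python
    # Amdahl-type estimate: fraction f of the work runs v times faster on the
    # vector pipeline; the rest stays scalar.
    def vector_speedup(f, v):
        return 1.0 / ((1.0 - f) + f / v)

    mostly_scalar = vector_speedup(0.4, 10.0)   # limited by the scalar part
    mostly_vector = vector_speedup(0.95, 10.0)  # approaches the pipeline gain
    ```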

  1. A Novel Adaptive Modulation Based on Nondata-Aided Error Vector Magnitude in Non-Line-Of-Sight Condition of Wireless Sensor Network.

    Science.gov (United States)

    Yang, Fan; Zeng, Xiaoping; Mao, Haiwei; Jian, Xin; Tan, Xiaoheng; Du, Derong

    2018-01-15

    The high demand for multimedia applications in environmental monitoring, invasion detection, and disaster aid has led to the rise of wireless sensor networks (WSNs). With the increase of the reliability and diversity of information streams, higher requirements on throughput and quality of service (QoS) have been put forward for data transmission between two sensor nodes. However, low spectral efficiency becomes a bottleneck in non-line-of-sight (NLOS) transmission in WSNs. This paper proposes a novel nondata-aided error vector magnitude based adaptive modulation (NDA-EVM-AM) scheme to solve the problem. NDA-EVM is considered as a new metric to evaluate the quality of the NLOS link for adaptive modulation in WSNs. By modeling the NLOS scenario as an η-μ fading channel, a closed-form expression for the NDA-EVM of multilevel quadrature amplitude modulation (MQAM) signals over the η-μ fading channel is derived, and the relationship between SER and NDA-EVM is also formulated. Based on these results, an NDA-EVM state machine is designed for the adaptation strategy. The algorithmic complexity of NDA-EVM-AM is analyzed, and the outage capacity of NDA-EVM-AM in an NLOS scenario is also given. The performance of NDA-EVM-AM is evaluated by simulation, and the results show that NDA-EVM-AM is an effective technique for the NLOS scenarios of WSNs. This technique can accurately reflect channel variations and efficiently adjust the modulation order to better match the channel conditions, hence obtaining better performance in average spectral efficiency.
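    "Nondata-aided" means the receiver does not know which symbols were sent, so each received sample is measured against the nearest constellation point. A minimal sketch of that metric for QPSK follows; the paper's η-μ channel model, MQAM expressions, and state machine are not reproduced, and the sample values are illustrative.

    ```python
    # Nondata-aided EVM: error magnitude relative to the nearest constellation
    # point, normalized by the reference constellation power.
    def nda_evm(received, constellation):
        err = 0.0
        ref = 0.0
        for r in received:
            s = min(constellation, key=lambda c: abs(r - c))  # nearest point
            err += abs(r - s) ** 2
            ref += abs(s) ** 2
        return (err / ref) ** 0.5

    qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
    evm = nda_evm([1.1 + 0.9j, -0.8 - 1.2j], qpsk)  # two noisy samples
    ```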

  2. Performance enhanced DDO-OFDM system with adaptively partitioned precoding and single sideband modulation.

    Science.gov (United States)

    Chen, Xi; Feng, Zhenhua; Tang, Ming; Fu, Songnian; Liu, Deming

    2017-09-18

    As a promising solution for short-to-medium reach transmission systems, direct detection optical orthogonal frequency division multiplexing (DDO-OFDM), or discrete multi-tone (DMT), has been intensively investigated in the last decade. Benefitting from the advantages of peak-to-average power ratio (PAPR) reduction and signal-to-noise ratio (SNR) equalization, precoding techniques are widely applied to enhance the performance of DDO-OFDM systems. However, the conventional method of partitioning precoding sets limits the ability of precoding schemes to optimize the SNR variation and the allocation of modulation formats. Thus, precoded transmission systems can hardly reach the capacity that traditional bit-power loading (BPL) techniques, like the Levin-Campello (LC) algorithm, can achieve. In this paper, we investigate the principle of SNR variation for precoded DDO-OFDM systems and theoretically demonstrate that the SNR equalization effect of precoding techniques is actually determined by the noise equalization process. Based on this fact, we propose an adaptively partitioned precoding (APP) algorithm to unlock the ability to control the SNR of each subcarrier. As demonstrated by the simulation and experimental results, the proposed APP algorithm achieves a transmission capacity as high as the LC algorithm and has nearly 1 dB PAPR reduction. Besides, a look-up table (LUT) operation ensures low complexity of the proposed APP algorithm compared with the LC algorithm. To avoid severe chromatic dispersion (CD) induced spectral fading, single sideband (SSB) modulation is also implemented. We find that SSB modulation can reach the capacity of double sideband (DSB) modulation in the optical back-to-back (OB2B) configuration by optimizing the modulation index. Therefore, the APP based SSB-DDO-OFDM scheme can sufficiently enhance the performance of cost-sensitive short-to-medium reach optical fiber communication systems.
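    The Levin-Campello baseline the abstract compares against is a greedy bit-loading idea: repeatedly grant one more bit to the subcarrier where it costs the least extra power, since carrying b bits requires power proportional to (2^b - 1)/gain. The sketch below shows that greedy allocation only, with made-up gains; it is not the APP algorithm or the exact LC implementation.

    ```python
    # Greedy bit loading: at each step, add one bit where the incremental power
    # (2^(b+1) - 2^b) / gain is smallest.
    def bit_loading(gains, total_bits):
        bits = [0] * len(gains)
        for _ in range(total_bits):
            cost = [(2 ** (bits[i] + 1) - 2 ** bits[i]) / gains[i]
                    for i in range(len(gains))]
            i = cost.index(min(cost))
            bits[i] += 1
        return bits

    # Stronger subcarriers (higher gain) end up carrying more bits.
    alloc = bit_loading([4.0, 2.0, 1.0], total_bits=6)
    ```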

  3. The APOLLO assembly spectrum code

    International Nuclear Information System (INIS)

    Kavenoky, A.; Sanchez, R.

    1987-04-01

    The APOLLO code was originally developed as a design tool for HTRs; later it was aimed at the calculation of PWR lattices. APOLLO is a general purpose assembly spectrum code based on the multigroup integral transport equation; refined collision probability modules allow the computation of 1D geometries with linearly anisotropic scattering and two-term flux expansion. In 2D geometries, modules based on the substructure method provide fast and accurate design calculations, and a module based on a direct discretization is devoted to reference calculations. The SPH homogenization technique provides corrected cross sections, performing an equivalence between coarse and refined calculations. The post-processing module of APOLLO generates either an APOLLIB, to be used by APOLLO, or a NEPLIB, for reactor diffusion calculations. The cross-section library of APOLLO contains data and self-shielding data for more than 400 isotopes. APOLLO is able to compute the depletion of any medium, accounting for any heavy isotope or fission product chain. 21 refs

  4. Transmission imaging with a coded source

    International Nuclear Information System (INIS)

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

    The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure into the transmission images cast by large area sources, and this permits the recovery of high resolution images in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a ''code.'' A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image may be displayed with the aid of coherent illumination
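    The "sharply peaked autocorrelation" requirement can be seen in a one-dimensional toy: a length-7 maximal-length binary sequence has a circular autocorrelation that peaks at zero lag and is flat elsewhere, which is exactly what lets correlation decoding concentrate the source back into a point. This stands in for the Fresnel zone pattern discussed in the record, which has the same peaked-autocorrelation property in two dimensions.

    ```python
    # Circular autocorrelation of a binary code; a good coded-aperture pattern
    # gives a single sharp peak at zero lag and low, flat sidelobes.
    def circ_corr(a, b):
        n = len(a)
        return [sum(a[(i + k) % n] * b[i] for i in range(n)) for k in range(n)]

    # Length-7 m-sequence: its ones form a perfect difference set mod 7, so
    # every nonzero lag overlaps in exactly 2 positions.
    code = [1, 1, 1, 0, 1, 0, 0]
    auto = circ_corr(code, code)
    ```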

  5. Adaptive Value Normalization in the Prefrontal Cortex Is Reduced by Memory Load

    Science.gov (United States)

    Burke, C. J.; Seifritz, E.; Tobler, P. N.

    2017-01-01

    Adaptation facilitates neural representation of a wide range of diverse inputs, including reward values. Adaptive value coding typically relies on contextual information either obtained from the environment or retrieved from and maintained in memory. However, it is unknown whether having to retrieve and maintain context information modulates the brain’s capacity for value adaptation. To address this issue, we measured hemodynamic responses of the prefrontal cortex (PFC) in two studies on risky decision-making. In each trial, healthy human subjects chose between a risky and a safe alternative; half of the participants had to remember the risky alternatives, whereas for the other half they were presented visually. The value of safe alternatives varied across trials. PFC responses adapted to contextual risk information, with steeper coding of safe alternative value in lower-risk contexts. Importantly, this adaptation depended on working memory load, such that response functions relating PFC activity to safe values were steeper with presented versus remembered risk. An independent second study replicated the findings of the first study and showed that similar slope reductions also arose when memory maintenance demands were increased with a secondary working memory task. Formal model comparison showed that a divisive normalization model fitted effects of both risk context and working memory demands on PFC activity better than alternative models of value adaptation, and revealed that reduced suppression of background activity was the critical parameter impairing normalization with increased memory maintenance demand. Our findings suggest that mnemonic processes can constrain normalization of neural value representations. PMID:28462394
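    The divisive normalization model referred to here can be written, in its minimal textbook form, as a response divided by a background term plus the contextual values. A weaker suppression of background activity (a larger background term, as the authors report under memory load) flattens the value response. The function and parameters below are a generic illustration, not the study's fitted model.

    ```python
    # Minimal divisive normalization: response to value v is scaled by a
    # background constant sigma plus the (weighted) mean contextual value.
    def normalized_response(v, context, sigma, w=1.0):
        return v / (sigma + w * sum(context) / len(context))

    # Same value and context; larger background term -> flatter (reduced) coding.
    strong_norm = normalized_response(10.0, [5.0], sigma=1.0)
    weak_norm = normalized_response(10.0, [5.0], sigma=5.0)
    ```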

  6. Capacity-Approaching Superposition Coding for Optical Fiber Links

    DEFF Research Database (Denmark)

    Estaran Tolosa, Jose Manuel; Zibar, Darko; Tafur Monroy, Idelfonso

    2014-01-01

    We report on the first experimental demonstration of superposition coded modulation (SCM) for polarization-multiplexed coherent-detection optical fiber links. The proposed coded modulation scheme is combined with phase-shifted bit-to-symbol mapping (PSM) in order to achieve geometric and passive......-SCM) is employed in the framework of bit-interleaved coded modulation with iterative decoding (BICM-ID) for forward error correction. The fiber transmission system is characterized in terms of signal-to-noise ratio for back-to-back case and correlated with simulated results for ideal transmission over additive...... white Gaussian noise channel. Thereafter, successful demodulation and decoding after dispersion-unmanaged transmission over 240-km standard single mode fiber of dual-polarization 6-Gbaud 16-, 32- and 64-ary SCM-PSM is experimentally demonstrated....

  7. Development of Lower Plenum Molten Pool Module of Severe Accident Analysis Code in Korea

    International Nuclear Information System (INIS)

    Son, Donggun; Kim, Dong-Ha; Park, Rae-Jun; Bae, Jun-Ho; Shim, Suk-Ku; Marigomen, Ralph

    2014-01-01

    To simulate severe accident progression in a nuclear power plant and forecast reactor pressure vessel failure, we are developing computational software called COMPASS (COre Meltdown Progression Accident Simulation Software) for the whole of the physical phenomena inside the reactor pressure vessel, from core heat-up to vessel failure. As a part of the COMPASS project, in the first phase of COMPASS development (2011-2014), we focused on the molten pool behavior in the lower plenum and the heat-up and ablation of the reactor vessel wall. Input from the core module of COMPASS is the relocated melt composition and mass in time. Molten pool behavior is described with a lumped parameter model. Heat transfer between the oxidic and metallic molten pools, overlying water, steam and debris bed is considered in the present study. The models and correlations used in this study are selected appropriately for the physical conditions of severe accident progression. Interaction between the molten pools and the reactor vessel wall is also simulated with the lumped parameter model. Heat transfer between the oxidic pool, the thin crust of the oxidic pool and the reactor vessel wall is considered, and simple energy balance equations are solved for the crust thickness of the oxidic pool and the reactor vessel wall. As a result, we performed a benchmark calculation for the APR1400 nuclear power plant, with the assumption that the mass relocated from the core is constant in time at 0.2 ton/s. We discuss the molten pool behavior and wall ablation to validate the models and correlations used in COMPASS. The stand-alone SIMPLE program is developed as the lower plenum molten pool module for the COMPASS in-vessel severe accident analysis code. The SIMPLE program formulates the mass and energy balances for water, steam, particulate debris bed, molten corium pools and oxidic crust from first principles and uses models and correlations as the constitutive relations for the governing equations. 
A limited steam table and the material properties are provided

  8. Tunable modulation of refracted Lamb wave front facilitated by adaptive elastic metasurfaces

    Science.gov (United States)

    Li, Shilong; Xu, Jiawen; Tang, J.

    2018-01-01

    This letter reports designs of adaptive metasurfaces capable of modulating incoming wave fronts of elastic waves through electromechanical-tuning of their cells. The proposed elastic metasurfaces are composed of arrayed piezoelectric units with individually connected negative capacitance elements that are online tunable. By adjusting the negative capacitances properly, accurately formed, discontinuous phase profiles along the elastic metasurfaces can be achieved. Subsequently, anomalous refraction with various angles can be realized on the transmitted lowest asymmetric mode Lamb wave. Moreover, designs to facilitate planar focal lenses and source illusion devices can also be accomplished. The proposed flexible and versatile strategy to manipulate elastic waves has potential applications ranging from structural fault detection to vibration/noise control.
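    The anomalous refraction described above follows the generalized Snell's law: a constant phase gradient dφ/dx imposed along the metasurface shifts the transmitted angle according to sin θ_t = sin θ_i + (1/k)·dφ/dx. A minimal sketch of that relation; the 40 mm wavelength and 60 mm phase-ramp aperture below are illustrative assumptions, not values from the paper:

```python
import math

def refraction_angle(phase_gradient, wavenumber, incidence_deg=0.0):
    """Transmitted angle (degrees) from the generalized Snell's law:
    sin(theta_t) = sin(theta_i) + (1/k) * d(phi)/dx."""
    s = math.sin(math.radians(incidence_deg)) + phase_gradient / wavenumber
    if abs(s) > 1.0:
        return None  # evanescent regime: no propagating refracted wave
    return math.degrees(math.asin(s))

# A full 2*pi phase ramp imposed over a 60 mm aperture, acting on a Lamb
# wave of 40 mm wavelength (both assumed), steers a normally incident wave:
k = 2 * math.pi / 0.04       # wavenumber, rad/m
grad = 2 * math.pi / 0.06    # imposed phase gradient, rad/m
theta_t = refraction_angle(grad, k)  # roughly 42 degrees
```

Sweeping the negative capacitances reshapes the phase profile, and hence `grad`, which is how the refraction angle becomes tunable.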

  9. Performance analysis of two-way amplify and forward relaying with adaptive modulation

    KAUST Repository

    Hwang, Kyusung

    2009-09-01

    In this paper, we study two-way amplify-and-forward relaying in conjunction with adaptive modulation over a multiple relay network. In order to keep the diversity order equal to the number of relays and maintain a low complexity, we consider the best relay selection scheme in this work. Based on the proposed selection criterion for the best relay, we analyze the average spectral efficiency by its approximated upper bound. In addition, we extend the proposed scheme to the case where a direct path between source and destination exists. Our numerical examples show that the proposed system offers a considerable gain in the spectral efficiency while satisfying the error rate requirements. ©2009 IEEE.
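    The abstract does not spell out the selection criterion; a common best-relay rule in amplify-and-forward networks picks the relay with the strongest bottleneck link, i.e. the one maximizing the minimum of its two hop SNRs. A hedged sketch of that generic rule (not necessarily the paper's exact criterion):

```python
def select_best_relay(snr_pairs):
    """Pick the index of the relay whose weaker hop (bottleneck) is strongest.
    snr_pairs: list of (snr_source_to_relay, snr_relay_to_destination)."""
    return max(range(len(snr_pairs)), key=lambda i: min(snr_pairs[i]))

# Relay 2 wins: its bottleneck SNR min(9, 8) = 8 beats min(10, 3) = 3
# and min(5, 6) = 5.
links = [(10.0, 3.0), (5.0, 6.0), (9.0, 8.0)]
best = select_best_relay(links)
```

With the best relay chosen, adaptive modulation then maps the resulting end-to-end SNR onto the highest-order constellation that still meets the target error rate.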

  10. Coding and decoding with adapting neurons: a population approach to the peri-stimulus time histogram.

    Science.gov (United States)

    Naud, Richard; Gerstner, Wulfram

    2012-01-01

    The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a 'quasi-renewal equation' which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.

  11. Adaptation and perceptual norms

    Science.gov (United States)

    Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole

    2007-02-01

    We used adaptation to examine the relationship between perceptual norms--the stimuli observers describe as psychologically neutral, and response norms--the stimulus levels that leave visual sensitivity in a neutral or balanced state. Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.

  12. CENTAR code for extended nonlinear transient analysis of extraterrestrial reactor systems

    International Nuclear Information System (INIS)

    Nassersharif, B.; Peer, J.S.; DeHart, M.D.

    1987-01-01

    Current interest in the application of nuclear reactor-driven power systems to space missions has generated a need for a systems simulation code to model and analyze space reactor systems; such a code has been initiated at Texas A&M, and the first version is nearing completion, with release anticipated in the fall of 1987. This code, named CENTAR (Code for Extended Nonlinear Transient Analysis of Extraterrestrial Reactor Systems), is designed specifically for space systems and is highly vectorizable. CENTAR is composed of several specialized modules. A fluids module is used to model fluid behavior throughout the system. A wall heat transfer module models the heat transfer characteristics of all walls, insulation, and structure around the system. A fuel element thermal analysis module is used to predict the temperature behavior and heat transfer characteristics of the reactor fuel rods. A kinetics module uses a six-group point kinetics formulation to model reactivity feedback and control, and the ANS 5.1 decay-heat curve to model shutdown decay-heat production. A pump module models the behavior of thermoelectric-electromagnetic pumps, and a heat exchanger module not only models thermal effects in thermoelectric heat exchangers but also predicts electrical power production for a given configuration. Finally, an accumulator module models coolant expansion/contraction accumulators.
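    The six-group point kinetics formulation used by the CENTAR kinetics module has the standard form dn/dt = ((ρ − β)/Λ)n + Σᵢ λᵢCᵢ, dCᵢ/dt = (βᵢ/Λ)n − λᵢCᵢ. The sketch below integrates it with forward Euler; the group constants are typical thermal U-235 values chosen for illustration, not CENTAR's data:

```python
# Standard six-group point kinetics, integrated with forward Euler.
# Delayed-neutron constants below are typical thermal U-235 values
# (illustrative only, not taken from the CENTAR code).
beta_i = [0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273]
lam_i  = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]   # decay constants, 1/s
beta   = sum(beta_i)                                   # total delayed fraction
Lambda = 1e-4                                          # generation time, s (assumed)

def step(n, C, rho, dt):
    """One Euler step of the point kinetics equations."""
    dn = ((rho - beta) / Lambda) * n + sum(l * c for l, c in zip(lam_i, C))
    C_new = [c + dt * (b / Lambda * n - l * c)
             for b, l, c in zip(beta_i, lam_i, C)]
    return n + dt * dn, C_new

# Start at equilibrium (dC_i/dt = 0 gives C_i = beta_i*n/(Lambda*lam_i));
# with zero reactivity the power should remain constant.
n = 1.0
C = [b * n / (Lambda * l) for b, l in zip(beta_i, lam_i)]
for _ in range(1000):
    n, C = step(n, C, rho=0.0, dt=1e-5)
```

A reactivity feedback model, as in CENTAR, would recompute `rho` each step from fuel and coolant temperatures before calling `step`.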

  13. Adaptation Mechanism of the Aspartate Receptor: Electrostatics of the Adaptation Subdomain Play a Key Role in Modulating Kinase Activity†

    Science.gov (United States)

    Starrett, Diane J.; Falke, Joseph J.

    2010-01-01

    The aspartate receptor of the Escherichia coli and Salmonella typhimurium chemotaxis pathway generates a transmembrane signal that regulates the activity of the cytoplasmic kinase CheA. Previous studies have identified a region of the cytoplasmic domain that is critical to receptor adaptation and kinase regulation. This region, termed the adaptation subdomain, contains a high density of acidic residues, including specific glutamate residues that serve as receptor adaptation sites. However, the mechanism of signal propagation through this region remains poorly understood. This study uses site-directed mutagenesis to neutralize each acidic residue within the subdomain to probe the hypothesis that electrostatics in this region play a significant role in the mechanism of kinase activation and modulation. Each point mutant was tested for its ability to regulate chemotaxis in vivo and kinase activity in vitro. Four point mutants (D273N, E281Q, D288N, and E477Q) were found to superactivate the kinase relative to the wild-type receptor, and all four of these kinase-activating substitutions are located along the same intersubunit interface as the adaptation sites. These activating substitutions retained the wild-type ability of the attractant-occupied receptor to inhibit kinase activity. When combined in a quadruple mutant (D273N/E281Q/D288N/E477Q), the four charge-neutralizing substitutions locked the receptor in a kinase-superactivating state that could not be fully inactivated by the attractant. Similar lock-on character was observed for a charge reversal substitution, D273R. Together, these results implicate the electrostatic interactions at the intersubunit interface as a major player in signal transduction and kinase regulation. The negative charge in this region destabilizes the local structure in a way that enhances conformational dynamics, as detected by disulfide trapping, and this effect is reversed by charge neutralization of the adaptation sites. 
Finally, two

  14. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    Science.gov (United States)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  15. Enhancement of combined heat and power economic dispatch using self adaptive real-coded genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Subbaraj, P. [Kalasalingam University, Srivilliputhur, Tamilnadu 626 190 (India); Rengaraj, R. [Electrical and Electronics Engineering, S.S.N. College of Engineering, Old Mahabalipuram Road, Thirupporur (T.K), Kalavakkam, Kancheepuram (Dist.) 603 110, Tamilnadu (India); Salivahanan, S. [S.S.N. College of Engineering, Old Mahabalipuram Road, Thirupporur (T.K), Kalavakkam, Kancheepuram (Dist.) 603 110, Tamilnadu (India)

    2009-06-15

    In this paper, a self-adaptive real-coded genetic algorithm (SARGA) is implemented to solve the combined heat and power economic dispatch (CHPED) problem. Self-adaptation is achieved by means of tournament selection along with simulated binary crossover (SBX). The selection process has a powerful exploration capability, creating tournaments between two solutions: the better solution is chosen and placed in the mating pool, leading to better convergence and reduced computational burden. The SARGA integrates a penalty-parameterless constraint handling strategy and simultaneously handles equality and inequality constraints. Population diversity is introduced by making use of the distribution index in the SBX operator to create better offspring. This leads to high diversity in the population, which increases the probability of reaching the global optimum and prevents premature convergence. The SARGA is applied to solve the CHPED problem with a bounded feasible operating region, which has a large number of local minima. The numerical results demonstrate that the proposed method can find a solution close to the global optimum and compares favourably with other recent methods in terms of solution quality, constraint handling and computation time. (author)

  16. Endogenous adaptation to low oxygen modulates T-cell regulatory pathways in EAE.

    Science.gov (United States)

    Esen, Nilufer; Katyshev, Vladimir; Serkin, Zakhar; Katysheva, Svetlana; Dore-Duffy, Paula

    2016-01-19

    In the brain, chronic inflammatory activity may lead to compromised delivery of oxygen and glucose, suggesting that therapeutic approaches aimed at restoring metabolic balance may be useful. In vivo exposure to chronic mild normobaric hypoxia (10 % oxygen) leads to a number of endogenous adaptations that include vascular remodeling (angioplasticity). Angioplasticity promotes tissue survival. We have previously shown that induction of adaptive angioplasticity modulates the disease pattern in myelin oligodendrocyte glycoprotein (MOG)-induced experimental autoimmune encephalomyelitis (EAE). In the present study, we define mechanisms by which adaptation to low oxygen functionally ameliorates the signs and symptoms of EAE and for the first time show that tissue hypoxia may fundamentally alter neurodegenerative disease. C57BL/6 mice were immunized with MOG, and some of them were kept in the hypoxia chambers (day 0) and exposed to 10 % oxygen for 3 weeks, while the others were kept in a normoxic environment. Sham-immunized controls were included in both hypoxic and normoxic groups. Animals were sacrificed at pre-clinical and peak disease periods for tissue collection and analysis. Exposure to mild hypoxia decreased histological evidence of inflammation. Decreased numbers of cluster of differentiation (CD)4+ T cells were found in the hypoxic spinal cords, associated with a delayed Th17-specific cytokine response. Hypoxia-induced changes did not alter the sensitization of peripheral T cells to the MOG peptide. Exposure to mild hypoxia induced significant increases in anti-inflammatory IL-10 levels and an increase in the number of spinal cord CD25+FoxP3+ T-regulatory cells. Acclimatization to mild hypoxia incites a number of endogenous adaptations that induce an anti-inflammatory milieu. Further understanding of these mechanisms may pinpoint possible new therapeutic targets to treat neurodegenerative disease.

  17. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  18. Divisible ℤ-modules

    Directory of Open Access Journals (Sweden)

    Futa Yuichi

    2016-03-01

    In this article, we formalize the definition of a divisible ℤ-module and its properties in the Mizar system [3]. We formally prove that any non-trivial divisible ℤ-module is not finitely generated. We introduce a divisible ℤ-module, equivalent to a vector space of a torsion-free ℤ-module with coefficient ring ℚ. ℤ-modules are important for lattice problems, the LLL (Lenstra, Lenstra and Lovász) base reduction algorithm [15], cryptographic systems with lattices [16] and coding theory [8].

  19. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L. [Oak Ridge National Lab., TN (United States); Sartori, E. [OCDE/OECD NEA Data Bank, Issy-les-Moulineaux (France); Viedma, L.G. de [Consejo de Seguridad Nuclear, Madrid (Spain)

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  20. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    International Nuclear Information System (INIS)

    Kirk, B.L.; Sartori, E.; Viedma, L.G. de

    1997-01-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management

  1. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  2. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    Science.gov (United States)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that allows deviating the direction of an X-ray beam, which can considerably increase the implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded aperture approximation decreases by at most 12.5% compared with the phase coded apertures. Moreover, the quality of the reconstructions using the boolean approximations is up to 2.5 dB lower in PSNR than that of the phase coded aperture reconstructions.
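    The PSNR figure of merit used above is defined as 10·log₁₀(peak²/MSE). A minimal sketch, assuming an 8-bit peak value of 255:

```python
import math

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# A reconstruction off by exactly 1 gray level everywhere (MSE = 1) gives
# 10*log10(255^2), about 48.13 dB.
ref = [10.0, 20.0, 30.0, 40.0]
rec = [11.0, 21.0, 31.0, 41.0]
value = psnr(ref, rec)
```

On this scale, the reported 2.5 dB gap between the boolean approximation and the true phase coded aperture corresponds to roughly a 78% increase in mean squared reconstruction error.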

  3. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast

    KAUST Repository

    She, James

    2009-11-01

    Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective and scalable approach to increase the system scalability while tackling the multi-user channel diversity problem. The power allocation and modulation selection problem, however, is subject to a high computational complexity due to the nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize a scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network.

  4. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast

    KAUST Repository

    She, James; Ho, Pin-Han; Shihada, Basem

    2009-01-01

    Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective and scalable approach to increase the system scalability while tackling the multi-user channel diversity problem. The power allocation and modulation selection problem, however, is subject to a high computational complexity due to the nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize a scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network.

  5. SU-E-J-254: Utility of Pinnacle Dynamic Planning Module Utilizing Deformable Image Registration in Adaptive Radiotherapy

    International Nuclear Information System (INIS)

    Jani, S

    2014-01-01

    Purpose: For certain highly conformal treatment techniques, changes in patient anatomy due to weight loss and/or tumor shrinkage can result in significant changes in dose distribution. Recently, the Pinnacle treatment planning system added a Dynamic Planning module utilizing Deformable Image Registration (DIR). The objective of this study was to evaluate the effectiveness of this software in adapting to altered anatomy and adjusting treatment plans to account for it. Methods: We simulated significant tumor response by changing patient thickness and altering chin positions using a commercially-available head and neck (H and N) phantom. In addition, we studied 23 CT image sets of fifteen (15) patients with H and N tumors and eight (8) patients with prostate cancer. In each case, we applied deformable image registration through the Dynamic Planning module of our Pinnacle treatment planning system. The dose distribution of the original CT image set was compared to the newly computed dose without altering any treatment parameter. The result was the dose that would be delivered if the plan were not adjusted to reflect anatomical changes. Results: For the H and N phantom, a tumor response of up to 3.5 cm was correctly deformed by the Pinnacle Dynamic module. Recomputed isodose contours on new anatomies were within 1 mm of the expected distribution. The Pinnacle system configuration allowed dose computations resulting from original plans on new anatomies without leaving the planning system. Original and new doses were available side-by-side with both CT image sets. Based on DIR, about 75% of H and N patients (11/15) required a re-plan using the new anatomy. Among prostate patients, the DIR predicted near-correct bladder volume in 62% of the patients (5/8). Conclusions: The Dynamic Planning module of the Pinnacle system proved to be an accurate and useful tool in our ability to adapt to changes in patient anatomy during a course of radiotherapy

  6. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    International Nuclear Information System (INIS)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-01-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al.[Phys. Rev. D 78, 123524 (2008)] and Schmidt et al.[Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k∼20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  7. Brazilian cross-cultural adaptation of the DocCom online module: communication for teamwork

    Directory of Open Access Journals (Sweden)

    Tatiane Angélica Phelipini Borges

    2017-09-01

    Objective: to carry out the cross-cultural adaptation of DocCom online module 38, which deals with communication for teamwork, into Portuguese for the Brazilian context. Method: the cross-cultural translation and adaptation were accomplished through initial translations, synthesis of the translations, evaluation and synthesis by a committee of experts, analysis by translators and back-translation, a pre-test with nurses and undergraduate nursing students, and analysis by the translators to obtain the final material. Results: in the evaluation and synthesis of the translated version against the original version by the expert committee, the items obtained higher than 80% agreement. Few modifications were suggested according to the analysis by pre-test participants. The final version was adequate to the proposed context and its purpose. Conclusion: it is believed that by making this new teaching-learning strategy of communication skills and competencies for teamwork available, it can be used systematically in undergraduate and postgraduate courses in the health area in Brazil, in order to contribute to training professionals and to making advances in this field.

  8. Coupling calculation of CFD-ACE computational fluid dynamics code and DeCART whole-core neutron transport code for development of numerical reactor

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Chang Hwan; Seo, Kyong Won; Chun, Tae Hyun; Kim, Kang Seog

    2005-03-15

    Code coupling activities have so far focused on coupling the neutronics modules with the CFD module. An interface module for the CFD-ACE/DeCART coupling was established as an alternative to the original STAR-CD/DeCART interface. The interface module for DeCART/CFD-ACE was validated with a single-pin model. The optimized CFD mesh was decided through the calculation of a multi-pin model. It was important to consider turbulent mixing between subchannels for the calculation of fuel temperature. For the parallel calculation, an optimized decomposition process was necessary to reduce the calculation costs, and setting the iteration and convergence criteria for each code was important, too.

  9. Coupling calculation of CFD-ACE computational fluid dynamics code and DeCART whole-core neutron transport code for development of numerical reactor

    International Nuclear Information System (INIS)

    Shin, Chang Hwan; Seo, Kyong Won; Chun, Tae Hyun; Kim, Kang Seog

    2005-03-01

    Code coupling activities have so far focused on coupling the neutronics modules with the CFD module. An interface module for the CFD-ACE/DeCART coupling was established as an alternative to the original STAR-CD/DeCART interface. The interface module for DeCART/CFD-ACE was validated with a single-pin model. The optimized CFD mesh was decided through the calculation of a multi-pin model. It was important to consider turbulent mixing between subchannels for the calculation of fuel temperature. For the parallel calculation, an optimized decomposition process was necessary to reduce the calculation costs, and setting the iteration and convergence criteria for each code was important, too

  10. Fission-product release modelling in the ASTEC integral code: the status of the ELSA module

    International Nuclear Information System (INIS)

    Plumecocq, W.; Kissane, M.P.; Manenc, H.; Giordano, P.

    2003-01-01

    Safety assessment of water-cooled nuclear reactors encompasses potential severe accidents where, in particular, the release of fission products (FPs) and actinides into the reactor coolant system (RCS) is evaluated. The ELSA module is used in the ASTEC integral code to model all releases into the RCS. A wide variety of experiments is used for validation: small-scale CRL, ORNL and VERCORS tests; large-scale Phebus-FP tests; etc. Being a tool that covers both intact-fuel and degraded states, ELSA is being improved by maximizing the use of information from degradation modelling. Short-term improvements will include some treatment of initial FP release due to intergranular inventories and the implementation of models for the release of additional structural materials (Sn, Fe, etc.). (author)

  11. Visual encoding of a QR code using a Gaussian modulating function

    Institute of Scientific and Technical Information of China (English)

    郭兴华; 朱小刚

    2017-01-01

    Visually appealing codes incorporate high-level visual features, such as colors, letters, illustrations, or logos. Researchers have attempted to endow the QR code with aesthetic elements, and QR code beautification has been formulated as an optimization problem that minimizes visual perception distortion subject to an acceptable decoding rate. However, the visual quality of QR codes generated by existing methods still requires improvement. The key challenge is the lack of a proper understanding, or of analytical formulations, capturing the stability of QR codes under variations in lighting, camera specifications, and even perturbations to the codes themselves. Patented and poorly documented algorithms employed to read QR codes cause further difficulties. Consequently, existing approaches are mostly ad hoc and often favor readability at the cost of reduced visual quality. Method: This work presents an algorithm that visually encodes a QR code by synthesizing the conventional QR code with a theme image. This task is fulfilled by dividing the theme image into equal-sized, non-overlapping blocks and modifying the average luminance of each block toward its corresponding module type in the QR code by applying a well-designed Gaussian modulating function, whose standard deviation is dynamically determined according to the smoothness of the corresponding module block. The brightness of the central region of the modified module changes gradually along the circular direction and presents a smooth appearance at different sizes, making it consistent with the human visual system. In addition, the size of the module's brightness-sensing region can be adjusted according to the application scenario and the sensitivity of the human eye to different kinds of noise. Result: In the experimental stage, visually meaningful QR codes are synthesized with different parameter settings, and their correct decoding rate is tested. The optimal parameters are determined to ensure decoding reliability and make the QR code
easily recognizable for
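The block-wise luminance modulation the abstract describes can be sketched as follows; the Gaussian weighting form, the target luminance values, and the block size are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def modulate_block(block, target, sigma):
    """Shift a block's centre luminance toward a QR module target.

    A 2-D Gaussian weight concentrates the change in the block centre,
    leaving the edges close to the original theme image (illustrative
    sketch; the paper's exact modulating function is not reproduced).
    """
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    weight = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    return block + weight * (target - block.mean())

# A smooth (flat) block would get a smaller sigma than a textured one,
# mimicking the smoothness-dependent standard deviation in the paper.
block = np.full((8, 8), 120.0)                 # theme-image block, luminance 0..255
dark = modulate_block(block, 20.0, sigma=2.0)  # encode a dark module
print(round(dark[4, 4], 1), round(dark[0, 0], 1))
```

The centre of the block moves close to the dark-module target while the corners stay near the theme image, which is what keeps the result visually smooth.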

  12. An efficient approach for electric load forecasting using distributed ART (adaptive resonance theory) and HS-ARTMAP (Hyper-spherical ARTMAP network) neural network

    International Nuclear Information System (INIS)

    Cai, Yuan; Wang, Jian-zhou; Tang, Yun; Yang, Yu-chen

    2011-01-01

    This paper presents a neural network based on adaptive resonance theory, named distributed ART (adaptive resonance theory) and HS-ARTMAP (Hyper-spherical ARTMAP network), applied to the electric load forecasting problem. The distributed ART combines the stable fast-learning capabilities of winner-take-all ART systems with the noise tolerance and code-compression capabilities of multi-layer perceptrons. The HS-ARTMAP, a hybrid of an RBF (Radial Basis Function)-network-like module that substitutes a hyper-sphere basis function for the Gaussian basis function and an ART-like module, provides incremental learning capabilities for function approximation problems. The HS-ARTMAP receives only the compressed distributed coding produced by the distributed ART, which addresses the proliferation problem that the ARTMAP (adaptive resonance theory map) architecture often encounters, and it still performs well in electric load forecasting. To demonstrate the performance of the methodology, data from New South Wales and Victoria in Australia are used. Results show that the developed method is much better than the traditional BP and single HS-ARTMAP neural networks. -- Research highlights: → The processing of the presented network is based on compressed distributed data, an innovation among adaptive resonance theory architectures. → The presented network reduces the proliferation problem that Fuzzy ARTMAP architectures usually encounter. → The network forecasts electrical load on-line accurately and stably. → Both one-period and multi-period load forecasting are executed using data from different cities.
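The substitution of a hyper-sphere basis function for the Gaussian can be contrasted in a few lines; the exact functional form below (full response inside a radius, linear decay outside) is an assumed illustration, not necessarily the paper's definition:

```python
import numpy as np

def gaussian_basis(x, centre, sigma):
    """Standard RBF activation: smooth, with unbounded support."""
    return np.exp(-np.sum((x - centre) ** 2) / (2.0 * sigma ** 2))

def hypersphere_basis(x, centre, radius):
    """Hyper-spherical activation: full response inside the sphere,
    linearly decaying match outside (one plausible reading of the
    HS-ARTMAP idea; the paper's exact form may differ)."""
    d = np.linalg.norm(x - centre)
    return 1.0 if d <= radius else max(0.0, 1.0 - (d - radius) / radius)

x = np.array([0.2, 0.1])
c = np.zeros(2)
g = gaussian_basis(x, c, 0.5)
h = hypersphere_basis(x, c, 0.3)
print(g, h)
```

The hyper-sphere unit treats every point within its radius as a full match, which is what makes category regions geometrically compact and easy to grow incrementally.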

  13. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    OpenAIRE

    Sandeep Kakde; Atish Khobragade; Shrikant Ambatkar; Pranay Nandanwar

    2017-01-01

    For binary fields and long code lengths, Low-Density Parity-Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error-correction performance and therefore enlarge the design space for communication systems. In this paper, we have compared different digital modulation techniques and found that BPSK modulation is better than the other modulation techniques in terms of BER. The paper also gives the error performance of an LDPC decoder over an AWGN channel using the Min-Sum algori...
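The Min-Sum kernel the abstract refers to has a compact core: at each check node, the outgoing message on an edge combines the signs and the minimum magnitude of the other incoming log-likelihood ratios. A minimal sketch of that update (not the paper's layered hardware architecture):

```python
import numpy as np

def check_node_update(llrs):
    """Min-sum check-node update: for each edge, the outgoing message is
    the product of the signs of the *other* incoming LLRs times the
    minimum of their magnitudes (illustrative kernel only; a layered
    decoder schedules these updates row-group by row-group)."""
    llrs = np.asarray(llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)
        sign = np.prod(np.sign(others))
        out[i] = sign * np.min(np.abs(others))
    return out

out = check_node_update([2.0, -1.5, 0.5])
print(out)
```

Replacing the exact sum-product `tanh` computation with a sign/minimum rule is what makes Min-Sum attractive for hardware implementations such as the one described.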

  14. Analysis and Design of Adaptive OCDMA Passive Optical Networks

    Science.gov (United States)

    Hadi, Mohammad; Pakravan, Mohammad Reza

    2017-07-01

    OCDMA systems can support multiple classes of service by differentiating code parameters, power level and diversity order. In this paper, we analyze BER performance of a multi-class 1D/2D OCDMA system and propose a new approximation method that can be used to generate accurate estimation of system BER using a simple mathematical form. The proposed approximation provides insight into proper system level analysis, system level design and sensitivity of system performance to the factors such as code parameters, power level and diversity order. Considering code design, code cardinality and system performance constraints, two design problems are defined and their optimal solutions are provided. We then propose an adaptive OCDMA-PON that adaptively shares unused resources of inactive users among active ones to improve upstream system performance. Using the approximated BER expression and defined design problems, two adaptive code allocation algorithms for the adaptive OCDMA-PON are presented and their performances are evaluated by simulation. Simulation results show that the adaptive code allocation algorithms can increase average transmission rate or decrease average optical power consumption of ONUs for dynamic traffic patterns. According to the simulation results, for an adaptive OCDMA-PON with BER value of 1e-7 and user activity probability of 0.5, transmission rate (optical power consumption) can be increased (decreased) by a factor of 2.25 (0.27) compared to fixed code assignment.

  15. Concept of adaptability in space modules.

    Science.gov (United States)

    Cooper, M

    1990-10-01

    The space program is aiming toward the permanent use of space: to build and establish an orbital space station and a Moon base, and to depart for Mars and beyond. We must pursue total independence from the Earth's natural resources and work on the design of a modular space base in which each module is capable of duplicating one natural process, so that all these modules in combination allow us to conceive a space base capable of sustaining life. Every area of human knowledge must be involved. This modular concept will let us see other space goals as extensions of the primary project. The basic technology has to be defined; then relatively minor adjustments will let us reach new objectives such as a first approach to a lunar base and to a manned Mars mission. This concept aims toward an open technology in which standards and recommendations will be created to assemble large space bases and spaceships from specific modules that perform certain functions and that, in combination, will let us reach the status of permanent use and exploration of space.

  16. Coupled geochemical and solute transport code development

    International Nuclear Information System (INIS)

    Morrey, J.R.; Hostetler, C.J.

    1985-01-01

    A number of coupled geochemical hydrologic codes have been reported in the literature. Some of these codes directly couple the source-sink term to the solute transport equation. The current consensus seems to be that directly coupling hydrologic transport and chemical models through a series of interdependent differential equations is not feasible for multicomponent problems with complex geochemical processes (e.g., precipitation/dissolution reactions). A two-step process appears to be the required method of coupling codes for problems where a large suite of chemical reactions must be monitored. The two-step structure requires that the source-sink term in the transport equation be supplied by a geochemical code rather than by an analytical expression. We have developed a one-dimensional two-step coupled model designed to calculate relatively complex geochemical equilibria (CTM1D). Our geochemical module implements a Newton-Raphson algorithm to solve heterogeneous geochemical equilibria, involving up to 40 chemical components and 400 aqueous species. The geochemical module was designed to be efficient and compact. A revised version of the MINTEQ Code is used as the parent geochemical code.
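The Newton-Raphson iteration at the heart of such a geochemical module can be illustrated on a single equilibrium; the dissociation reaction and constants below are illustrative stand-ins for the 40-component systems CTM1D handles:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration, the same scheme a geochemical module
    applies (here to one equilibrium instead of a coupled system)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Acetic-acid-like dissociation HA <-> H+ + A-:  x^2 / (C_T - x) = Ka,
# where x is the dissociated concentration and C_T the total (made-up values).
Ka, C_T = 1.8e-5, 0.1
f = lambda x: x * x / (C_T - x) - Ka
df = lambda x: (2 * x * (C_T - x) + x * x) / (C_T - x) ** 2
x = newton(f, df, x0=1e-3)
print(f"{x:.6e}")
```

In the two-step coupling, a solve like this runs in every grid cell at every transport step, returning the source-sink term to the transport equation.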

  17. An Adaptive Coding Scheme For Effective Bandwidth And Power ...

    African Journals Online (AJOL)

    Codes for communication channels are in most cases chosen on the basis of the signal to noise ratio expected on a given transmission channel. The worst possible noise condition is normally assumed in the choice of appropriate codes such that a specified minimum error shall result during transmission on the channel.

  18. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    Science.gov (United States)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs) and significant memory accesses are consumed. Heavy memory accesses will cause high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding by using a program instead of all the VLCTs. The decoded codeword from VLCTs can be obtained without any table look-up and memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows a better performance compared with conventional CAVLC decoding, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
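As a flavor of table-free decoding, a structured H.264 code such as unsigned Exp-Golomb can be decoded purely arithmetically; this is an illustrative analogy only, since the CAVLC VLCTs the paper replaces are more irregular than this example:

```python
def decode_exp_golomb(bits):
    """Decode one unsigned Exp-Golomb codeword arithmetically rather than
    by table look-up: count leading zeros, then read that many suffix
    bits (illustrates the table-free idea; not the paper's CAVLC logic)."""
    zeros = 0
    while bits[zeros] == 0:
        zeros += 1
    value = 1
    for b in bits[zeros + 1: 2 * zeros + 1]:
        value = (value << 1) | b
    return value - 1, 2 * zeros + 1     # (decoded value, bits consumed)

print(decode_exp_golomb([0, 0, 1, 0, 1]))   # codeword "00101"
```

Every memory access that a table look-up would make is replaced here by a few ALU operations, which is exactly the trade the paper exploits to cut power and delay.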

  19. (Nearly) portable PIC code for parallel computers

    International Nuclear Information System (INIS)

    Decyk, V.K.

    1993-01-01

    As part of the Numerical Tokamak Project, the author has developed a (nearly) portable, one-dimensional version of the GCPIC algorithm for particle-in-cell codes on parallel computers. This algorithm uses a spatial domain decomposition for the fields, and passes particles from one domain to another as the particles move spatially. With only minor changes, the code has been run in parallel on the Intel Delta, the Cray C-90, the IBM ES/9000 and a cluster of workstations. After a line-by-line translation into CM Fortran, the code was also run on the CM-200. Impressive speeds have been achieved, both on the Intel Delta and the Cray C-90, around 30 nanoseconds per particle per time step. In addition, the author was able to isolate the data management modules, so that the physics modules were not changed much from their sequential version, and the data management modules can be used as "black boxes".
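The particle-passing step of the GCPIC decomposition can be sketched serially; on a parallel machine each domain's list lives on its own processor and the moves become messages (the domain bounds and particle positions below are made up):

```python
def exchange_particles(domains, bounds):
    """Move particles whose positions have left their spatial domain to
    the owning domain, as in the GCPIC decomposition (serial sketch of
    the data-management step that the physics modules never see)."""
    moved = []
    for d, (lo, hi) in zip(domains, bounds):
        stay = [x for x in d if lo <= x < hi]
        moved.extend(x for x in d if not (lo <= x < hi))
        d[:] = stay
    for x in moved:
        for d, (lo, hi) in zip(domains, bounds):
            if lo <= x < hi:
                d.append(x)
                break
    return domains

domains = [[0.1, 0.6], [0.4, 0.9]]   # particle positions after a push step
bounds = [(0.0, 0.5), (0.5, 1.0)]    # each processor owns one spatial slab
print(exchange_particles(domains, bounds))
```

Keeping this exchange logic in an isolated module is what lets the sequential physics routines run unchanged on each processor's local particles.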

  20. User Instructions for the Systems Assessment Capability, Rev. 0, Computer Codes Volume 1: Inventory, Release, and Transport Modules

    International Nuclear Information System (INIS)

    Eslinger, Paul W.; Engel, David W.; Gerhardstein, Lawrence H.; Lopresti, Charles A.; Nichols, William E.; Strenge, Dennis L.

    2001-12-01

    One activity of the Department of Energy's Groundwater/Vadose Zone Integration Project is an assessment of cumulative impacts from Hanford Site wastes on the subsurface environment and the Columbia River. Through the application of a system assessment capability (SAC), decisions for each cleanup and disposal action will be able to take into account the composite effect of other cleanup and disposal actions. The SAC has developed a suite of computer programs to simulate the migration of contaminants (analytes) present on the Hanford Site and to assess the potential impacts of the analytes, including dose to humans, socio-cultural impacts, economic impacts, and ecological impacts. The general approach to handling uncertainty in the SAC computer codes is a Monte Carlo approach. Conceptually, one generates a value for every stochastic parameter in the code (the entire sequence of modules from inventory through transport and impacts) and then executes the simulation, obtaining an output value, or result. This document provides user instructions for the SAC codes that handle inventory tracking, release of contaminants to the environment, and transport of contaminants through the unsaturated zone, saturated zone, and the Columbia River
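The Monte Carlo approach described above - generate a value for every stochastic parameter, then execute the whole module sequence once per realization - can be sketched as follows; the parameter names and toy model are hypothetical placeholders, not actual SAC inputs:

```python
import random

def run_realization(simulate, parameter_dists, seed):
    """One Monte Carlo realization in the SAC style: sample a value for
    every stochastic parameter, then run the whole inventory-to-transport
    chain once (simulate and the parameters here are placeholders)."""
    rng = random.Random(seed)
    params = {name: dist(rng) for name, dist in parameter_dists.items()}
    return simulate(params)

# Illustrative stand-ins for an inventory/release/transport chain:
dists = {
    "inventory_ci": lambda r: r.lognormvariate(0.0, 0.5),
    "release_frac": lambda r: r.uniform(0.01, 0.1),
}
toy_model = lambda p: p["inventory_ci"] * p["release_frac"]
results = [run_realization(toy_model, dists, seed=s) for s in range(1000)]
print(len(results), min(results) >= 0.0)
```

Seeding each realization independently makes individual runs reproducible, which matters when a single result out of thousands needs to be re-examined.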

  1. A dosimetric comparison of two-phase adaptive intensity-modulated radiotherapy for locally advanced nasopharyngeal cancer

    International Nuclear Information System (INIS)

    Chitapanarux, Imjai; Chomprasert, Kittisak; Nobnaop, Wannapa; Wanwilairat, Somsak; Tharavichitkul, Ekasit; Jakrabhandu, Somvilai; Onchan, Wimrak; Patrinee, Traisathit; Gestel, Dirk Van

    2015-01-01

    The purpose of this investigation was to evaluate the potential dosimetric benefits of a two-phase adaptive intensity-modulated radiotherapy (IMRT) protocol for patients with locally advanced nasopharyngeal cancer (NPC). A total of 17 patients with locally advanced NPC treated with IMRT had a second computed tomography (CT) scan after 17 fractions in order to apply and continue the treatment with an adapted plan after 20 fractions. To simulate the situation without adaptation, a hybrid plan was generated by applying the optimization parameters of the original treatment plan to the anatomy of the second CT scan. The dose-volume histograms (DVHs) and dose statistics of the hybrid plan and the adapted plan were compared. The mean volume of the ipsilateral and contralateral parotid glands decreased by 6.1 cm³ (30.5%) and 5.4 cm³ (24.3%), respectively. Compared with the hybrid plan, the adapted plan provided a higher dose to the target volumes with better homogeneity, and a lower dose to the organs at risk (OARs). The Dmin of all planning target volumes (PTVs) increased. The Dmax of the spinal cord and brainstem was lower in 94% of the patients (1.6-5.9 Gy, P < 0.001 and 2.1-9.9 Gy, P < 0.001, respectively). The Dmean of the contralateral parotid decreased in 70% of the patients (range, 0.2-4.4 Gy). We could not find a relationship between dose variability and weight loss. Our two-phase adaptive IMRT protocol improves dosimetric results in terms of target volumes and OARs in patients with locally advanced NPC. (author)

  2. Physics Based Model for Cryogenic Chilldown and Loading. Part IV: Code Structure

    Science.gov (United States)

    Luchinsky, D. G.; Smelyanskiy, V. N.; Brown, B.

    2014-01-01

    This is the fourth report in a series of technical reports that describe the application of a separated two-phase flow model to the cryogenic loading operation. In this report we present the structure of the code. The code consists of five major modules: (1) geometry module; (2) solver; (3) material properties; (4) correlations; and finally (5) stability control module. The two key modules - solver and correlations - are further divided into a number of submodules. Most of the physics and knowledge databases related to the properties of cryogenic two-phase flow are included in the cryogenic correlations module. The functional form of those correlations is not well established and is a subject of extensive research. Multiple parametric forms for various correlations are currently available. Some of them are included in the correlations module, as will be described in detail in a separate technical report. Here we describe the overall structure of the code and focus on the details of the solver and stability control modules.

  3. Time domain spectral phase encoding/DPSK data modulation using single phase modulator for OCDMA application.

    Science.gov (United States)

    Wang, Xu; Gao, Zhensen; Kataoka, Nobuyuki; Wada, Naoya

    2010-05-10

    A novel scheme using a single phase modulator for simultaneous time domain spectral phase encoding (SPE) signal generation and DPSK data modulation is proposed and experimentally demonstrated. Array-Waveguide-Grating and Variable-Bandwidth-Spectrum-Shaper based devices can be used for decoding the signal directly in the spectral domain. The effects of fiber dispersion, light pulse width and timing error on the coding performance have been investigated by simulation and verified in experiment. In the experiment, an SPE signal with 8-chip, 20 GHz/chip optical code patterns has been generated and modulated with 2.5 Gbps DPSK data using a single modulator. Transmission of the 2.5 Gbps data over 34 km of fiber has been demonstrated, indicating the scheme's potential for OCDMA and secure optical communication applications. (c) 2010 Optical Society of America.

  4. Cultural adaptation and validation of the Freiburg Life Quality Assessment - Wound Module to Brazilian Portuguese

    Directory of Open Access Journals (Sweden)

    Elaine Aparecida Rocha Domingues

    2016-01-01

    Objectives: to adapt the Freiburg Life Quality Assessment - Wound Module to Brazilian Portuguese and to measure its psychometric properties: reliability and validity. Method: the cultural adaptation was undertaken following the stages of translation, synthesis of the translations, back translation, committee of specialists, pre-test and focus group. A total of 200 patients participated in the study. These were recruited in Primary Care Centers, Family Health Strategy Centers, in a philanthropic hospital and in a teaching hospital. Reliability was assessed through internal consistency and stability. Validity was ascertained through the correlation of the instrument's values with those of the domains of the Ferrans and Powers Quality of Life Index - Wound Version and with the quality of life score of the visual analog scale. Results: the instrument presented adequate internal consistency (Cronbach alpha = 0.86) and high stability in the test and retest (0.93). The validity presented correlations of moderate and significant magnitude (-0.24 to -0.48, p < 0.0001). Conclusion: the results indicated that the adapted version presented reliable and valid psychometric measurements for the population with chronic wounds in the Brazilian culture.

  5. A static analysis tool set for assembler code verification

    International Nuclear Information System (INIS)

    Dhodapkar, S.D.; Bhattacharjee, A.K.; Sen, Gopa

    1991-01-01

    Software Verification and Validation (V and V) is an important step in assuring reliability and quality of the software. The verification of program source code forms an important part of the overall V and V activity. The static analysis tools described here are useful in verification of assembler code. The tool set consists of static analysers for Intel 8086 and Motorola 68000 assembly language programs. The analysers examine the program source code and generate information about control flow within the program modules, unreachable code, well-formation of modules, call dependency between modules etc. The analysis of loops detects unstructured loops and syntactically infinite loops. Software metrics relating to size and structural complexity are also computed. This report describes the salient features of the design, implementation and the user interface of the tool set. The outputs generated by the analyser are explained using examples taken from some projects analysed by this tool set. (author). 7 refs., 17 figs
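One of the analyses mentioned, unreachable-code detection, reduces to a reachability search over the program's control-flow graph; the toy graph below stands in for one a real analyser would build from the assembler source:

```python
def unreachable(cfg, entry):
    """Flag basic blocks not reachable from the entry point, one of the
    checks the assembler analysers perform (toy control-flow graph as a
    dict of block -> successor blocks; names are illustrative)."""
    seen, stack = set(), [entry]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(cfg.get(node, []))
    return set(cfg) - seen

cfg = {
    "start": ["loop"],
    "loop": ["loop", "exit"],
    "exit": [],
    "dead": ["exit"],      # never jumped to from anywhere reachable
}
print(unreachable(cfg, "start"))
```

The same graph also supports the other reported analyses: cycle detection finds unstructured or syntactically infinite loops, and edge counts feed structural-complexity metrics.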

  6. APOLLO 10 ASTRONAUT ENTERS LUNAR MODULE SIMULATOR

    Science.gov (United States)

    1969-01-01

    Apollo 10 lunar module pilot Eugene A. Cernan prepares to enter the lunar module simulator at the Flight Crew Training Building at the NASA Spaceport. Cernan, Apollo 10 commander Thomas P. Stafford and John W. Young, command module pilot, are to be launched May 18 on the Apollo 10 mission, a dress rehearsal for a lunar landing later this summer. Cernan and Stafford are to detach the lunar module and drop to within 10 miles of the moon's surface before rejoining Young in the command/service module. Looking on as Cernan puts on his soft helmet is Snoopy, the lovable cartoon mutt whose name will be the lunar module code name during the Apollo 10 flight. The command/service module is to bear the code name Charlie Brown.

  7. Modified hybrid subcarrier/amplitude/ phase/polarization LDPC-coded modulation for 400 Gb/s optical transmission and beyond.

    Science.gov (United States)

    Batshon, Hussam G; Djordjevic, Ivan; Xu, Lei; Wang, Ting

    2010-06-21

    In this paper, we present a modified coded hybrid subcarrier/amplitude/phase/polarization (H-SAPP) modulation scheme as a technique capable of achieving beyond 400 Gb/s single-channel transmission over optical channels. The modified H-SAPP scheme profits from the available resources in addition to geometry to increase the bandwidth efficiency of the transmission system, and so increases the aggregate rate of the system. In this report we present the modified H-SAPP scheme and focus on an example that allows 11 bits/symbol and can achieve 440 Gb/s transmission using components of 50 Giga-Symbol/s (GS/s).

  8. Decoding Different Patterns in Various Grey Tones Incorporated in the QR Code

    Directory of Open Access Journals (Sweden)

    Filip Cvitić

    2014-07-01

    Using colors in bar codes causes errors that may adversely affect their readability (Tan et al., 2010), given that the contrast between data and background modules is reduced. Due to the unreliability of using color bar codes, most designers still keep to the limitations placed by Pira International (Smithers Pira) in 2002 (Williams, 2004). Since the contrast between data modules and background modules is the most important aspect in the process of reliable bar code decoding, this paper explores the dependence of reliable decoding of QR codes incorporating combinations of grey tones on the technical characteristics of the cameras of smartphones that were marketed in the period between 2008 and 2012.
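The decoding criterion the study probes empirically can be approximated by a contrast check between grey tones; the linear-reflectance model and the 0.4 threshold below are illustrative assumptions, not the study's measured results:

```python
def decodable(data_tone, background_tone, min_contrast=0.4):
    """Crude proxy for the readability question the study asks: does the
    grey-tone contrast between data and background modules exceed a
    threshold? Tones are 8-bit luminance values; the threshold and the
    linear model are assumptions, since real decodability depends on the
    camera, as the study shows."""
    contrast = abs(data_tone - background_tone) / 255.0
    return contrast >= min_contrast

print(decodable(40, 220), decodable(120, 160))
```

A pair like (40, 220) passes comfortably, while two mid-greys such as (120, 160) fall below the threshold, which mirrors why closely spaced tones failed on weaker cameras.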

  9. The MICHELLE 2D/3D ES PIC Code Advances and Applications

    CERN Document Server

    Petillo, John; De Ford, John F; Dionne, Norman J; Eppley, Kenneth; Held, Ben; Levush, Baruch; Nelson, Eric M; Panagos, Dimitrios; Zhai, Xiaoling

    2005-01-01

    MICHELLE is a new 2D/3D steady-state and time-domain particle-in-cell (PIC) code* that employs electrostatic and now magnetostatic finite-element field solvers. The code has been used to design and analyze a wide variety of devices that includes multistage depressed collectors, gridded guns, multibeam guns, annular-beam guns, sheet-beam guns, beam-transport sections, and ion thrusters. The latest additions to the MICHELLE/Voyager tool are as follows: 1) a prototype 3D self-magnetic field solver using the curl-curl finite-element formulation for the magnetic vector potential, employing edge basis functions and accumulating current with MICHELLE's new unstructured-grid particle tracker; 2) the electrostatic field solver now accommodates dielectric media; 3) periodic boundary conditions are now functional on all grids, not just structured grids; 4) the addition of a global optimization module to the user interface, where electrical parameters (such as electrode voltages) can be optimized; and 5) adaptive mesh ref...

  10. Development of integrated computer code for analysis of risk reduction strategy

    International Nuclear Information System (INIS)

    Kim, Dong Ha; Kim, See Darl; Kim, Hee Dong

    2002-05-01

    The development of the MIDAS/TH integrated severe accident code was performed in three main areas: 1) addition of new models derived from the national experimental programs and models for APR-1400 Korea next generation reactor, 2) improvement of the existing models using the recently available results, and 3) code restructuring for user friendliness. The unique MIDAS/TH models include: 1) a kinetics module for core power calculation during ATWS, 2) a gap cooling module between the molten corium pool and the reactor vessel wall, 3) a penetration tube failure module, 4) a PAR analysis module, and 5) a look-up table for the pressure and dynamic load during steam explosion. The improved models include: 1) a debris dispersal module considering the cavity geometry during DCH, 2) hydrogen burn and deflagration-to-detonation transition criteria, 3) a peak pressure estimation module for hydrogen detonation, and 4) the heat transfer module between the molten corium pool and the overlying water. The sparger and the ex-vessel heat transfer module were assessed. To enhance user friendliness, code restructuring was performed. In addition, a sample of severe accident analysis results was organized under the preliminary database structure

  11. Water System Adaptation To Hydrological Changes: Module 7, Adaptation Principles and Considerations

    Science.gov (United States)

    This course will introduce students to the fundamental principles of water system adaptation to hydrological changes, with emphasis on data analysis and interpretation, technical planning, and computational modeling. Starting with real-world scenarios and adaptation needs, the co...

  12. Deciphering the BAR code of membrane modulators.

    Science.gov (United States)

    Salzer, Ulrich; Kostan, Julius; Djinović-Carugo, Kristina

    2017-07-01

    The BAR domain is the eponymous domain of the "BAR-domain protein superfamily", a large and diverse set of mostly multi-domain proteins that play eminent roles at the membrane cytoskeleton interface. BAR domain homodimers are the functional units that peripherally associate with lipid membranes and are involved in membrane sculpting activities. Differences in their intrinsic curvatures and lipid-binding properties account for a large variety in membrane modulating properties. Membrane activities of BAR domains are further modified and regulated by intramolecular or inter-subunit domains, by intermolecular protein interactions, and by posttranslational modifications. Rather than providing detailed cell biological information on single members of this superfamily, this review focuses on biochemical, biophysical, and structural aspects and on recent findings that paradigmatically promote our understanding of processes driven and modulated by BAR domains.

  13. Quality assurance aspects of the environmental code NECTAR

    International Nuclear Information System (INIS)

    Macdonald, H.F.; Nair, S.; Mascall, R.A.

    1986-02-01

    This report describes the quality assurance (QA) procedures which have been adopted in respect of the Environment code NECTAR (Nuclear Environmental Consequences, Transport of Activity and Risks). These procedures involve the verification, validation and evaluation of the individual NECTAR modules, namely RICE, SIRKIT, ATMOS, POPDOS and FOODWEB. The verification and validation of each module are considered in turn, while the final part of the report provides an overall evaluation of the code. The QA procedures are designed to provide reassurance that the NECTAR code is free from systematic errors and will perform calculations within the range of uncertainty and limitations claimed in its documentation. Following consideration of a draft version of this report by the Off-site Dose Methodology Working Group, the ATMOS, POPDOS and FOODWEB modules of NECTAR have been endorsed for use by the Board in reactor design and safety studies. (author)

  14. Visual Coding of Human Bodies: Perceptual Aftereffects Reveal Norm-Based, Opponent Coding of Body Identity

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.

    2013-01-01

    Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…

  15. Sub-module Short Circuit Fault Diagnosis in Modular Multilevel Converter Based on Wavelet Transform and Adaptive Neuro Fuzzy Inference System

    DEFF Research Database (Denmark)

    Liu, Hui; Loh, Poh Chiang; Blaabjerg, Frede

    2015-01-01

    for continuous operation and post-fault maintenance. In this article, a fault diagnosis technique is proposed for the short circuit fault in a modular multi-level converter sub-module using the wavelet transform and adaptive neuro fuzzy inference system. The fault features are extracted from output phase voltage...

  16. Multiple Module Simulation of Water Cooled Breeding Blankets in K-DEMO Using Thermal-Hydraulic Analysis Code MARS-KS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Geon-Woo; Lee, Jeong-Hun; Park, Goon-Cherl; Cho, Hyoung-Kyu [Seoul National University, Seoul (Korea, Republic of); Im, Kihak [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    A preliminary concept for the Korean fusion demonstration reactor (K-DEMO) has been studied by the National Fusion Research Institute (NFRI) based on the National Fusion Roadmap of Korea. The feasibility studies have been performed in order to establish the conceptual design guidelines of the breeding blanket. As a part of the NFRI research, Seoul National University (SNU) is conducting thermal design, evaluation and validation of the water-cooled breeding blanket for the K-DEMO reactor. The purpose of this study is to extend the capability of MARS-KS to the overall blanket system analysis, which includes 736 blanket modules in total. The strategy for the multi-module blanket system analysis using MARS-KS is introduced and the analysis result of the 46 blanket modules of a single sector is summarized. A thermal-hydraulic analysis code for nuclear reactor safety, MARS-KS, was applied for thermal analysis of the conceptual design of the K-DEMO breeding blanket. Then, a methodology to simulate multiple blanket modules was proposed, which uses a supervisor program to handle each blanket module individually at first and then distribute the flow rate considering the pressure drop that occurs in each module. For a feasibility test of the proposed methodology, 46 blankets in a sector, which are connected with each other through the common headers for the sector inlet and outlet, were simulated. The calculation results of flow rates, pressure drops, and temperatures showed the validity of the calculation. Because of parallelization using the MPI system, the computational time could be reduced significantly. In the future, this methodology will be extended to an efficient simulation of multiple sectors, and further validation for transient simulation will be carried out for more practical applications.
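The supervisor's flow-distribution idea can be illustrated for parallel channels: if each module's pressure drop follows dP_i = k_i * m_i**2 (an assumed quadratic loss model, with made-up coefficients), equalizing the drops across the common headers fixes the flow split:

```python
import math

def distribute_flow(total, k):
    """Split a total mass flow among parallel blanket modules so every
    branch sees the same pressure drop, assuming dP_i = k_i * m_i**2
    (a simplified closed-form stand-in for the supervisor's iteration
    over MARS-KS module results)."""
    inv = [1.0 / math.sqrt(ki) for ki in k]
    s = sum(inv)
    return [total * w / s for w in inv]

flows = distribute_flow(total=10.0, k=[1.0, 4.0, 1.0])
dps = [ki * m ** 2 for ki, m in zip([1.0, 4.0, 1.0], flows)]
print([round(m, 3) for m in flows], round(max(dps) - min(dps), 9))
```

The branch with the larger loss coefficient receives proportionally less flow, and all three pressure drops come out equal, which is the physical condition the common inlet and outlet headers impose.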

  17. Index modulation for 5G wireless communications

    CERN Document Server

    Wen, Miaowen; Yang, Liuqing

    2017-01-01

    This book presents a thorough examination of index modulation, an emerging 5G modulation technique. It includes representative transmitter and receiver design, optimization, and performance analysis of index modulation in various domains. First, the basic spatial modulation system for the spatial domain is introduced. Then, the development of a generalized pre-coding aided quadrature spatial modulation system as well as a virtual spatial modulation system are presented. For the space-time domain, a range of differential spatial modulation systems are examined, along with the pre-coding design. Both basic and enhanced index modulated OFDM systems for the frequency domain are discussed, focusing on the verification of their strong capabilities in inter-carrier interference mitigation. Finally, key open problems are highlighted and future research directions are considered. Designed for researchers and professionals, this book is essential for anyone working in communications networking, 5G, and system design. A...

  18. New computational methods used in the lattice code DRAGON

    International Nuclear Information System (INIS)

    Marleau, G.; Hebert, A.; Roy, R.

    1992-01-01

    The lattice code DRAGON is used to perform transport calculations inside cells and assemblies for multidimensional geometry using the collision probability method, including the interface current and J ± techniques. Typical geometries that can be treated using this code include CANDU 2-dimensional clusters, CANDU 3-dimensional assemblies, pressurized water reactor (PWR) rectangular and hexagonal assemblies. It contains a self-shielding module for the treatment of microscopic cross section libraries and a depletion module for burnup calculations. DRAGON was written in a modular form in such a way as to accept easily new collision probability options and make them readily available to all the modules that require collision probability matrices like the self-shielding module, the flux solution module and the homogenization module. In this paper the authors present an overview of DRAGON and discuss some of the methods that were implemented in DRAGON in order to improve on its performance

  19. Inclusion of pressure and flow in the KITES MHD equilibrium code

    International Nuclear Information System (INIS)

    Raburn, Daniel; Fukuyama, Atsushi

    2013-01-01

    One of the simplest self-consistent models of a plasma is single-fluid magnetohydrodynamic (MHD) equilibrium with no bulk fluid flow under axisymmetry. However, both fluid flow and non-axisymmetric effects can significantly impact plasma equilibrium and confinement properties: in particular, fluid flow can produce profile pedestals, and non-axisymmetric effects can produce islands and stochastic regions. There exist a number of computational codes which are capable of calculating equilibria with arbitrary flow or with non-axisymmetric effects. Previously, a concept for a code to calculate MHD equilibria with flow in non-axisymmetric systems was presented, called the KITES (Kyoto ITerative Equilibrium Solver) code. Since then, many of the computational modules for the KITES code have been completed, and the work-in-progress KITES code has been used to calculate non-axisymmetric force-free equilibria. Additional computational modules are required to allow the KITES code to calculate equilibria with pressure and flow. Here, the authors report on the approaches used in developing these modules and provide a sample calculation with pressure. (author)

  20. Adaptation of the U.S. Food Security Survey Module for Low-Income Pregnant Latinas: Qualitative Phase.

    Science.gov (United States)

    Hromi-Fiedler, Amber; Bermúdez-Millán, Angela; Segura-Pérez, Sofia; Damio, Grace; Pérez-Escamilla, Rafael

    2009-01-01

The objectives of this study were to: 1) assess the face validity of the 18-item US Household Food Security Survey Module (US HFSSM) among low-income pregnant Latinas and 2) adapt the US HFSSM to the target population. This study was conducted in Hartford, Connecticut, United States, where 40% of residents are of Latino descent. Three focus groups (N=14 total) were held with pregnant and postpartum Latinas from April to June 2004 to assess the understanding and applicability (face validity) of the US HFSSM and to adapt it based on their recommendations. This was followed by pre-testing (N=7) to make final adaptations to the US HFSSM. Overall, the items in the US HFSSM were clear and understandable to participants, although some questions sounded repetitive to them. Participants felt the questions were applicable to other pregnant Latinas in their community and shared food-security-related experiences and strategies. Participants' recommendations led to key adaptations to the US HFSSM, including reducing the scale to 15 items, wording statements as questions, including two time periods, replacing the term "balanced meals" with "healthy and varied", replacing the term "low cost foods" with "cheap foods" and including a definition of the term, and adding avoiding running out of food as a coping mechanism. The adapted US HFSSM was found to have good face validity among pregnant Latinas and can be used to assess food insecurity in this vulnerable population.

  1. Brazilian cross-cultural adaptation of the DocCom online module: communication for teamwork 1

    Science.gov (United States)

    Borges, Tatiane Angélica Phelipini; Vannuchi, Marli Terezinha Oliveira; Grosseman, Suely; González, Alberto Durán

    2017-01-01

ABSTRACT Objective: to carry out the cross-cultural adaptation of DocCom online module 38, which deals with teamwork communication, into Portuguese for the Brazilian context. Method: the cross-cultural translation and adaptation were accomplished through initial translations, synthesis of the translations, evaluation and synthesis by a committee of experts, analysis by translators and back-translation, pre-testing with nurses and undergraduate Nursing students, and analysis by the translators to obtain the final material. Results: in the expert committee's evaluation and synthesis of the translated version against the original, the items obtained more than 80% agreement. Few modifications were suggested in the analysis by pre-test participants. The final version was adequate for the proposed context and purpose. Conclusion: it is believed that this new teaching-learning strategy for communication skills and competencies for teamwork, now available in Portuguese, can be used systematically in undergraduate and postgraduate courses in the health area in Brazil, contributing to the training of professionals and to advances in this field.

  2. SCAMPI: A code package for cross-section processing

    International Nuclear Information System (INIS)

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-01-01

The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis

  3. SCAMPI: A code package for cross-section processing

    Energy Technology Data Exchange (ETDEWEB)

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-04-01

The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis.

  4. Feature coding for image representation and recognition

    CERN Document Server

    Huang, Yongzhen

    2015-01-01

This brief presents a comprehensive introduction to feature coding, which serves as a key module in the typical object recognition pipeline. The text offers a rich blend of theory and practice while reflecting recent developments in feature coding, covering the following five aspects: (1) Review the state-of-the-art, analyzing the motivations and mathematical representations of various feature coding methods; (2) Explore how various feature coding algorithms have evolved over the years; (3) Summarize the main characteristics of typical feature coding algorithms and categorize them accordingly; (4) D

  5. Zebra: An advanced PWR lattice code

    Energy Technology Data Exchange (ETDEWEB)

    Cao, L.; Wu, H.; Zheng, Y. [School of Nuclear Science and Technology, Xi' an Jiaotong Univ., No. 28, Xianning West Road, Xi' an, ShannXi, 710049 (China)

    2012-07-01

This paper presents an overview of ZEBRA, an advanced PWR lattice code developed at the NECP laboratory of Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY, and the 361-group SHEM structure is employed. The resonance calculation module is based on the subgroup method. The transport solver is the Auto-MOC code, a self-developed code based on the Method of Characteristics and customization of the AutoCAD software. The whole code is organized in a modular software structure. Numerical results from the validation of the code demonstrate good precision and high efficiency. (authors)

  6. Zebra: An advanced PWR lattice code

    International Nuclear Information System (INIS)

    Cao, L.; Wu, H.; Zheng, Y.

    2012-01-01

This paper presents an overview of ZEBRA, an advanced PWR lattice code developed at the NECP laboratory of Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY, and the 361-group SHEM structure is employed. The resonance calculation module is based on the subgroup method. The transport solver is the Auto-MOC code, a self-developed code based on the Method of Characteristics and customization of the AutoCAD software. The whole code is organized in a modular software structure. Numerical results from the validation of the code demonstrate good precision and high efficiency. (authors)

  7. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    Science.gov (United States)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu [Phys. Rev. D 78, 123524 (2008)] and Schmidt [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ~ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on fully nonlinear scales.
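Nonlinear field equations of this type are commonly solved on the mesh by Newton-Gauss-Seidel relaxation. The one-dimensional sketch below illustrates only that relaxation idea; the cubic nonlinearity, grid, and boundary conditions are invented for the example and are not MLAPM's actual f(R) field equation.

```python
import numpy as np

# Toy 1-D Newton-Gauss-Seidel relaxation for a nonlinear Poisson-type
# equation u'' = u**3 - rho with fixed zero boundary values. The u**3
# nonlinearity is purely illustrative, standing in for the chameleon term.
def newton_gauss_seidel(rho, h=1.0, sweeps=500):
    u = np.zeros_like(rho, dtype=float)
    n = len(u)
    for _ in range(sweeps):
        for i in range(1, n - 1):
            # residual of (u[i-1] - 2 u[i] + u[i+1])/h^2 - u[i]^3 + rho[i] = 0
            res = (u[i-1] - 2.0*u[i] + u[i+1]) / h**2 - u[i]**3 + rho[i]
            dres = -2.0 / h**2 - 3.0 * u[i]**2  # derivative w.r.t. u[i]
            u[i] -= res / dres                  # local Newton update
    return u
```

In a production code the same local Newton update is applied on each level of an adaptive multigrid hierarchy rather than on a single uniform sweep.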

  8. An NPARC Turbulence Module with Wall Functions

    Science.gov (United States)

    Zhu, J.; Shih, T.-H.

    1997-01-01

The turbulence module recently developed for the NPARC code has been extended to include wall functions. The Van Driest transformation is used so that the wall functions can be applied to both incompressible and compressible flows. The module is equipped with three two-equation K-epsilon turbulence models: the Chien, Shih-Lumley and CMOTR models. Details of the wall functions as well as their numerical implementation are reported. It is shown that inappropriate artificial viscosity in the near-wall region strongly influences the solution of the wall-function approach. A simple way to eliminate this influence is proposed, which gives satisfactory results in the code validation. The module can be easily linked to the NPARC code for practical applications.
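The two ingredients named above can be sketched in a few lines: the incompressible log-law wall function, and the Van Driest transform that density-weights the velocity so the same law applies to compressible profiles. The constants and the trapezoidal discretization are illustrative choices, not the NPARC module's actual implementation.

```python
import numpy as np

# Illustrative log-law constants (von Karman kappa and additive B);
# real codes may calibrate these differently.
KAPPA, B = 0.41, 5.2

def log_law_uplus(y_plus):
    """Incompressible log-law velocity u+ for a given y+."""
    return np.log(y_plus) / KAPPA + B

def van_driest_transform(u, rho, rho_wall):
    """Van Driest effective velocity u_vd = integral of sqrt(rho/rho_w) du,
    evaluated with the trapezoidal rule on a discrete velocity profile."""
    integrand = np.sqrt(np.asarray(rho, float) / rho_wall)
    du = np.diff(np.asarray(u, float))
    u_vd = np.concatenate(([0.0],
                           np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * du)))
    return u_vd
```

For constant density the transform reduces to the untransformed velocity, which is why the same log law covers both regimes.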

  9. Feature-based plan adaptation for fast treatment planning in scanned ion beam therapy

    International Nuclear Information System (INIS)

    Chen Wenjing; Gemmel, Alexander; Rietzel, Eike

    2013-01-01

We propose a plan adaptation method for fast treatment plan generation in scanned ion beam therapy. Analysis of optimized treatment plans with carbon ions indicates that the particle-number modulation of consecutive raster spots in depth shows little variation throughout target volumes with convex shape. Thus, we extract a depth-modulation curve (DMC) from existing reference plans and adapt it for creation of new plans in similar treatment situations. The proposed method is tested with seven CT series of prostate patients and three digital phantom datasets generated with MATLAB. Plans are generated with a treatment planning software developed by GSI using single-field uniform dose optimization for all the CT datasets to serve as reference plans and 'gold standard'. The adapted plans are generated based on the DMC derived from the reference plans of the same patient (intra-patient), a different patient (inter-patient) and phantoms (phantom-patient). They are compared with the reference plans and a re-positioning strategy. Generally, within 1 min on a standard PC, either a physical plan or a biological plan can be generated with the adaptive method, provided that the new target contour is available. In all cases, the V95 values of the adapted plans reach 97% for either physical or biological plans. V107 is always 0, indicating no overdosage, and target dose homogeneity is above 0.98 in all cases. The dose received by the organs at risk is comparable to that of the optimized plans. The plan adaptation method has the potential for on-line adaptation to deal with inter-fractional motion, as well as fast off-line treatment planning, with either the prescribed physical dose or the RBE-weighted dose. (paper)
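The DMC idea reduces to extracting a normalized particle-number-versus-depth profile from a reference plan and resampling it onto the depth range of a new target. The function names, the linear depth remapping, and the interpolation choice below are assumptions for illustration, not GSI's treatment-planning implementation.

```python
import numpy as np

# Hypothetical sketch: a depth-modulation curve (DMC) is the reference
# plan's particle-number profile over depth, normalized to unit area.
def extract_dmc(depths, particle_numbers):
    w = np.asarray(particle_numbers, float)
    return np.asarray(depths, float), w / w.sum()

def adapt_plan(dmc, new_depths, total_particles):
    """Resample the DMC onto the new raster depths and rescale so the
    adapted spot weights sum to the prescribed total particle number."""
    ref_d, ref_w = dmc
    new_depths = np.asarray(new_depths, float)
    # linearly map the new depth range onto the reference depth range
    t = (new_depths - new_depths.min()) / (new_depths.max() - new_depths.min())
    mapped = ref_d.min() + t * (ref_d.max() - ref_d.min())
    w = np.interp(mapped, ref_d, ref_w)
    return total_particles * w / w.sum()
```

Because only a 1-D curve is carried over, the adapted spot weights can be produced essentially instantly once the new target contour fixes the depth range, consistent with the sub-minute planning times reported.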

10. The WIMS family of codes

    International Nuclear Information System (INIS)

    Askew, J.

    1981-01-01

WIMS-D4 is the latest version of the original form of the Winfrith Improved Multigroup Scheme, developed in 1963-5 for lattice calculations on all types of thermal reactor, whether moderated by graphite, heavy or light water. Earlier versions of the code have been available from the NEA code centre for a number of years in both IBM and CDC dialects of FORTRAN. An important feature of this code is its rapid, accurate deterministic treatment of resonance capture in heavy nuclides, capable of dealing both with regular pin lattices and with the cluster geometries typical of pressure-tube and gas-cooled reactors. WIMS-E is a compatible code scheme in which each calculation step is bounded by standard interfaces on disc or tape. The interfaces contain files of information in a standard form, restricted to numbers representing physically meaningful quantities such as cross-sections and fluxes. Restricting code intercommunication to this channel limits the possible propagation of errors. A module is capable of transforming WIMS-D output into the standard interface form, and hence the two schemes can be linked if required. LWR-WIMS was developed in 1970 as a method of calculating LWR reloads for the fuel fabricators BNFL/GUNF. It uses the WIMS-E library and a number of the same modules

  11. Multiplexed Spike Coding and Adaptation in the Thalamus

    Directory of Open Access Journals (Sweden)

    Rebecca A. Mease

    2017-05-01

High-frequency "burst" clusters of spikes are a generic output pattern of many neurons. While bursting is a ubiquitous computational feature of nervous systems across animal species, the encoding of synaptic inputs by bursts is not well understood. We find that bursting neurons in the rodent thalamus employ "multiplexing" to differentially encode low- and high-frequency stimulus features associated with either T-type calcium "low-threshold" or fast sodium spiking events, respectively, and that these events adapt differently. Thus, thalamic bursts encode disparate information in three channels: (1) burst size, (2) burst onset time, and (3) precise spike timing within bursts. Strikingly, this latter "intraburst" encoding channel shows millisecond-level feature selectivity and adapts across statistical contexts to maintain stable information encoded per spike. Consequently, calcium events both encode low-frequency stimuli and, in parallel, gate a transient window for high-frequency, adaptive stimulus encoding by sodium spike timing, allowing bursts to efficiently convey fine-scale temporal information.
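The three channels can be made concrete by splitting a spike train into bursts and reading off size, onset time, and intra-burst intervals per burst. The 10 ms inter-spike-interval criterion below is an illustrative threshold for grouping spikes into bursts, not the paper's actual detection rule.

```python
# Sketch: decompose a sorted spike train (times in seconds) into bursts and
# extract the three putative encoding channels described in the abstract.
def burst_channels(spike_times, isi_max=0.010):
    bursts, current = [], [spike_times[0]]
    for prev, t in zip(spike_times, spike_times[1:]):
        if t - prev <= isi_max:
            current.append(t)          # same burst: short inter-spike interval
        else:
            bursts.append(current)     # gap too long: close burst, start new one
            current = [t]
    bursts.append(current)
    return [
        {"size": len(b),                                         # channel 1
         "onset": b[0],                                          # channel 2
         "intraburst_isis": [b[i+1] - b[i] for i in range(len(b)-1)]}  # channel 3
        for b in bursts
    ]
```

Downstream analyses would then relate each channel separately to the stimulus, e.g. low-frequency features to burst size/onset and millisecond-scale features to the intra-burst intervals.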

  12. Simultaneous chromatic dispersion and PMD compensation by using coded-OFDM and girth-10 LDPC codes.

    Science.gov (United States)

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-07-07

    Low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is studied as an efficient coded modulation scheme suitable for simultaneous chromatic dispersion and polarization mode dispersion (PMD) compensation. We show that, for aggregate rate of 10 Gb/s, accumulated dispersion over 6500 km of SMF and differential group delay of 100 ps can be simultaneously compensated with penalty within 1.5 dB (with respect to the back-to-back configuration) when training sequence based channel estimation and girth-10 LDPC codes of rate 0.8 are employed.

  13. Adaptation of the U.S. Food Security Survey Module for Low-Income Pregnant Latinas: Qualitative Phase

    OpenAIRE

    Hromi-Fiedler, Amber; Bermúdez-Millán, Angela; Segura-Pérez, Sofia; Damio, Grace; Pérez-Escamilla, Rafael

    2009-01-01

The objectives of this study were to: 1) assess the face validity of the 18-item US Household Food Security Survey Module (US HFSSM) among low-income pregnant Latinas and 2) adapt the US HFSSM to the target population. This study was conducted in Hartford, Connecticut, United States, where 40% of residents are of Latino descent. Three focus groups (N=14 total) were held with pregnant and postpartum Latinas from April to June 2004 to assess the understanding and applicability (face validi...

  14. A Spanish version for the new ERA-EDTA coding system for primary renal disease

    Directory of Open Access Journals (Sweden)

    Óscar Zurriaga

    2015-07-01

Conclusions: Translation and adaptation into Spanish represent an improvement that will facilitate the introduction and use of the new coding system for primary renal disease (PRD), as it can help reduce the time devoted to coding and shorten health workers' period of adaptation to the new codes.

  15. Experimental demonstration of real-time adaptively modulated DDO-OFDM systems with a high spectral efficiency up to 5.76bit/s/Hz transmission over SMF links.

    Science.gov (United States)

    Chen, Ming; He, Jing; Tang, Jin; Wu, Xian; Chen, Lin

    2014-07-28

In this paper, an FPGA-based real-time adaptively modulated 256/64/16-QAM-encoded baseband OFDM transceiver with a high spectral efficiency of up to 5.76 bit/s/Hz is developed and experimentally demonstrated in a simple intensity-modulated direct-detection optical communication system. Experimental results show that it is feasible to transmit an adaptively modulated real-time optical OFDM signal at a raw bit rate of 7.19 Gb/s over 20 km and 50 km single-mode fibers (SMFs). A performance comparison between real-time and off-line digital signal processing shows a negligible power penalty. In addition, to obtain the best transmission performance, the direct-current (DC) bias voltage of the Mach-Zehnder modulator (MZM) and the launch power into the optical fiber links are explored in the real-time optical OFDM systems.
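The adaptive-modulation decision amounts to picking, per subcarrier, the densest constellation whose SNR requirement the estimated channel still meets. The SNR thresholds below are illustrative placeholders, not the thresholds measured in this experiment.

```python
import math

# Hypothetical (min SNR in dB, QAM order) table, densest constellation first.
THRESHOLDS_DB = [(22.0, 256), (16.0, 64), (10.0, 16)]

def choose_qam(snr_db):
    """Return the largest QAM order whose assumed SNR requirement is met."""
    for min_snr, order in THRESHOLDS_DB:
        if snr_db >= min_snr:
            return order
    return 4  # fall back to QPSK on poor subcarriers

def bits_per_symbol(order):
    return int(math.log2(order))
```

Summing `bits_per_symbol(choose_qam(snr))` over all data subcarriers, times the symbol rate, gives the adaptive raw bit rate; denser loading on high-SNR subcarriers is what pushes spectral efficiency toward the 5.76 bit/s/Hz figure.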

  16. Hole-thru-laminate mounting supports for photovoltaic modules

    Science.gov (United States)

    Wexler, Jason; Botkin, Jonathan; Culligan, Matthew; Detrick, Adam

    2015-02-17

    A mounting support for a photovoltaic module is described. The mounting support includes a pedestal having a surface adaptable to receive a flat side of a photovoltaic module laminate. A hole is disposed in the pedestal, the hole adaptable to receive a bolt or a pin used to couple the pedestal to the flat side of the photovoltaic module laminate.

  17. ELCOS: the PSI code system for LWR core analysis. Part II: user's manual for the fuel assembly code BOXER

    International Nuclear Information System (INIS)

    Paratte, J.M.; Grimm, P.; Hollard, J.M.

    1996-02-01

ELCOS is a flexible code system for the stationary simulation of light water reactor cores. It consists of the four computer codes ETOBOX, BOXER, CORCOD and SILWER. The user's manual of the second one is presented here. BOXER calculates the neutronics in Cartesian geometry. The code can roughly be divided into four stages: organisation (choice of the modules, file manipulations, reading and checking of input data); fine-group fluxes and condensation (one-dimensional calculation of fluxes and computation of the group constants of homogeneous materials and cells); two-dimensional calculations (geometrically detailed simulation of the configuration in few energy groups); and burnup (evolution of the nuclide densities as a function of time). This manual describes all input commands which can be used while running the different modules of BOXER. (author) figs., tabs., refs

  18. RBMK-LOCA-Analyses with the ATHLET-Code

    Energy Technology Data Exchange (ETDEWEB)

    Petry, A. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH Kurfuerstendamm, Berlin (Germany); Domoradov, A.; Finjakin, A. [Research and Development Institute of Power Engineering, Moscow (Russian Federation)

    1995-09-01

The scientific-technical cooperation between Germany and Russia includes the adaptation of several German codes to the Russian-designed RBMK reactor. One part of this cooperation is the adaptation of the thermal-hydraulic code ATHLET (Analyses of the Thermal-Hydraulics of LEaks and Transients) to RBMK-specific safety problems. This paper contains a short description of an RBMK-1000 reactor circuit. Furthermore, the main features of the thermal-hydraulic code ATHLET are presented, and the main assumptions of the ATHLET RBMK model are discussed. As an example application, the results of test calculations concerning a guillotine-type rupture of a distribution group header are presented and discussed, and the general analysis conditions are described. A comparison with corresponding RELAP calculations is given. The paper concludes with an overview of some of the problems posed, and the experience gained, in applying Western best-estimate codes to RBMK calculations.

  19. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting channel capacity and service granularities via both analytical and simulation approaches; however, it had never been practically implemented on a commercial 4G system. This paper demonstrates our prototype, which achieves SCM on a standard 802.16-based testbed for scalable video transmission. In particular, to implement superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which mimics physical-layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.

  20. QCA Gray Code Converter Circuits Using LTEx Methodology

    Science.gov (United States)

    Mukherjee, Chiradeep; Panda, Saradindu; Mukhopadhyay, Asish Kumar; Maji, Bansibadan

    2018-04-01

The Quantum-dot Cellular Automata (QCA) paradigm is a prominent nanotechnology candidate for continuing computation in the deep sub-micron regime. QCA realizations of several multilevel arithmetic-logic-unit circuits have been introduced in recent years. However, although high fan-in Binary-to-Gray (B2G) and Gray-to-Binary (G2B) converters exist in processor-based architectures, no attention has been paid to the QCA instantiation of the Gray code converters, which are anticipated to be used in 8-bit, 16-bit, 32-bit or even wider machines with Gray code addressing schemes. In this work the two-input Layered T module is presented to exploit the operation of an Exclusive-OR gate (the LTEx module) as an elemental block. A defect-tolerance analysis of the two-input LTEx module is carried out to establish the scalability and reproducibility of the LTEx module in complex circuits. Novel formulations exploiting the operability of the LTEx module are proposed to instantiate area- and delay-efficient B2G and G2B converters that can be used in Gray code addressing schemes. Moreover, this work formulates QCA design metrics such as O-Cost, effective area, delay and Cost α for the n-bit converter layouts.
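The logic the B2G and G2B converters implement reduces to XOR chains, which is exactly why an XOR primitive like the LTEx module is the natural building block. A software reference model of the two conversions:

```python
# Binary-to-Gray: each Gray bit is the XOR of adjacent binary bits,
# which the one-shift XOR computes for all bit positions at once.
def binary_to_gray(b: int) -> int:
    return b ^ (b >> 1)

# Gray-to-binary: each binary bit is the XOR of all higher Gray bits,
# accumulated here by repeatedly XOR-ing in right-shifted copies.
def gray_to_binary(g: int) -> int:
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b
```

Note the asymmetry: B2G needs one XOR level per bit in parallel, while G2B has a serial XOR dependency chain, which is why high fan-in G2B hardware realizations are the harder design problem.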

  1. Development Status of Diffusion Code RAST-K 2.0 at UNIST

    Energy Technology Data Exchange (ETDEWEB)

    Park, Minyong; Zheng, Youqi; Choe, Jiwon; Zhang, Peng; Lee, Deokjung [UNIST, Ulsan (Korea, Republic of); Lee, Eunki; Shin, Hocheol [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

This paper reports the status of development of the diffusion code RAST-K 2.0 at UNIST. The code applies a non-linear scheme based on 2-group CMFD and a three-dimensional multi-group unified nodal method (UNM), with the θ method adopted for transient (kinetic) calculations. To consider history effects, the main heavy isotopes are tracked by a micro-depletion module using CRAM; unlike other diffusion codes, the RAST-K 2.0 depletion module uses CRAM together with an extended depletion chain for fission products. Most lattice codes give the cumulative fission yield of Pm-149 without considering the Pm-148 and Pm-149 capture reactions, which increase the Sm-149 number density. A simplified 1-D single-channel thermal-hydraulic solver from nTRACER is implemented. To obtain detailed pin-wise power and burnup distributions, a pin power reconstruction module is implemented, along with automatic control logic to calculate MTC, FTC and control rod worth. To perform multicycle analysis, restart and shuffling/rotation modules have been implemented, and the CATORA (CASMO TO RAST-K 2.0) code was developed to link CASMO-4E and RAST-K 2.0. Other modules and functions, such as branch calculation, are also implemented.
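The micro-depletion step advances nuclide densities N through dN/dt = A N, i.e. N(t) = expm(A t) N(0), where A is the burnup matrix. CRAM evaluates this matrix exponential with a rational Chebyshev approximation; the plain scaling-and-squaring Taylor series below is a simpler stand-in used only to illustrate the same depletion step on a toy two-nuclide chain.

```python
import numpy as np

def expm_taylor(M, terms=20, squarings=10):
    """Matrix exponential via scaled Taylor series (illustration, not CRAM)."""
    M = M / 2.0**squarings          # scale so the series converges fast
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    for _ in range(squarings):      # undo the scaling by repeated squaring
        E = E @ E
    return E

# Toy chain: a parent nuclide decays into a stable daughter (lambda = 0.1).
lam = 0.1
A = np.array([[-lam, 0.0],
              [ lam, 0.0]])
N0 = np.array([1.0, 0.0])
N = expm_taylor(A * 5.0) @ N0       # densities after t = 5
```

In the real code the matrix also carries capture terms (e.g. the Pm-148/Pm-149 captures noted above), which is precisely what changes the predicted Sm-149 density relative to a yield-only chain.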

  2. Mining Functional Modules in Heterogeneous Biological Networks Using Multiplex PageRank Approach.

    Science.gov (United States)

    Li, Jun; Zhao, Patrick X

    2016-01-01

    Identification of functional modules/sub-networks in large-scale biological networks is one of the important research challenges in current bioinformatics and systems biology. Approaches have been developed to identify functional modules in single-class biological networks; however, methods for systematically and interactively mining multiple classes of heterogeneous biological networks are lacking. In this paper, we present a novel algorithm (called mPageRank) that utilizes the Multiplex PageRank approach to mine functional modules from two classes of biological networks. We demonstrate the capabilities of our approach by successfully mining functional biological modules through integrating expression-based gene-gene association networks and protein-protein interaction networks. We first compared the performance of our method with that of other methods using simulated data. We then applied our method to identify the cell division cycle related functional module and plant signaling defense-related functional module in the model plant Arabidopsis thaliana. Our results demonstrated that the mPageRank method is effective for mining sub-networks in both expression-based gene-gene association networks and protein-protein interaction networks, and has the potential to be adapted for the discovery of functional modules/sub-networks in other heterogeneous biological networks. The mPageRank executable program, source code, the datasets and results of the presented two case studies are publicly and freely available at http://plantgrn.noble.org/MPageRank/.
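The multiplex idea can be sketched as follows: PageRank centralities from one layer bias both the teleportation vector and the walk on a second layer. The matrix conventions, damping factor, and biasing rule below are illustrative assumptions in the spirit of multiplex PageRank, not the mPageRank program's actual implementation.

```python
import numpy as np

def pagerank(A, damping=0.85, personalization=None, iters=200):
    """Power-iteration PageRank on adjacency matrix A (A[i, j]: edge j -> i)."""
    n = A.shape[0]
    out = A.sum(axis=0)
    out[out == 0] = 1.0                 # guard against dangling nodes
    P = A / out                         # column-stochastic transition matrix
    v = np.ones(n) / n if personalization is None else personalization / personalization.sum()
    r = np.ones(n) / n
    for _ in range(iters):
        r = damping * (P @ r) + (1 - damping) * v
    return r / r.sum()

def multiplex_pagerank(A1, A2):
    """Centrality on layer 2 biased by layer-1 PageRank (multiplicative flavour)."""
    x = pagerank(A1)
    B = A2 * x[:, None]                 # favour moves onto nodes central in layer 1
    return pagerank(B, personalization=x)
```

Applied to the two case studies above, layer 1 would be the expression-based gene-gene association network and layer 2 the protein-protein interaction network, so that modules scoring highly must be central in both.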

  3. Input/output manual of light water reactor fuel performance code FEMAXI-7 and its related codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa [Japan Atomic Energy Agency, Nuclear Safety Research Center, Tokai, Ibaraki (Japan); Saitou, Hiroaki [ITOCHU Techno-Solutions Corp., Tokyo (Japan)

    2012-07-15

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which has been fully disclosed in the code model description published recently as JAEA-Data/Code 2010-035. The present manual, which is the counterpart of this description, gives detailed explanations of operation method of FEMAXI-7 code and its related codes, methods of Input/Output, methods of source code modification, features of subroutine modules, and internal variables in a specific manner in order to facilitate users to perform a fuel analysis with FEMAXI-7. This report includes some descriptions which are modified from the original contents of JAEA-Data/Code 2010-035. A CD-ROM is attached as an appendix. (author)

  4. Input/output manual of light water reactor fuel performance code FEMAXI-7 and its related codes

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa; Saitou, Hiroaki

    2012-07-01

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which has been fully disclosed in the code model description published recently as JAEA-Data/Code 2010-035. The present manual, which is the counterpart of this description, gives detailed explanations of operation method of FEMAXI-7 code and its related codes, methods of Input/Output, methods of source code modification, features of subroutine modules, and internal variables in a specific manner in order to facilitate users to perform a fuel analysis with FEMAXI-7. This report includes some descriptions which are modified from the original contents of JAEA-Data/Code 2010-035. A CD-ROM is attached as an appendix. (author)

  5. Development of the code package KASKAD for calculations of WWERs

    International Nuclear Information System (INIS)

    Bolobov, P.A.; Lazarenko, A.P.; Tomilov, M.Ju.

    2008-01-01

The new version of the software package for neutron calculations of WWER cores, KASKAD 2007, consists of several calculation and service modules integrated in a common framework. The package is based on the old version, extended with new functions and new calculation modules, namely: the BIPR-2007 code, a new code performing 2-group neutron-diffusion calculations of the power distribution in three-dimensional geometry, which is based on the BIPR-8KN model, provides all the capabilities of the BIPR-7A code and uses the same input data; the PERMAK-2007 code, a pin-by-pin, few-group, multilayer 3-D neutron-diffusion code; and a graphical user interface for input data preparation for the TVS-M code. The report also includes some calculation results obtained with the modified version of the KASKAD 2007 package. (Authors)

  6. Anti-voice adaptation suggests prototype-based coding of voice identity

    Directory of Open Access Journals (Sweden)

    Marianne eLatinus

    2011-07-01

We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices and were then tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice adaptors, but not for non-anti-voice adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype.

  7. Development of 3-dimensional neutronics kinetics analysis code for CANDU-PHWR

    International Nuclear Information System (INIS)

    Kim, M. W.; Kim, C. H.; Hong, I. S.

    2005-02-01

The major contents and scope of the research are as follows: development of a kinetics power calculation module; formulation of space-dependent neutron transient analysis with implementation of a 3-D, 2-group unified nodal method; verification of the kinetics module against the 3-D PHWR kinetics benchmark problem suggested by AECL; and simulation of a reactor trip by shutdown system 1 in Wolsong unit 2. A dynamic link library code, SCAN DLL, was developed for coupled calculations with RELAP-CANDU, including modeling of shutdown system 1 and an automatic shutdown module (an automatic trip module based on the rate-log power control logic, with automatic insertion of shutdown system 1). A link code for coupled calculations, SCAN DLL (Windows version), was developed, and the coupled code was verified against the power pulses of a 40% reactor inlet header break LOCA, a 100% reactor outlet header break LOCA, and a 50% pump suction break LOCA

  8. Parallelization characteristics of the DeCART code

    International Nuclear Information System (INIS)

    Cho, J. Y.; Joo, H. G.; Kim, H. Y.; Lee, C. C.; Chang, M. H.; Zee, S. Q.

    2003-12-01

    This report describes the parallelization characteristics of the DeCART code and examines its parallel performance. Parallel computing algorithms are implemented in DeCART to reduce the tremendous computational burden and memory requirement involved in the three-dimensional whole-core transport calculation. In the parallelization of the DeCART code, the axial domain decomposition is first realized by using MPI (Message Passing Interface), and then the azimuthal angle domain decomposition by using either MPI or OpenMP. When using MPI for both the axial and the angle domain decomposition, the concept of MPI grouping is employed for convenient communication within each communication world. For the parallel computation, most of the computing modules, except for the thermal-hydraulic module, are parallelized. These parallelized computing modules include the MOC ray tracing, CMFD, NEM, region-wise cross section preparation and cell homogenization modules. For the distributed allocation, most of the MOC and CMFD/NEM variables are allocated only for the assigned planes, which reduces the required memory by the ratio of the number of assigned planes to the number of all planes. The parallel performance of the DeCART code is evaluated by solving two problems, a rodded variation of the C5G7 MOX three-dimensional benchmark problem and a simplified three-dimensional SMART PWR core problem. In terms of parallel performance, the DeCART code shows a good speedup of about 40.1 and 22.4 in the ray tracing module and about 37.3 and 20.2 in the total computing time when using 48 CPUs on the IBM Regatta and 24 CPUs on the LINUX cluster, respectively. In the comparison between MPI and OpenMP, OpenMP shows a somewhat better performance than MPI. Therefore, it is concluded that the first priority in the parallel computation of the DeCART code is the axial domain decomposition by using MPI, and then the angular domain using OpenMP, and finally the angular
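
    The memory saving from the distributed plane allocation described above can be illustrated with a small sketch (illustrative only, not DeCART source; the even, contiguous partition below is an assumption for illustration):

```python
# Illustrative sketch of an axial domain decomposition: plane indices are
# split into contiguous blocks, one block per parallel worker (MPI rank).
def assign_planes(n_planes, n_workers):
    """Distribute planes 0..n_planes-1 over workers as evenly as possible."""
    base, extra = divmod(n_planes, n_workers)
    assignment, start = [], 0
    for rank in range(n_workers):
        count = base + (1 if rank < extra else 0)
        assignment.append(list(range(start, start + count)))
        start += count
    return assignment

# With distributed allocation, a worker stores variables only for its own
# planes, so its memory is roughly len(block)/n_planes of the serial case.
blocks = assign_planes(24, 4)   # four workers, six planes each
```

    In this toy partition, each of the four workers holds 6 of 24 planes, i.e. a quarter of the serial memory footprint, matching the ratio stated in the abstract.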

  9. Adaptation in Coding by Large Populations of Neurons in the Retina

    Science.gov (United States)

    Ioffe, Mark L.

    A comprehensive theory of neural computation requires an understanding of the statistical properties of the neural population code. The focus of this work is the experimental study and theoretical analysis of the statistical properties of neural activity in the tiger salamander retina. This is an accessible yet complex system, for which we control the visual input and record from a substantial portion--greater than a half--of the ganglion cell population generating the spiking output. Our experiments probe adaptation of the retina to visual statistics: a central feature of sensory systems which have to adjust their limited dynamic range to a far larger space of possible inputs. In Chapter 1 we place our work in context with a brief overview of the relevant background. In Chapter 2 we describe the experimental methodology of recording from 100+ ganglion cells in the tiger salamander retina. In Chapter 3 we first present the measurements of adaptation of individual cells to changes in stimulation statistics and then investigate whether pairwise correlations in fluctuations of ganglion cell activity change across different stimulation conditions. We then transition to a study of the population-level probability distribution of the retinal response captured with maximum-entropy models. Convergence of the model inference is presented in Chapter 4. In Chapter 5 we first test the empirical presence of a phase transition in such models fitting the retinal response to different experimental conditions, and then proceed to develop other characterizations which are sensitive to complexity in the interaction matrix. This includes an analysis of the dynamics of sampling at finite temperature, which demonstrates a range of subtle attractor-like properties in the energy landscape. These are largely conserved when ambient illumination is varied 1000-fold, a result not necessarily apparent from the measured low-order statistics of the distribution. Our results form a consistent
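
    The pairwise maximum-entropy population models mentioned above can be made concrete with a toy sketch (illustrative only; the fields h and couplings J below are arbitrary examples, not values inferred from retinal data):

```python
# Minimal pairwise maximum-entropy (Ising-style) model over binary
# neural responses: P(s) is proportional to exp(-E(s)).
import itertools, math

def energy(state, h, J):
    """E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j for a 0/1 state."""
    n = len(state)
    e = -sum(h[i] * state[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= J[i][j] * state[i] * state[j]
    return e

def distribution(h, J):
    """Exact Boltzmann distribution over all 2^n binary states
    (tractable only for tiny n; real fits use sampling)."""
    n = len(h)
    states = list(itertools.product([0, 1], repeat=n))
    weights = [math.exp(-energy(s, h, J)) for s in states]
    z = sum(weights)
    return {s: w / z for s, w in zip(states, weights)}
```

    With zero fields and couplings the distribution is uniform; a positive coupling between two cells makes their joint firing more probable, which is the kind of structure such models capture in the measured low-order statistics.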

  10. Atlas C++ Coding Standard Specification

    CERN Document Server

    Albrand, S; Barberis, D; Bosman, M; Jones, B; Stavrianakou, M; Arnault, C; Candlin, D; Candlin, R; Franck, E; Hansl-Kozanecka, Traudl; Malon, D; Qian, S; Quarrie, D; Schaffer, R D

    2001-01-01

    This document defines the ATLAS C++ coding standard, that should be adhered to when writing C++ code. It has been adapted from the original "PST Coding Standard" document (http://pst.cern.ch/HandBookWorkBook/Handbook/Programming/programming.html) CERN-UCO/1999/207. The "ATLAS standard" comprises modifications, further justification and examples for some of the rules in the original PST document. All changes were discussed in the ATLAS Offline Software Quality Control Group and feedback from the collaboration was taken into account in the "current" version.

  11. Induction of Osmoadaptive Mechanisms and Modulation of Cellular Physiology Help Bacillus licheniformis Strain SSA 61 Adapt to Salt Stress

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Sangeeta; Aggarwal, Chetana; Thakur, Jyoti Kumar; Bandeppa, G. S.; Khan, Md. Aslam; Pearson, Lauren M.; Babnigg, Gyorgy; Giometti, Carol S.; Joachimiak, Andrzej

    2015-01-06

    Bacillus licheniformis strain SSA 61, originally isolated from Sambhar salt lake, was observed to grow even in the presence of 25% salt stress. Osmoadaptive mechanisms of this halotolerant B. licheniformis strain SSA 61, for long-term survival and growth under salt stress, were determined. Proline was the preferentially accumulated compatible osmolyte. There was also increased accumulation of the antioxidants ascorbic acid and glutathione. Among the different antioxidative enzymes assayed, superoxide dismutase played the most crucial role in defense against salt-induced stress in the organism. Adaptation to stress by the organism involved modulation of cellular physiology at various levels. There was enhanced expression of known proteins playing essential roles in stress adaptation, such as the chaperones DnaK and GroEL, the general stress protein YfkM and polynucleotide phosphorylase/polyadenylase. Proteins involved in the amino acid biosynthetic pathway, ribosome structure, and peptide elongation were also overexpressed. Salt stress-induced modulation of the expression of enzymes involved in carbon metabolism was observed. There was up-regulation of a number of enzymes involved in the generation of NADH and NADPH, indicating increased cellular demand for both energy and reducing power.

  12. Italian electricity supply contracts optimization: ECO computer code

    International Nuclear Information System (INIS)

    Napoli, G.; Savelli, D.

    1993-01-01

    The ECO (Electrical Contract Optimization) code, written for Microsoft WINDOWS 3.1, can be run on a 286 PC with a minimum of RAM. It consists of four modules: one for the calculation of ENEL (Italian National Electricity Board) tariffs, one for contractual time-of-use tariff optimization, a table of tariff coefficients, and a module for monthly power consumption calculations based on annual load diagrams. The optimization code was developed by ENEA (Italian Agency for New Technology, Energy and the Environment) to help Italian industrial firms comply with new and complex national electricity supply contractual regulations and tariffs. In addition to helping industrial firms determine optimum contractual arrangements, the code also assists them in optimizing their choice of equipment and production cycles.

  13. Modular Modeling System (MMS) code: a versatile power plant analysis package

    International Nuclear Information System (INIS)

    Divakaruni, S.M.; Wong, F.K.L.

    1987-01-01

    The basic version of the Modular Modeling System (MMS-01), a power plant systems analysis computer code jointly developed by the Nuclear Power and Coal Combustion Systems Divisions of the Electric Power Research Institute (EPRI), was released to the utility power industry in April 1983 at a code release workshop held in Charlotte, North Carolina. Since then, additional modules have been developed to analyze Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs) when the safety systems are activated. Also, a selected number of modules in the MMS-01 library have been modified to allow code users more flexibility in constructing plant-specific systems for analysis. These new PWR and BWR modules, together with the modifications to the MMS-01 library, constitute the new MMS library. A year-and-a-half-long extensive code qualification program of this new version of the MMS code at EPRI and the contractor sites, backed by further code testing in a user group environment, is culminating in the MMS-02 code release announcement seminar. At this seminar, the results of the user group efforts and the code qualification program will be presented in a series of technical sessions. A total of forty-nine papers will be presented to describe the new code features and the code qualification efforts. For the sake of completeness, an overview of the code is presented, including the history of the code development, a description of the MMS code and its structure, utility engineers' involvement in the MMS-01 and MMS-02 validations, the enhancements made in the last 18 months, and finally a perspective on the code's future in the fossil and nuclear industry

  14. Least reliable bits coding (LRBC) for high data rate satellite communications

    Science.gov (United States)

    Vanderaar, Mark; Budinger, James; Wagner, Paul

    1992-01-01

    LRBC, a bandwidth-efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
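
    The rate arithmetic behind the least-reliable-bits idea can be sketched briefly (a hedged illustration of multilevel coding in general, not the paper's specific component codes; the rate-1/2 code on the weakest level is an assumed example):

```python
# Multilevel coding sketch: each bit level of the modulation gets its own
# component code, and the least reliable level gets the strongest protection.
def overall_rate(level_rates):
    """Aggregate code rate: average of the per-level component-code rates
    (each level contributes one bit per modulated symbol)."""
    return sum(level_rates) / len(level_rates)

def bits_per_symbol(level_rates):
    """Information bits carried by each modulated symbol."""
    return sum(level_rates)

# Example: three bit levels (8-ary modulation); only the least reliable
# level is coded, with an assumed rate-1/2 block code.
rates = [0.5, 1.0, 1.0]
```

    Here the scheme keeps an overall rate of 2.5/3 while spending all of its redundancy on the weakest bit level, which is the trade-off the abstract describes.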

  15. Code Reuse and Modularity in Python

    Directory of Open Access Journals (Sweden)

    William J. Turkel

    2012-07-01

    Full Text Available Computer programs can become long, unwieldy and confusing without special mechanisms for managing complexity. This lesson will show you how to reuse parts of your code by writing Functions and break your programs into Modules, in order to keep everything concise and easier to debug. Being able to remove a single dysfunctional module can save time and effort.
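
    In the spirit of the lesson, a minimal example of the pattern it teaches (the function name and module layout below are illustrative choices, not part of the lesson):

```python
# A small reusable function; in a larger program it would live in its own
# module, say greetings.py, and be imported with `import greetings`.
def greet(name):
    """Return a greeting string instead of repeating the formatting code."""
    return "Hello, " + name + "!"

print(greet("Ada"))   # prints: Hello, Ada!
```

    If the function misbehaves, only its module needs fixing or removing, which is the debugging benefit the lesson points to.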

  16. INVESTIGATION ON THERMAL-FLOW CHARACTERISTICS OF HTGR CORE USING THERMIX-KONVEK MODULE AND VSOP'94 CODE

    Directory of Open Access Journals (Sweden)

    Sudarmono Sudarmono

    2015-03-01

    Full Text Available The failure of the heat removal systems of water-cooled reactors, such as the PWR at Three Mile Island and the BWRs at Fukushima Daiichi, has led the nuclear community to start considering the use of the high temperature gas-cooled reactor (HTGR). The Reactor Physics and Technology Division – Center for Nuclear Reactor Safety and Technology (PTRKN) is tasked with research and development on the conceptual design of a cogeneration gas cooled reactor with a medium power level of 200 MWt. The HTGR is a nuclear energy generation system with high energy efficiency and a high and clean level of inherent safety. The geometry and structure of the HTGR200 core are designed to produce a helium gas coolant outlet temperature as high as 950 °C, to be used for hydrogen production and other industrial processes in a cogenerative way. The very high temperature helium gas output will cause thermal stress on the fuel pebbles that threatens the integrity of fission product confinement. Therefore, it is necessary to perform a thermal-flow evaluation to determine the temperature distribution in the graphite and fuel pebbles in the HTGR core. The evaluation was carried out with the Thermix-Konvek module code that has already been integrated into the VSOP'94 code. The HTGR core geometry was modeled using the BIRGIT module code for a 2-D (R-Z) model with 5 channels of pebble flow in the active core in the radial direction. The evaluation results showed that the highest and lowest temperatures in the reactor core are 999.3 °C and 886.5 °C, while the highest temperature of the TRISO UO2 is 1510.20 °C at the position (z = 335.51 cm; r = 0 cm). The analysis was done for reactor conditions of 120 kg/s coolant mass flow rate, 7 MPa pressure and 200 MWth power. Comparing the temperature distribution calculated by the VSOP'94 code with the fuel temperature limit of 1600 °C, there is enough safety margin against melting or disintegration. Keywords: Thermal-Flow, VSOP'94, Thermix-Konvek, HTGR, temperature

  17. LOLA SYSTEM: A code block for nodal PWR simulation. Part. II - MELON-3, CONCON and CONAXI Codes

    International Nuclear Information System (INIS)

    Aragones, J. M.; Ahnert, C.; Gomez Santamaria, J.; Rodriguez Olabarria, I.

    1985-01-01

    Description of the theory and users manual of the MELON-3, CONCON and CONAXI codes, which are part of the one-group nodal-theory core calculation system called LOLA SYSTEM. These auxiliary codes provide some of the input data for the main module SIMULA-3: the reactivity correlation constants, the albedos and the transport factors. (Author) 7 refs

  18. LOLA SYSTEM: A code block for nodal PWR simulation. Part. II - MELON-3, CONCON and CONAXI Codes

    Energy Technology Data Exchange (ETDEWEB)

    Aragones, J M; Ahnert, C; Gomez Santamaria, J; Rodriguez Olabarria, I

    1985-07-01

    Description of the theory and users manual of the MELON-3, CONCON and CONAXI codes, which are part of the one-group nodal-theory core calculation system called LOLA SYSTEM. These auxiliary codes provide some of the input data for the main module SIMULA-3: the reactivity correlation constants, the albedos and the transport factors. (Author) 7 refs.

  19. Nevada Administrative Code for Special Education Programs.

    Science.gov (United States)

    Nevada State Dept. of Education, Carson City. Special Education Branch.

    This document presents excerpts from Chapter 388 of the Nevada Administrative Code, which concerns definitions, eligibility, and programs for students who are disabled or gifted/talented. The first section gathers together 36 relevant definitions from the Code for such concepts as "adaptive behavior," "autism," "gifted and…

  20. Variable code gamma ray imaging system

    International Nuclear Information System (INIS)

    Macovski, A.; Rosenfeld, D.

    1979-01-01

    A gamma-ray source distribution in the body is imaged onto a detector using an array of apertures. The transmission of each aperture is modulated using a code such that the individual views of the source through each aperture can be decoded and separated. The codes are chosen to maximize the signal to noise ratio for each source distribution. These codes determine the photon collection efficiency of the aperture array. Planar arrays are used for volumetric reconstructions and circular arrays for cross-sectional reconstructions. 14 claims
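
    The separation principle described above can be sketched with a toy time-modulated example (assumed simple orthogonal ±1 codes over four frames for two apertures; the patent's SNR-optimized codes are not reproduced here):

```python
# Each aperture's view of the source is modulated in time by its own code;
# the detector records the coded sum, and correlating that sequence with
# one aperture's code recovers that aperture's view.
codes = {"A": [1, 1, 1, 1], "B": [1, -1, 1, -1]}   # orthogonal over 4 frames
views = {"A": 3.0, "B": 5.0}                        # per-aperture intensity

# Detector measurement in each time frame: coded sum of the two views.
frames = [sum(views[k] * codes[k][t] for k in views) for t in range(4)]

def decode(code, frames):
    """Correlate the frame sequence with one aperture's code."""
    return sum(c * f for c, f in zip(code, frames)) / len(frames)
```

    Because the codes are orthogonal, `decode(codes["A"], frames)` returns 3.0 and `decode(codes["B"], frames)` returns 5.0: the two overlapping views separate cleanly.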

  1. Improvement of multi-dimensional realistic thermal-hydraulic system analysis code, MARS 1.3

    International Nuclear Information System (INIS)

    Lee, Won Jae; Chung, Bub Dong; Jeong, Jae Jun; Ha, Kwi Seok

    1998-09-01

    The MARS (Multi-dimensional Analysis of Reactor Safety) code is a multi-dimensional, best-estimate thermal-hydraulic system analysis code. This report describes the new features that have been improved in the MARS 1.3 code since the release of MARS 1.3 in July 1998. The new features include: - implementation of point kinetics model into the 3D module - unification of the heat structure model - extension of the control function to the 3D module variables - improvement of the 3D module input check function. Each of the items has been implemented in the developmental version of the MARS 1.3.1 code and, then, independently verified and assessed. The effectiveness of the new features is well verified and it is shown that these improvements greatly extend the code capability and enhance the user friendliness. Relevant input data changes are also described. In addition to the improvements, this report briefly summarizes the future code developmental activities that are being carried out or planned, such as coupling of MARS 1.3 with the containment code CONTEMPT and the three-dimensional reactor kinetics code MASTER 2.0. (author). 8 refs

  2. Improvement of multi-dimensional realistic thermal-hydraulic system analysis code, MARS 1.3

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Won Jae; Chung, Bub Dong; Jeong, Jae Jun; Ha, Kwi Seok

    1998-09-01

    The MARS (Multi-dimensional Analysis of Reactor Safety) code is a multi-dimensional, best-estimate thermal-hydraulic system analysis code. This report describes the new features that have been improved in the MARS 1.3 code since the release of MARS 1.3 in July 1998. The new features include: - implementation of point kinetics model into the 3D module - unification of the heat structure model - extension of the control function to the 3D module variables - improvement of the 3D module input check function. Each of the items has been implemented in the developmental version of the MARS 1.3.1 code and, then, independently verified and assessed. The effectiveness of the new features is well verified and it is shown that these improvements greatly extend the code capability and enhance the user friendliness. Relevant input data changes are also described. In addition to the improvements, this report briefly summarizes the future code developmental activities that are being carried out or planned, such as coupling of MARS 1.3 with the containment code CONTEMPT and the three-dimensional reactor kinetics code MASTER 2.0. (author). 8 refs.

  3. MIDAS/PK code development using point kinetics model

    International Nuclear Information System (INIS)

    Song, Y. M.; Park, S. H.

    1999-01-01

    In this study, the MIDAS/PK code has been developed for analyzing ATWS (Anticipated Transients Without Scram) events, which can be among the severe accident initiating events. MIDAS is an integrated computer code based on the MELCOR code, developed by the Korea Atomic Energy Research Institute to support severe accident risk reduction strategies. Meanwhile, the Chexal-Layman correlation in the current MELCOR, which was developed under BWR conditions, appears to be inappropriate for a PWR. To provide ATWS analysis capability to the MIDAS code, a point kinetics module, PKINETIC, was first developed as a stand-alone code whose reference model was selected from the current accident analysis codes. In the next step, the MIDAS/PK code was developed by coupling PKINETIC with the MIDAS code, inter-connecting several thermal-hydraulic parameters between the two codes. Since the major concern in the ATWS analysis is the primary peak pressure during the early few minutes into the accident, the peak pressures from the PKINETIC module and MIDAS/PK are compared with the RETRAN calculations, showing good agreement between them. The MIDAS/PK code is considered valuable for analyzing the plant response during ATWS deterministically, especially for the early domestic Westinghouse plants which rely on the operator procedure instead of an AMSAC (ATWS Mitigating System Actuation Circuitry) against ATWS. This capability of ATWS analysis is also important from the viewpoint of accident management and mitigation.
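
    A point kinetics module of the kind described can be illustrated with a one-delayed-group sketch (illustrative only: PKINETIC's actual formulation, data, and numerics are not given in the abstract, and the kinetics constants below are generic assumed values):

```python
# One-delayed-group point kinetics, integrated with explicit Euler:
#   dn/dt = ((rho - beta)/Lambda) * n + lam * C
#   dC/dt = (beta/Lambda) * n - lam * C
def point_kinetics(rho, n0=1.0, beta=0.0065, lam=0.08, Lambda=1e-4,
                   dt=1e-5, t_end=0.1):
    """Return relative power n(t_end) for a step reactivity rho,
    starting from equilibrium at power n0."""
    n = n0
    C = beta * n0 / (lam * Lambda)     # equilibrium precursor level
    for _ in range(int(round(t_end / dt))):
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dt * dn
        C += dt * dC
    return n
```

    Zero reactivity leaves the power at its equilibrium value, while a positive step below prompt critical produces the characteristic prompt jump followed by a slow rise, the behavior that drives the primary peak pressure in an ATWS.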

  4. ELCOS: the PSI code system for LWR core analysis. Part II: user's manual for the fuel assembly code BOXER

    Energy Technology Data Exchange (ETDEWEB)

    Paratte, J.M.; Grimm, P.; Hollard, J.M. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)]

    1996-02-01

    ELCOS is a flexible code system for the stationary simulation of light water reactor cores. It consists of the four computer codes ETOBOX, BOXER, CORCOD and SILWER. The user's manual of the second one is presented here. BOXER calculates the neutronics in cartesian geometry. The code can roughly be divided into four stages: - organisation: choice of the modules, file manipulations, reading and checking of input data, - fine group fluxes and condensation: one-dimensional calculation of fluxes and computation of the group constants of homogeneous materials and cells, - two-dimensional calculations: geometrically detailed simulation of the configuration in few energy groups, - burnup: evolution of the nuclide densities as a function of time. This manual shows all input commands which can be used while running the different modules of BOXER. (author) figs., tabs., refs.

  5. Water System Adaptation to Hydrological Changes: Module 1, Introduction to Water System Adaptation

    Science.gov (United States)

    Contemporary water management requires resilience, the ability to meet ever increasing water needs, and capacity to adapt to abrupt or transient changes in water quality and availability. For this purpose, effective adaptation to extreme hydrological events (e.g. intense storms, ...

  6. Development of the CAT code - YGN 5 and 6 CVCS analysis tool

    International Nuclear Information System (INIS)

    Kim, S.W.; Sohn, S.H.; Seo, J.T.; Lee, S.K.

    1996-01-01

    The CAT code has been developed for the analysis of the Chemical and Volume Control System (CVCS) of the Yonggwang Nuclear Power Plant Units 5 and 6 (YGN 5 and 6). The code is able to simulate the system behavior under the operating conditions which should be considered in the design of the system. It has been developed as a stand-alone code which can simulate the CVCS in detail whenever correct system boundary conditions are provided. The code consists of two modules, i.e. control and process modules. The control module includes the models for the Pressurizer Level Control System, the Letdown Backpressure Control System, the Charging Backpressure Control System, and the Seal Injection Control System. Thermal-hydraulic responses of the system are simulated by the process module. The modeling of the system is based on a node and flowpath network. The thermal-hydraulic model is based on the assumption of a homogeneous equilibrium mixture. The major system components such as valves, orifices, pumps, heat exchangers and the volume control tank are explicitly modeled in the code. The code was validated against the measured data from the letdown system test performed during the Hot Functional Testing at YGN 3. The comparison between the measured and predicted data demonstrated that the present model can predict the observed phenomena with sufficient accuracy. (author)

  7. SRAC95; general purpose neutronics code system

    International Nuclear Information System (INIS)

    Okumura, Keisuke; Tsuchihashi, Keichiro; Kaneko, Kunio.

    1996-03-01

    SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made to the nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2) and five modular codes integrated into SRAC95: a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN) and diffusion calculation modules (TUD, CITATION), plus two optional codes for fuel assembly and core burn-up calculations (the newly developed ASMBURN and the revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 users manual, which contains a general description, contents of revisions, input data requirements, detailed information on usage, sample input data and a list of available libraries. (author)

  8. SRAC95; general purpose neutronics code system

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke; Tsuchihashi, Keichiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Kunio

    1996-03-01

    SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made to the nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2) and five modular codes integrated into SRAC95: a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN) and diffusion calculation modules (TUD, CITATION), plus two optional codes for fuel assembly and core burn-up calculations (the newly developed ASMBURN and the revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 users manual, which contains a general description, contents of revisions, input data requirements, detailed information on usage, sample input data and a list of available libraries. (author).

  9. Implementation of a dry process fuel cycle model into the DYMOND code

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Jeong, Chang Joon; Choi, Hang Bok

    2004-01-01

    For the analysis of a dry process fuel cycle, new modules were implemented into the fuel cycle analysis code DYMOND, which was developed by the Argonne National Laboratory. The modifications were made to the energy demand prediction model, a Canada Deuterium Uranium (CANDU) reactor model, a fuel cycle model for direct use of spent Pressurized Water Reactor (PWR) fuel in CANDU reactors (DUPIC), the fuel cycle calculation module, and the input/output modules. The performance of the modified DYMOND code was assessed for postulated once-through fuel cycle models including both the PWR and the CANDU reactor. This paper presents the modifications of the DYMOND code and the results of sample calculations for the PWR once-through and DUPIC fuel cycles.

  10. Pseudo color ghost coding imaging with pseudo thermal light

    Science.gov (United States)

    Duan, De-yang; Xia, Yun-jie

    2018-04-01

    We present a new pseudo color imaging scheme, named pseudo color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. In contrast to conventional pseudo color imaging, where the absence of nondegenerate-wavelength spatial correlations results in extra monochromatic images, the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and the signal beam can be obtained simultaneously. This scheme can obtain a more colorful image with higher quality than conventional pseudo color coding techniques. More importantly, a significant advantage of the scheme over conventional pseudo color coding imaging techniques is that the image in different colors can be obtained without changing the light source and spatial filter.
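
    The correlation recovery at the heart of ghost imaging can be sketched computationally (an assumed toy setup with a 1-D object and random modulation patterns for a single wavelength channel, not the paper's multiwavelength apparatus):

```python
# Computational ghost imaging sketch: the object is recovered from the
# covariance between known modulation patterns and a single "bucket"
# (total-intensity) signal, pixel by pixel.
import random

random.seed(0)
obj = [0.0, 1.0, 1.0, 0.0, 0.5, 0.0]          # toy 1-D "object"
n_shots = 20000

patterns = [[random.random() for _ in obj] for _ in range(n_shots)]
bucket = [sum(p[i] * obj[i] for i in range(len(obj))) for p in patterns]

mean_b = sum(bucket) / n_shots
recon = []
for i in range(len(obj)):
    mean_p = sum(p[i] for p in patterns) / n_shots
    cov = (sum(p[i] * b for p, b in zip(patterns, bucket)) / n_shots
           - mean_p * mean_b)
    recon.append(cov)
# recon is proportional to obj: largest where the object transmits most.
```

    Averaged over many shots, the covariance at each pixel converges to the pattern variance times the object's transmission there, so the reconstruction is brightest at the transmitting pixels.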

  11. Gentiana asclepiadea and Armoracia rusticana can modulate the adaptive response induced by zeocin in human lymphocytes.

    Science.gov (United States)

    Hudecova, A; Hasplova, K; Kellovska, L; Ikreniova, M; Miadokova, E; Galova, E; Horvathova, E; Vaculcikova, D; Gregan, F; Dusinska, M

    2012-01-01

    Zeocin is a member of the bleomycin/phleomycin family of antibiotics isolated from Streptomyces verticillus. This unique radiomimetic antibiotic is known to bind to DNA and induce oxidative stress in different organisms, producing predominantly single- and double-strand breaks, as well as DNA base loss resulting in apurinic/apyrimidinic (AP) sites. The aim of this study was to induce an adaptive response (AR) with zeocin in human lymphocytes freshly isolated from blood and to observe whether plant extracts could modulate this response. The AR was evaluated by the comet assay. The optimal conditions for the AR induction and modulation were determined as a 2 h inter-treatment time (in PBS, at 4 °C) given after a priming dose (50 µg/ml) of zeocin treatment. The genotoxic impact of zeocin on lymphocytes was modulated by plant extracts isolated from Gentiana asclepiadea (methanolic and aqueous haulm extracts, 0.25 mg/ml) and Armoracia rusticana (methanolic root extract, 0.025 mg/ml). These extracts enhanced the AR and also decreased the DNA damage caused by zeocin (after 0, 1 and 4 h recovery times following the test dose of zeocin) by more than 50%. These results support the important position of plants containing many biologically active compounds in the fields of pharmacology and medicine.

  12. Cross-layer designed adaptive modulation algorithm with packet combining and truncated ARQ over MIMO Nakagami fading channels

    KAUST Repository

    Aniba, Ghassane

    2011-04-01

    This paper presents an optimal adaptive modulation (AM) algorithm designed using a cross-layer approach which combines the truncated automatic repeat request (ARQ) protocol with packet combining. Transmissions are performed over multiple-input multiple-output (MIMO) Nakagami fading channels, and retransmitted packets are not necessarily modulated using the same modulation format as in the initial transmission. Compared to the traditional approach, cross-layer design based on coupling across the physical and link layers has proven to yield better performance in wireless communications. However, there is a lack of performance analysis and evaluation of such a design when the ARQ protocol is used in conjunction with packet combining. Indeed, previous works addressed the link layer performance of AM with truncated ARQ but without packet combining. In addition, previously proposed AM algorithms are not optimal and can provide poor performance when packet combining is implemented. Herein, we first show that the packet loss rate (PLR) resulting from the combining of packets modulated with different constellations can be well approximated by an exponential function. This model is then used in the design of an optimal AM algorithm for systems employing packet combining, truncated ARQ and MIMO antenna configurations, considering transmission over Nakagami fading channels. Numerical results are provided for operation with or without packet combining, and show the enhanced performance and efficiency of the proposed algorithm in comparison with existing ones. © 2011 IEEE.
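
    A mode-selection rule built on such an exponential PLR model can be sketched as follows (the per-mode constants a and g below are made-up illustrative values, not the paper's fitted parameters, and the rule shown is a generic throughput-maximizing heuristic rather than the paper's optimal algorithm):

```python
# Adaptive modulation sketch: model each mode's packet loss rate as
# PLR(snr) ~ a * exp(-g * snr), then pick the highest-throughput mode
# whose modeled PLR meets the target.
import math

# Candidate modes: bits per symbol and assumed (a, g) PLR-model constants.
MODES = {"BPSK": (1, (0.2, 1.2)),
         "QPSK": (2, (0.3, 0.6)),
         "16QAM": (4, (0.5, 0.25))}

def best_mode(snr, max_plr=0.01):
    """Return the mode maximizing bits * (1 - PLR) subject to PLR <= max_plr,
    or None if no mode meets the target at this SNR."""
    best, best_tp = None, -1.0
    for name, (bits, (a, g)) in MODES.items():
        plr = min(1.0, a * math.exp(-g * snr))
        tp = bits * (1.0 - plr)
        if plr <= max_plr and tp > best_tp:
            best, best_tp = name, tp
    return best
```

    At high SNR the rule climbs to the densest constellation, at low SNR it falls back to the most robust one, and below some SNR no mode satisfies the PLR target; packet combining would effectively raise each retransmission's SNR and shift these thresholds down.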

  13. Tandem Mirror Reactor Systems Code (Version I)

    International Nuclear Information System (INIS)

    Reid, R.L.; Finn, P.A.; Gohar, M.Y.

    1985-09-01

    A computer code was developed to model a Tandem Mirror Reactor. This is the first Tandem Mirror Reactor model to couple, in detail, the highly linked physics, magnetics, and neutronic analysis into a single code. This report describes the code architecture, provides a summary description of the modules comprising the code, and includes an example execution of the Tandem Mirror Reactor Systems Code. Results from this code for two sensitivity studies are also included. These studies are: (1) to determine the impact of center cell plasma radius, length, and ion temperature on reactor cost and performance at constant fusion power; and (2) to determine the impact of reactor power level on cost.

  14. The Nudo, Rollo, Melon codes and nodal correlations

    International Nuclear Information System (INIS)

    Perlado, J.M.; Aragones, J.M.; Minguez, E.; Pena, J.

    1975-01-01

    Analysis of nodal calculation and checking results by the reference reactor experimental data. Nudo code description, adapting experimental data to nodal calculations. Rollo, Melon codes as improvement in the cycle life calculations of albedos, mixing parameters and nodal correlations. (author)

  15. The utility of adaptive eLearning in cervical cytopathology education.

    Science.gov (United States)

    Samulski, T Danielle; Taylor, Laura A; La, Teresa; Mehr, Chelsea R; McGrath, Cindy M; Wu, Roseann I

    2018-02-01

    Adaptive eLearning allows students to experience a self-paced, individualized curriculum based on prior knowledge and learning ability. The authors investigated the effectiveness of adaptive online modules in teaching cervical cytopathology. eLearning modules were created that covered basic concepts in cervical cytopathology, including artifacts and infections, squamous lesions (SL), and glandular lesions (GL). The modules used student responses to individualize the educational curriculum and provide real-time feedback. Pathology trainees and faculty from the authors' institution were randomized into 2 groups (SL or GL), and identical pre-tests and post-tests were used to compare the efficacy of eLearning modules versus traditional study methods (textbooks and slide sets). User experience was assessed with a Likert scale and free-text responses. Sixteen of 17 participants completed the SL module, and 19 of 19 completed the GL module. Participants in both groups had improved post-test scores for content in the adaptive eLearning module. Users indicated that the module was effective in presenting content and concepts (Likert scale [from 1 to 5], 4.3 of 5.0), was an efficient and convenient way to review the material (Likert scale, 4.4 of 5.0), and was more engaging than lectures and texts (Likert scale, 4.6 of 5.0). Users favored the immediate feedback and interactivity of the module. Limitations included the inability to review prior content and slow upload time for images. Learners demonstrated improvement in their knowledge after the use of adaptive eLearning modules compared with traditional methods. Overall, the modules were viewed positively by participants. Adaptive eLearning modules can provide an engaging and effective adjunct to traditional teaching methods in cervical cytopathology. Cancer Cytopathol 2018;126:129-35. © 2017 American Cancer Society.

  16. Neural coding in graphs of bidirectional associative memories.

    Science.gov (United States)

    Bouchain, A David; Palm, Günther

    2012-01-24

    In recent years we have developed large neural network models for the realization of complex cognitive tasks in a neural network architecture that resembles the network of the cerebral cortex. We have used networks of several cortical modules that contain two populations of neurons (one excitatory, one inhibitory). The excitatory populations in these so-called "cortical networks" are organized as a graph of Bidirectional Associative Memories (BAMs), where edges of the graph correspond to BAMs connecting two neural modules and nodes of the graph correspond to excitatory populations with associative feedback connections (and inhibitory interneurons). The neural code in each of these modules consists essentially of the firing pattern of the excitatory population, where it is mainly the subset of active neurons that codes the contents to be represented. The overall activity can be used to distinguish different properties of the represented patterns, properties that we need to distinguish and control when performing complex tasks such as language understanding with these cortical networks. The most important pattern properties or situations are: exactly fitting or matching input, incomplete information or a partially matching pattern, superposition of several patterns, conflicting information, and new information that is to be learned. We show simple simulations of these situations in one area or module and discuss how to distinguish these situations based on the overall internal activation of the module. This article is part of a Special Issue entitled "Neural Coding". Copyright © 2011 Elsevier B.V. All rights reserved.
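    A single BAM edge of such a graph can be illustrated with the classic bipolar outer-product construction; the tiny patterns below are our own toy example, not data from the article.

```python
# Minimal bipolar Bidirectional Associative Memory (BAM): one "edge" of the
# graph described above, linking a 4-neuron module to a 3-neuron module.
X = [[1, -1, 1, -1], [1, 1, -1, -1]]   # patterns in module 1
Y = [[1, 1, -1], [-1, 1, 1]]           # associated patterns in module 2

n, m = len(X[0]), len(Y[0])
# Hebbian outer-product learning: W[i][j] = sum_k X[k][i] * Y[k][j]
W = [[sum(x[i] * y[j] for x, y in zip(X, Y)) for j in range(m)] for i in range(n)]

def sign(v):
    return 1 if v >= 0 else -1

def recall_forward(x):
    """Module 1 -> module 2 through the BAM connection."""
    return [sign(sum(x[i] * W[i][j] for i in range(n))) for j in range(m)]

def recall_backward(y):
    """Module 2 -> module 1 (the 'bidirectional' direction)."""
    return [sign(sum(y[j] * W[i][j] for j in range(m))) for i in range(n)]
```

    Each stored pair is recalled exactly in both directions; in the cortical-network setting this bidirectional recall is what lets activity in one module complete or confirm a pattern in another.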

  17. Melting and evaporation analysis of the first wall in a water-cooled breeding blanket module under vertical displacement event by using the MARS code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Geon-Woo [Department of Nuclear Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of); Cho, Hyoung-Kyu, E-mail: chohk@snu.ac.kr [Department of Nuclear Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of); Park, Goon-Cherl [Department of Nuclear Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826 (Korea, Republic of); Im, Kihak [National Fusion Research Institute, 169-148 Gwahak-ro, Yuseong-gu, Daejeon 34133 (Korea, Republic of)

    2017-05-15

    Highlights: • Material phase change of the first wall was simulated for a vertical displacement event. • An in-house first wall module was developed to simulate melting and evaporation. • An effective heat capacity method and an evaporation model were proposed. • The MARS code was used to predict two-phase phenomena in the coolant channel. • The phase change simulation was performed by coupling MARS and the in-house module. - Abstract: Plasma facing components of tokamak reactors such as ITER or the Korean fusion demonstration reactor (K-DEMO) can be damaged by plasma instabilities. Plasma disruptions with high heat flux, such as a vertical displacement event (VDE), can cause melting and vaporization of plasma facing materials and burnout of coolant channels. In this study, to simulate melting and vaporization of the first wall in a water-cooled breeding blanket under a VDE, one-dimensional heat equations were solved numerically using an in-house first wall module that includes phase change models, an effective heat capacity method, and an evaporation model. For thermal-hydraulics, the in-house first wall analysis module was coupled with the nuclear reactor safety analysis code MARS to take advantage of its prediction capability for two-phase flow and critical heat flux (CHF) occurrence. The first wall was modeled according to the conceptual design of the K-DEMO, and a plasma disruption heat flux of 600 MW/m{sup 2} for 0.1 s was applied. The phase change simulation results were analyzed in terms of the melting and evaporation thicknesses and the occurrence of CHF. The thermal integrity of the blanket first wall is discussed to confirm whether the structural material melts under the given conditions.
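    As a rough illustration of the effective heat capacity method named in the highlights, the sketch below smears the latent heat over a narrow melting range in a 1-D explicit conduction solve under a 600 MW/m² pulse. All material numbers are illustrative placeholders (not K-DEMO data), and evaporation and the coupled MARS thermal-hydraulics are omitted.

```python
# 1-D explicit finite-difference sketch of the effective heat capacity method:
# the latent heat L is smeared over the melting range [Tm - dT, Tm + dT] by
# inflating the heat capacity there, so no explicit interface tracking is needed.
rho, c, k = 8000.0, 500.0, 50.0       # density, specific heat, conductivity (SI)
Tm, dT, L = 1700.0, 10.0, 2.5e5       # melting point, half-range, latent heat
dx, dt, N = 1.0e-3, 1.0e-3, 20        # grid spacing (m), time step (s), nodes
q = 6.0e8                             # disruption heat flux, 600 MW/m^2

def c_eff(temp):
    """Effective heat capacity: adds L/(2*dT) inside the mushy zone."""
    return c + (L / (2.0 * dT) if Tm - dT <= temp <= Tm + dT else 0.0)

T = [300.0] * N
for _ in range(100):                  # 0.1 s of heating, as in the abstract
    Tn = T[:]
    # heated surface cell (flux in, conduction out); insulated back face
    Tn[0] = T[0] + dt * (q + k * (T[1] - T[0]) / dx) / (rho * c_eff(T[0]) * dx)
    for i in range(1, N - 1):
        Tn[i] = T[i] + dt * k * (T[i-1] - 2*T[i] + T[i+1]) / (rho * c_eff(T[i]) * dx**2)
    Tn[-1] = Tn[-2]
    T = Tn

surface_melted = T[0] > Tm + dT       # plasma-facing node exceeds the melt range
```

    Even in this crude model the surface node melts while the back face stays near its initial temperature, reproducing the qualitative picture of a thin molten layer on an otherwise intact wall.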

  18. Coding Strategies and Implementations of Compressive Sensing

    Science.gov (United States)

    Tsai, Tsung-Han

    This dissertation studies coding strategies in computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity beyond conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by hundreds of times. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or
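    The multiplexing idea can be reduced to a minimal sketch: a coded (±1) sensing mask compresses a sparse scene into far fewer measurements than scene elements, and a matched-filter step recovers the single active element. This toy is ours, not the dissertation's hardware; a real system would use an iterative solver (e.g., OMP or ISTA) for k-sparse scenes.

```python
import random

random.seed(0)
n, m = 64, 24                         # scene size vs. number of coded samples

# Random +/-1 coding mask playing the role of the hardware modulation
Phi = [[random.choice((-1.0, 1.0)) for _ in range(n)] for _ in range(m)]

x = [0.0] * n
x[23] = 5.0                           # unknown 1-sparse scene

# Multiplexed measurement: each sample mixes the whole scene through one code
y = [sum(Phi[i][j] * x[j] for j in range(n)) for i in range(m)]

# Decode by matched filtering: the mask column most correlated with y wins
corr = [abs(sum(Phi[i][j] * y[i] for i in range(m))) for j in range(n)]
support = max(range(n), key=corr.__getitem__)
```

    The 64-element scene is recovered from 24 coded samples, the essence of trading a fixed detector budget for coded multiplexed sensitivity.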

  19. Code Development in Coupled PARCS/RELAP5 for Supercritical Water Reactor

    Directory of Open Access Journals (Sweden)

    Po Hu

    2014-01-01

    Full Text Available A new capability is added to the existing coupled code package PARCS/RELAP5 in order to analyze SCWR designs under supercritical pressure with separated water coolant and moderator channels. This expansion is carried out on both codes. In PARCS, modification is focused on extending the water property tables to supercritical pressure, modifying the variable mapping input file and related code modules for processing thermal-hydraulic information from separated coolant/moderator channels, and modifying the neutronics feedback module to deal with the separated coolant/moderator channels. In RELAP5, modification is focused on incorporating more accurate water properties near SCWR operation/transient pressure and temperature in the code. Confirming tests of the modifications are presented, and the major results from the extended code package are summarized.

  20. Evaluation of the efficiency and fault density of software generated by code generators

    Science.gov (United States)

    Schreur, Barbara

    1993-01-01

    Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. Software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking. Some check only the finished product, while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, comparison with in-house, manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is warranted.

  1. Quantum computing with Majorana fermion codes

    Science.gov (United States)

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  2. Performance analysis of joint multi-branch switched diversity and adaptive modulation schemes for spectrum sharing systems

    KAUST Repository

    Bouida, Zied

    2012-12-01

    Under the scenario of an underlay cognitive radio network, we propose in this paper two adaptive schemes using switched transmit diversity and adaptive modulation in order to increase the spectral efficiency of the secondary link and maintain a desired performance for the primary link. The proposed switching efficient scheme (SES) and bandwidth efficient scheme (BES) use the scan and wait combining technique (SWC) where a transmission occurs only when a branch with an acceptable performance is found, otherwise data is buffered. In these schemes, the modulation constellation size and the used transmit branch are determined to minimize the average number of switched branches and to achieve the highest spectral efficiency given the fading channel conditions, the required error rate performance, and a peak interference constraint to the primary receiver (PR). For delay-sensitive applications, we also propose two variations of the SES and BES schemes using power control (SES-PC and BES-PC) where the secondary transmitter (ST) starts sending data using a nominal power level which is selected in order to minimize the average delay introduced by the SWC technique. We demonstrate through numerical examples that the BES scheme increases the capacity of the secondary link when compared to the SES scheme. This spectral efficiency improvement comes at the expense of an increased average number of switched branches and thus an increased average delay. We also show that the SES-PC and the BES-PC schemes minimize the average delay while satisfying the same spectral efficiency as the SES and BES schemes, respectively. © 2012 IEEE.
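    The branch-scanning logic behind the two schemes can be sketched as follows, in the spirit of the abstract: SES stops at the first acceptable branch, while BES keeps scanning for the branch supporting the most bits per symbol. The SNR thresholds per constellation are illustrative placeholders, not values from the paper, and the interference constraint is folded into the thresholds for simplicity.

```python
# Scan-and-wait combining (SWC) with adaptive modulation, sketched for the
# switching-efficient (SES) and bandwidth-efficient (BES) schemes.
THRESHOLDS = [(18.0, 6), (12.0, 4), (6.0, 2)]   # (min SNR in dB, bits/symbol)

def ses_select(branch_snrs_db):
    """SES: first branch meeting any threshold wins; None means buffer and wait."""
    for idx, snr in enumerate(branch_snrs_db):
        for min_snr, bits in THRESHOLDS:        # highest rate first
            if snr >= min_snr:
                return idx, bits
    return None

def bes_select(branch_snrs_db):
    """BES: scan all branches and keep the one supporting the most bits/symbol."""
    best = None
    for idx, snr in enumerate(branch_snrs_db):
        for min_snr, bits in THRESHOLDS:
            if snr >= min_snr:
                if best is None or bits > best[1]:
                    best = (idx, bits)
                break                           # best rate for this branch found
    return best
```

    With branch SNRs of 7 dB and 20 dB, SES settles for the first branch at 2 bits/symbol (fewer switches), whereas BES switches to the second branch for 6 bits/symbol (higher spectral efficiency), mirroring the trade-off reported in the paper.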

  3. Development and validation of a nodal code for core calculation

    International Nuclear Information System (INIS)

    Nowakowski, Pedro Mariano

    2004-01-01

    The code RHENO solves the multigroup three-dimensional diffusion equation using a nodal method of polynomial expansion. A comparative study has been made between this code and current international nodal diffusion codes, showing that RHENO is up to date. RHENO has been integrated into a calculation line and has been extended to perform burnup calculations. Two methods for pin power reconstruction were developed: modulation and imbedded. The modulation method has been implemented in a program, while the implementation of the imbedded method will be concluded shortly. The validation carried out (which includes experimental data from an MPR) shows very good results and calculation efficiency.

  4. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 1, Part 2: Control modules S1--H1; Revision 5

    International Nuclear Information System (INIS)

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  5. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 2, Part 3: Functional modules F16--F17; Revision 5

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  6. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 2, Part 3: Functional modules F16--F17; Revision 5

    International Nuclear Information System (INIS)

    1997-03-01

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  7. A structure-based approach to evaluating product adaptability in adaptable design

    International Nuclear Information System (INIS)

    Cheng, Qiang; Liu, Zhifeng; Cai, Ligang; Zhang, Guojun; Gu, Peihua

    2011-01-01

    Adaptable design, as a new design paradigm, involves creating designs and products that can be easily changed to satisfy different requirements. In this paper, two types of product adaptability are proposed, essential adaptability and behavioral adaptability, and a model for product adaptability evaluation is developed by measuring each of them. The essential adaptability evaluation first analyzes the independence of function requirements and function modules based on axiomatic design, and then measures the adaptability of interfaces with three indices. The behavioral adaptability, reflected by the performance of adaptable requirements after adaptation, is measured based on the Kano model. Finally, the effectiveness of the proposed method is demonstrated by an illustrative example of the motherboard of a personal computer. The results show that the method can evaluate and reveal the adaptability of a product in essence, and is of directive significance for improving designs and for innovative design

  8. Molecular adaptation during adaptive radiation in the Hawaiian endemic genus Schiedea.

    Directory of Open Access Journals (Sweden)

    Maxim V Kapralov

    2006-12-01

    Full Text Available "Explosive" adaptive radiations on islands remain one of the most puzzling evolutionary phenomena. The rate of phenotypic and ecological adaptation is extremely fast during such events, suggesting that many genes may be under fairly strong selection. However, no evidence for adaptation at the level of protein-coding genes had been found, so it has been suggested that selection may work mainly on regulatory elements. Here we report the first evidence that positive selection does operate at the level of protein-coding genes during rapid adaptive radiations. We studied molecular adaptation in the Hawaiian endemic plant genus Schiedea (Caryophyllaceae), which includes closely related species with a striking range of morphological and ecological forms, varying from rainforest vines to woody shrubs growing in desert-like conditions on cliffs. Given the remarkable difference in photosynthetic performance between Schiedea species from different habitats, we focused on the "photosynthetic" Rubisco enzyme, whose efficiency is known to be a limiting step in plant photosynthesis. We demonstrate that the chloroplast rbcL gene, encoding the large subunit of the Rubisco enzyme, evolved under strong positive selection in Schiedea. Adaptive amino acid changes occurred in functionally important regions of Rubisco that interact with Rubisco activase, a chaperone which promotes and maintains the catalytic activity of Rubisco. Interestingly, positive selection acting on rbcL might have caused favorable cytotypes to spread across several Schiedea species. We report the first evidence for adaptive changes at the DNA and protein sequence level that may have been associated with the evolution of photosynthetic performance and colonization of new habitats during a recent adaptive radiation in an island plant genus. This illustrates how small changes at the molecular level may change ecological species performance and helps us to understand the molecular bases of extremely

  9. A restructuring proposal based on MELCOR for severe accident analysis code development

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sun Hee; Song, Y. M.; Kim, D. H. [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-03-01

    In order to develop a template based on the existing MELCOR code, the data saving and transferring methods currently used in MELCOR are addressed first. Then a naming convention for the constructed module is suggested, and an automatic program to convert old variables into new derived-type variables has been developed. Finally, a restructured module for the SPR package has been developed to be applied to MELCOR. The current MELCOR code reserves fixed-size storage for four different data types and manages the variable-sized data within the storage limit by storing the data in stacked packages. It uses pointers to identify the variables between the packages. This technique makes the meaning of the variables difficult to grasp and wastes memory. New features of FORTRAN90, however, make it possible to allocate storage dynamically and to use user-defined data types, which led to the development of a restructured module for the SPR package. The developed module allows efficient memory handling and easier understanding of the code. The validation of the template has been done by comparing the results of the modified code with those from the existing code, and it is confirmed that the results are the same. The template for the SPR package suggested in this report points toward extension of the template to the entire code. It is expected that the template will accelerate the code domestication thanks to direct understanding of each variable and easy implementation of modified or newly developed models. 3 refs., 15 figs., 16 tabs. (Author)

  10. Ultra-fast ipsilateral DPOAE adaptation not modulated by attention?

    Science.gov (United States)

    Dalhoff, Ernst; Zelle, Dennis; Gummer, Anthony W.

    2018-05-01

    Efferent stimulation of outer hair cells is supposed to attenuate cochlear amplification of sound waves and is accompanied by reduced DPOAE amplitudes. Recently, a method using two subsequent f2 pulses during presentation of a longer f1 pulse was introduced to measure fast ipsilateral adaptation effects on separated DPOAE components. Compensating primary-tone onsets for their latencies at the f2-tonotopic place, the average adaptation measured in four normal-hearing subjects was 5.0 dB with a time constant below 5 ms. In the present study, two experiments were performed to determine the origin of this ultra-fast ipsilateral adaptation effect. The first experiment measured ultra-fast ipsilateral adaptation using a two-pulse paradigm at three frequencies in the four subjects, while controlling for visual attention of the subjects. The other experiment also controlled for visual attention, but utilized a sequence of f2 short pulses in the presence of a continuous f1 tone to sample ipsilateral adaptation effects with longer time constants in eight subjects. In the first experiment, no significant change in the ultra-fast adaptation between non-directed attention and visual attention could be detected. In contrast, the second experiment revealed significant changes in the magnitude of the slower ipsilateral adaptation in the visual-attention condition. In conclusion, the lack of an attentional influence indicates that the ultra-fast ipsilateral DPOAE adaptation is not solely mediated by the medial olivocochlear reflex.

  11. Studies on DANESS Code Modeling

    International Nuclear Information System (INIS)

    Jeong, Chang Joon

    2009-09-01

    The DANESS code modeling study has been performed. DANESS code is widely used in a dynamic fuel cycle analysis. Korea Atomic Energy Research Institute (KAERI) has used the DANESS code for the Korean national nuclear fuel cycle scenario analysis. In this report, the important models such as Energy-demand scenario model, New Reactor Capacity Decision Model, Reactor and Fuel Cycle Facility History Model, and Fuel Cycle Model are investigated. And, some models in the interface module are refined and inserted for Korean nuclear fuel cycle model. Some application studies have also been performed for GNEP cases and for US fast reactor scenarios with various conversion ratios

  12. Adaptive Relay Activation in the Network Coding Protocols

    DEFF Research Database (Denmark)

    Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Fitzek, Frank

    2015-01-01

    State-of-the-art network-coding-based routing protocols exploit link quality information to compute the transmission rate in the intermediate nodes. However, link quality discovery protocols are usually inaccurate and introduce overhead in wireless mesh networks. In this paper, we present...

  13. Improvements to SOIL: An Eulerian hydrodynamics code

    International Nuclear Information System (INIS)

    Davis, C.G.

    1988-04-01

    Possible improvements to SOIL, an Eulerian hydrodynamics code that can do coupled radiation diffusion and strength of materials, are presented in this report. Our research is based on the inspection of other Eulerian codes and theoretical reports on hydrodynamics. Several conclusions from the present study suggest that some improvements are in order, such as second-order advection, adaptive meshes, and speedup of the code by vectorization and/or multitasking. 29 refs., 2 figs.
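    The case for second-order advection can be demonstrated in a few lines: advect a square pulse once around a periodic domain with first-order upwind and with the second-order Lax-Wendroff scheme (one common second-order choice; the report does not prescribe a specific scheme). Upwind's numerical diffusion smears the pulse far more than the second-order update.

```python
# First-order upwind vs. second-order Lax-Wendroff linear advection,
# square pulse, periodic domain, constant positive velocity.
N, c = 100, 0.5                       # cells, Courant number (u*dt/dx)

def step_upwind(q):
    return [q[i] - c * (q[i] - q[i-1]) for i in range(N)]

def step_lax_wendroff(q):
    return [q[i] - 0.5*c*(q[(i+1) % N] - q[i-1])
                 + 0.5*c*c*(q[(i+1) % N] - 2*q[i] + q[i-1]) for i in range(N)]

pulse = [1.0 if 10 <= i < 20 else 0.0 for i in range(N)]
q1, q2 = pulse[:], pulse[:]
for _ in range(int(N / c)):           # one full revolution of the domain
    q1, q2 = step_upwind(q1), step_lax_wendroff(q2)

# After one revolution the upwind pulse is heavily smeared, while the
# second-order scheme retains a much sharper peak (at the cost of some
# dispersive wiggles); both schemes conserve the total quantity.
```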

  14. Adaptive electron beam shaping using a photoemission gun and spatial light modulator

    Science.gov (United States)

    Maxson, Jared; Lee, Hyeri; Bartnik, Adam C.; Kiefer, Jacob; Bazarov, Ivan

    2015-02-01

    The need for precisely defined beam shapes in photoelectron sources has been well established. In this paper, we use a spatial light modulator and simple shaping algorithm to create arbitrary, detailed transverse laser shapes with high fidelity. We transmit this shaped laser to the photocathode of a high voltage dc gun. Using beam currents where space charge is negligible, and using an imaging solenoid and fluorescent viewscreen, we show that the resultant beam shape preserves these detailed features with similar fidelity. Next, instead of transmitting a shaped laser profile, we use an active feedback on the unshaped electron beam image to create equally accurate and detailed shapes. We demonstrate that this electron beam feedback has the added advantage of correcting for electron optical aberrations, yielding shapes without skew. The method may serve to provide precisely defined electron beams for low current target experiments, space-charge dominated beam commissioning, as well as for online adaptive correction of photocathode quantum efficiency degradation.
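    The image-based feedback idea can be caricatured in one dimension: iteratively scale each mask pixel by the mismatch between the target and measured profiles until the measured beam matches the target, automatically compensating a spatially varying response that the loop never observes directly (analogous to quantum-efficiency variations on the photocathode). All numbers here are invented for illustration.

```python
# Toy per-pixel feedback loop in the spirit of beam-shape feedback: correct
# the laser mask from the measured beam image until it matches the target.
target   = [0.0, 1.0, 1.0, 0.5, 0.0]          # desired transverse profile
response = [0.9, 0.6, 1.0, 0.8, 0.7]          # hidden per-pixel efficiency
mask = target[:]                               # start from the target shape

for _ in range(30):
    measured = [r * m for r, m in zip(response, mask)]
    # multiplicative update: boost pixels that came out too dim,
    # attenuate pixels that came out too bright; dark pixels stay dark
    mask = [m if t == 0.0 else m * (1.0 + 0.5 * (t - meas) / t)
            for m, t, meas in zip(mask, target, measured)]

error = max(abs(r * m - t) for r, m, t in zip(response, mask, target))
```

    The loop converges because each pixel's update is a contraction toward mask = target/response; the real experiment closes the same loop through the electron-beam image, which additionally folds in the gun and solenoid optics.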

  15. Adaptive electron beam shaping using a photoemission gun and spatial light modulator

    Directory of Open Access Journals (Sweden)

    Jared Maxson

    2015-02-01

    Full Text Available The need for precisely defined beam shapes in photoelectron sources has been well established. In this paper, we use a spatial light modulator and simple shaping algorithm to create arbitrary, detailed transverse laser shapes with high fidelity. We transmit this shaped laser to the photocathode of a high voltage dc gun. Using beam currents where space charge is negligible, and using an imaging solenoid and fluorescent viewscreen, we show that the resultant beam shape preserves these detailed features with similar fidelity. Next, instead of transmitting a shaped laser profile, we use an active feedback on the unshaped electron beam image to create equally accurate and detailed shapes. We demonstrate that this electron beam feedback has the added advantage of correcting for electron optical aberrations, yielding shapes without skew. The method may serve to provide precisely defined electron beams for low current target experiments, space-charge dominated beam commissioning, as well as for online adaptive correction of photocathode quantum efficiency degradation.

  16. An optimization of a GPU-based parallel wind field module

    International Nuclear Information System (INIS)

    Pinheiro, André L.S.; Schirru, Roberto

    2017-01-01

    Atmospheric radionuclide dispersion systems (ARDS) are important tools to predict the impact of radioactive releases from Nuclear Power Plants and guide the evacuation of people from affected areas. Four modules comprise an ARDS: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The slowest is the Wind Field module, which was previously parallelized using the CUDA C language. The stated purpose of this work is to show the speedup gained by optimizing the already parallel code of the GPU-based Wind Field module, based on the WEST model (Extrapolated from Stability and Terrain). After the parallelization of the wind field module, it was observed that some CUDA processors became idle, reducing the speedup. This work proposes a way of allocating these idle CUDA processors in order to increase the speedup. An acceleration of about 4 times can be seen in the comparative case study between the regular CUDA code and the optimized CUDA code. These results are quite motivating and point out that even after parallelization of a code, parallel code optimization should be taken into account. (author)

  17. An optimization of a GPU-based parallel wind field module

    Energy Technology Data Exchange (ETDEWEB)

    Pinheiro, André L.S.; Schirru, Roberto [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Pereira, Cláudio M.N.A., E-mail: apinheiro99@gmail.com, E-mail: schirru@lmp.ufrj.br, E-mail: cmnap@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    Atmospheric radionuclide dispersion systems (ARDS) are important tools to predict the impact of radioactive releases from Nuclear Power Plants and to guide the evacuation of people from affected areas. Four modules comprise an ARDS: Source Term, Wind Field, Plume Dispersion and Dose Calculations. The slowest is the Wind Field module, which had previously been parallelized using the CUDA C language. The purpose of this work is to show the speedup gained by optimizing the already parallel code of the GPU-based Wind Field module, based on the WEST (Extrapolated from Stability and Terrain) model. Due to the way the wind field module was parallelized, it was observed that some CUDA processors became idle, reducing the speedup. This work proposes a way of allocating these idle CUDA processors in order to increase the speedup. An acceleration of about 4 times is seen in a comparative case study between the regular CUDA code and the optimized CUDA code. These results are quite motivating and point out that even after a code has been parallelized, optimization of the parallel code should still be considered. (author)

  18. Blind Signal Classification via Sparse Coding

    Science.gov (United States)

    2016-04-10

    Blind Signal Classification via Sparse Coding. Youngjune Gwon (MIT Lincoln Laboratory, gyj@ll.mit.edu); Siamak Dastangoo (MIT Lincoln Laboratory, sia...) ...achieve blind signal classification with no prior knowledge about signals (e.g., MCS, pulse shaping) in an arbitrary RF channel. Since modulated RF... classification method. Our results indicate that we can separate different classes of digitally modulated signals from blind sampling with 70.3% recall and 24.6...
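The core idea of classification via sparse coding can be sketched in a few lines: code a signal against a dictionary and assign it to the class whose atom captures the most energy. The dictionary, the toy "modulation classes", and the 1-sparse (single-atom) coding step below are all illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def atom(freq, n=128):
    # Unit-norm cosine atom; distinct integer frequencies are orthogonal.
    t = np.arange(n) / n
    a = np.cos(2 * np.pi * freq * t)
    return a / np.linalg.norm(a)

# Dictionary: one atom per "modulation class" (toy stand-ins).
D = np.stack([atom(5), atom(13)])          # shape (2, 128)
labels = ["class_A", "class_B"]

def classify(x):
    # 1-sparse code: pick the atom with the largest |inner product|.
    scores = np.abs(D @ x)
    return labels[int(np.argmax(scores))]

x = atom(13) + 0.1 * rng.standard_normal(128)  # noisy class_B signal
print(classify(x))  # class_B
```

Real systems learn the dictionary from blindly sampled data and use multi-atom sparse solvers, but the decision rule, strongest sparse response wins, is the same.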

  19. Depletion methodology in the 3-D whole core transport code DeCART

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog; Cho, Jin Young; Zee, Sung Quun

    2005-02-01

    The three-dimensional whole-core transport code DeCART has been developed to provide the characteristics of a numerical reactor, partly replacing experiments. The code adopts a deterministic method to simulate neutron behavior with the fewest possible assumptions and approximations. This neutronics code is also coupled with a thermal-hydraulics (CFD) code and a thermo-mechanical code to simulate combined effects. A depletion module has been implemented in DeCART to predict the depleted composition of the fuel. The exponential matrix method of ORIGEN-2 is used for the depletion calculation. The library, which includes decay constants, the yield matrix and other data, has been greatly simplified for calculation efficiency. This report summarizes the theoretical background and includes verification of the depletion module in DeCART by performing benchmark calculations.
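The exponential matrix method mentioned above solves the depletion (Bateman) equations dN/dt = A·N via N(t) = exp(At)·N(0). A minimal sketch on a two-nuclide decay chain, with illustrative rate constants and a truncated Taylor series for the matrix exponential (not DeCART/ORIGEN-2 data or code), looks like:

```python
import numpy as np

def expm_taylor(M, terms=30):
    # Truncated Taylor series exp(M) = sum_k M^k / k!
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

lam1, lam2 = 0.3, 0.1           # decay constants (1/s), parent -> daughter
A = np.array([[-lam1, 0.0],
              [ lam1, -lam2]])  # Bateman matrix for a 2-nuclide chain

N0 = np.array([1.0, 0.0])       # start with pure parent
t = 5.0
N = expm_taylor(A * t) @ N0

# The parent concentration follows exp(-lam1*t) analytically.
print(N[0], np.exp(-lam1 * t))
```

Production codes replace the naive Taylor sum with scaling-and-squaring or handle short-lived nuclides separately, since stiff decay chains make the plain series inaccurate.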

  20. Efficient Spike-Coding with Multiplicative Adaptation in a Spike Response Model

    NARCIS (Netherlands)

    S.M. Bohte (Sander)

    2012-01-01

    Neural adaptation underlies the ability of neurons to maximize encoded information over a wide dynamic range of input stimuli. While adaptation is an intrinsic feature of neuronal models like the Hodgkin-Huxley model, the challenge is to integrate adaptation in models of neural...
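The kind of multiplicative adaptation discussed above can be illustrated with a toy leaky integrate-and-fire neuron whose threshold is scaled up at every spike and relaxes back between spikes. All constants and the update rule below are illustrative assumptions, not the spike response model of the paper:

```python
import numpy as np

def simulate(current, dt=1.0, tau_m=20.0, tau_th=100.0,
             v_reset=0.0, theta0=1.0, scale=1.5):
    v, theta = 0.0, theta0
    spikes = []
    for i, I in enumerate(current):
        v += dt * (-v / tau_m + I)                # leaky integration
        theta += dt * (theta0 - theta) / tau_th   # threshold relaxes to theta0
        if v >= theta:
            spikes.append(i)
            v = v_reset
            theta *= scale                        # multiplicative adaptation
    return spikes, theta

# A strong constant input: adaptation should stretch inter-spike intervals.
spikes, theta_end = simulate(np.full(500, 0.2))
isis = np.diff(spikes)
print(len(spikes), isis[0], isis[-1])
```

Under constant drive the firing rate decays as the threshold ratchets upward, which is the adaptive rescaling of the neuron's operating range that the abstract refers to.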