WorldWideScience

Sample records for level adaptive coding

  1. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low-density parity-check accumulate (LDPCA) codes.
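    A minimal sketch of the feedback-driven rate adaptation idea described above (not the authors' exact protocol): the decoder keeps requesting additional parity over the feedback channel until decoding succeeds or the rate budget is exhausted. The toy_decode success rule and all parameter values are illustrative assumptions.

    ```python
    import random

    def toy_decode(x_est, parity_bits, errors):
        """Stand-in for a BCH decoder: succeeds once the parity budget
        covers the number of symbol mismatches (purely illustrative)."""
        return parity_bits >= 10 * errors

    def rate_adaptive_decode(x, y, max_parity=64, step=8):
        errors = sum(a != b for a, b in zip(x, y))  # hidden from a real decoder
        parity = step
        while parity <= max_parity:                 # each pass = one feedback round trip
            if toy_decode(y, parity, errors):
                return parity                       # rate actually spent on this block
            parity += step                          # request more parity via feedback
        return None                                 # decoding failure at the maximum rate

    random.seed(1)
    x = [random.randint(0, 1) for _ in range(63)]
    y = [b if random.random() > 0.03 else 1 - b for b in x]  # highly correlated side info
    print("parity bits used:", rate_adaptive_decode(x, y))
    ```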

  2. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Burr Alister

    2009-01-01

    Full Text Available This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10⁻²) and low (10⁻⁴) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.

  3. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Lei Ye

    2009-01-01

    Full Text Available This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10⁻²) and low (10⁻⁴) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
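    A hedged sketch of the subband adaptation step common to both systems: each subband maps its measured SNR to a (modulation, turbo rate) pair. The switching thresholds below are assumptions for illustration, not the thresholds used in the paper.

    ```python
    MODES = [  # (min SNR dB, modulation, bits/symbol, turbo rate) -- illustrative values
        (22.0, "64QAM", 6, 1/2),
        (16.0, "16QAM", 4, 1/2),
        (12.0, "8AMPM", 3, 1/2),
        (7.0,  "QPSK",  2, 1/2),
        (3.0,  "BPSK",  1, 1/2),
        (0.0,  "BPSK",  1, 1/3),   # most robust fallback
    ]

    def select_mode(snr_db):
        for threshold, modulation, bits, rate in MODES:
            if snr_db >= threshold:
                return modulation, bits * rate      # net bits/symbol after coding
        return None, 0.0                            # below cutoff: no transmission

    subband_snrs = [25.1, 14.3, 5.2, 1.0, -2.0]     # per-subband channel estimates
    for snr in subband_snrs:
        print(snr, "->", select_mode(snr))
    ```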

  4. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Although there are various source code plagiarism detection approaches, only a few focus on low-level representation for measuring similarity. Most rely on the lexical token sequence extracted from source code. In our view, a low-level representation is more beneficial than lexical tokens, since its form is more compact than the source code itself: it retains only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach that relies on low-level representation. As a case study, we focus on .NET programming languages, with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
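    The similarity step can be illustrated with a standard local alignment over instruction token sequences; the sketch below is a plain Smith-Waterman scorer with fixed illustrative weights, whereas the paper's Adaptive Local Alignment additionally adapts the weighting per token.

    ```python
    def local_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
        """Smith-Waterman local alignment score over two token sequences."""
        cols = len(b) + 1
        prev = [0] * cols
        best = 0
        for i in range(1, len(a) + 1):
            curr = [0] * cols
            for j in range(1, cols):
                diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                curr[j] = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
                best = max(best, curr[j])
            prev = curr
        return best

    # Hypothetical CIL instruction streams from two submissions
    cil_a = ["ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
    cil_b = ["ldarg.1", "ldarg.0", "add", "stloc.0", "ldloc.0", "ret"]
    print("similarity score:", local_alignment_score(cil_a, cil_b))
    ```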

  5. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    Science.gov (United States)

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  6. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless compression scheme for still images based on an adaptive arithmetic coding algorithm. The algorithm combines an adaptive probability model with predictive coding to increase the compression rate while ensuring the quality of the decoded image. Using an adaptive model for each encoded image block, it dynamically estimates the probability of the relevant image block, and the decoded image block can accurately recover the encoded image according to the code book information. The adopted adaptive arithmetic coding algorithm greatly improves the image compression rate, and the results show that it is an effective compression technology.
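    A minimal sketch of the adaptive probability model that drives such a coder: per-block symbol probabilities are re-estimated from running counts, and the ideal arithmetic-code cost follows from the model. Names and the smoothing choice are assumptions; the arithmetic coder itself is omitted.

    ```python
    import math

    class AdaptiveModel:
        def __init__(self, alphabet_size):
            self.counts = [1] * alphabet_size   # Laplace smoothing: no zero probabilities

        def probability(self, symbol):
            return self.counts[symbol] / sum(self.counts)

        def update(self, symbol):
            self.counts[symbol] += 1            # adapt after each coded symbol

    model = AdaptiveModel(256)                  # one model per image block, 8-bit pixels
    bits = 0.0
    for pixel in [10, 10, 10, 200, 10]:
        bits += -math.log2(model.probability(pixel))  # ideal arithmetic-code cost
        model.update(pixel)
    print(f"estimated code length: {bits:.1f} bits")
    ```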

  7. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.

  8. Adaptive RAC codes employing statistical channel evaluation ...

    African Journals Online (AJOL)

    An adaptive encoding technique using row and column array (RAC) codes employing a different number of parity columns that depends on the channel state is proposed in this paper. The trellises of the proposed adaptive codes and a statistical channel evaluation technique employing these trellises are designed and ...

  9. Context adaptive coding of bi-level images

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2008-01-01

    With the advent of sequential arithmetic coding, the focus of highly efficient lossless data compression is placed on modelling the data. Rissanen's Algorithm Context provided an elegant solution to universal coding with optimal convergence rate. Context based arithmetic coding laid the grounds f...

  10. Adaptive Wavelet Coding Applied in a Wireless Control System.

    Science.gov (United States)

    Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O

    2017-12-13

    Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  11. Adaptive Wavelet Coding Applied in a Wireless Control System

    Directory of Open Access Journals (Sweden)

    Felipe O. S. Gama

    2017-12-01

    Full Text Available Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  12. Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Cocco Giuseppe

    2010-01-01

    Full Text Available Network coding makes it possible to exploit the spatial diversity naturally present in mobile wireless networks and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it to a network context. The method consists of adapting the information rate for each receiving node according to its channel status and independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of slightly higher complexity at the transmitter. This configuration allows rate adaptation to be performed while fully preserving the benefits of channel and network coding. We carry out an information-theoretical analysis of this approach and of that typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.

  13. Adaptive format conversion for scalable video coding

    Science.gov (United States)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.

  14. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    Science.gov (United States)

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.

  15. Adaptable recursive binary entropy coding technique

    Science.gov (United States)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

    We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.

  16. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2014-12-29

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.
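    One way to picture the adaptation (an illustrative dimensioning rule, not the paper's mechanism): the number of network-coded packets sent per generation is scaled to the packet loss rate observed at the network level and a reliability margin supplied by the application.

    ```python
    import math

    def coded_packets_needed(k, loss_rate, margin=1.2):
        """Packets to send so that roughly k innovative packets survive,
        with a reliability margin (all values illustrative)."""
        return math.ceil(margin * k / (1.0 - loss_rate))

    for p in (0.05, 0.20, 0.40):     # loss rates observed at the network level
        print(f"loss={p:.2f}: send {coded_packets_needed(16, p)} packets for k=16")
    ```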

  17. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    Science.gov (United States)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
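    A one-dimensional sketch of the core redistribution idea: points are moved so that equal fractions of a gradient-based weight fall between neighbors (equidistribution). SAGE's spring-force formulation, torsion terms, and tridiagonal solver are richer than this; the function below is only illustrative.

    ```python
    import numpy as np

    def adapt_grid_1d(x, f, strength=1.0):
        """Redistribute grid points so each interval carries equal weight,
        where weight grows with the local solution gradient."""
        w = 1.0 + strength * np.abs(np.gradient(f, x))      # weight ~ local gradient
        cumulative = np.concatenate(
            ([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
        targets = np.linspace(0.0, cumulative[-1], len(x))  # equal weight per interval
        return np.interp(targets, cumulative, x)

    x = np.linspace(0.0, 1.0, 41)
    f = np.tanh(50.0 * (x - 0.5))                 # shock-like profile at x = 0.5
    x_new = adapt_grid_1d(x, f, strength=20.0)
    print("smallest cell:", np.diff(x_new).min())  # points cluster at the gradient
    ```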

  18. Adaptive decoding of convolutional codes

    Science.gov (United States)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
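    The syndrome idea can be made concrete for a rate-1/2 feedforward convolutional code: any error-free stream satisfies g2*c1 + g1*c2 = 0 (mod 2), since both terms equal g1*g2*u, so a zero syndrome lets the decoder skip full Viterbi processing. The (7,5) octal generators below are a textbook example, not necessarily the code used in the paper.

    ```python
    def conv_mod2(g, bits):
        """Binary polynomial convolution (mod-2 FIR filtering)."""
        out = [0] * (len(bits) + len(g) - 1)
        for i, b in enumerate(bits):
            if b:
                for j, tap in enumerate(g):
                    out[i + j] ^= tap
        return out

    def syndrome(r1, r2, g1=(1, 1, 1), g2=(1, 0, 1)):   # octal (7,5) example code
        s1, s2 = conv_mod2(g2, r1), conv_mod2(g1, r2)
        return [a ^ b for a, b in zip(s1, s2)]

    u = [1, 0, 1, 1, 0, 0, 1]                            # information bits
    c1, c2 = conv_mod2((1, 1, 1), u), conv_mod2((1, 0, 1), u)
    print("clean syndrome zero?", not any(syndrome(c1, c2)))
    c1[3] ^= 1                                           # inject one channel error
    print("syndrome flags error?", any(syndrome(c1, c2)))
    ```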

  19. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Directory of Open Access Journals (Sweden)

    Mohammad Abdur Razzaque

    2014-12-01

    Full Text Available Wireless body sensor networks (WBSNs for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS, in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network’s QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.

  20. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485

  1. Adaptive decoding of convolutional codes

    Directory of Open Access Journals (Sweden)

    K. Hueske

    2007-06-01

    Full Text Available Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  2. Context quantization by minimum adaptive code length

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, Xiaolin

    2007-01-01

    Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....

  3. Adaptive Modulation and Coding for LTE Wireless Communication

    Science.gov (United States)

    Hadi, S. S.; Tiong, T. C.

    2015-04-01

    Long Term Evolution (LTE) is the new upgrade path for carriers with both GSM/UMTS and CDMA2000 networks. LTE is targeted to become the first global mobile phone standard, regardless of the barrier posed by the different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase the network capacity or downlink data rates. Various modulation types, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM), are discussed. Spatial multiplexing techniques for a 4×4 MIMO antenna configuration are studied. With channel state information feedback from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to the mobile wireless channel conditions, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose the modulation type and forward error correction (FEC) coding rate.
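    A sketch of the AMC decision in table form: the receiver's channel quality indicator (CQI) selects the most aggressive modulation-and-coding scheme it supports. The shortened table below is an assumption for illustration; the actual mapping is standardized in 3GPP TS 36.213.

    ```python
    MCS_TABLE = {  # CQI index: (modulation, approximate code rate) -- illustrative subset
        1: ("QPSK", 0.08), 4: ("QPSK", 0.30), 7: ("16QAM", 0.37),
        10: ("64QAM", 0.45), 13: ("64QAM", 0.75), 15: ("64QAM", 0.93),
    }

    def select_mcs(cqi):
        usable = [k for k in MCS_TABLE if k <= cqi]
        if not usable:
            return None                      # channel too poor to schedule
        return MCS_TABLE[max(usable)]        # most aggressive MCS the CQI allows

    for reported_cqi in (2, 8, 15):
        print(reported_cqi, "->", select_mcs(reported_cqi))
    ```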

  4. Adaptive Space–Time Coding Using ARQ

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    We study the energy-limited outage probability of the block space-time coding (STC)-based systems utilizing automatic repeat request (ARQ) feedback and adaptive power allocation. Taking the ARQ feedback costs into account, we derive closed

  5. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    Science.gov (United States)

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  6. A multiobjective approach to the genetic code adaptability problem.

    Science.gov (United States)

    de Oliveira, Lariza Laura; de Oliveira, Paulo S L; Tinós, Renato

    2015-02-19

    The organization of the canonical code has intrigued researchers since it was first described. If we consider all codes mapping the 64 codons into 20 amino acids and one stop codon, there are more than 1.51×10⁸⁴ possible genetic codes. The main question related to the organization of the genetic code is why exactly the canonical code was selected among this huge number of possible genetic codes. Many researchers argue that the organization of the canonical code is a product of natural selection and that the code's robustness against mutations would support this hypothesis. In order to investigate the natural selection hypothesis, some researchers employ optimization algorithms to identify regions of the genetic code space where best codes, according to a given evaluation function, can be found (engineering approach). The optimization process uses only one objective to evaluate the codes, generally based on the robustness for an amino acid property. Only one objective is also employed in the statistical approach for the comparison of the canonical code with random codes. We propose a multiobjective approach where two or more objectives are considered simultaneously to evaluate the genetic codes. In order to test our hypothesis that the multiobjective approach is useful for the analysis of the genetic code adaptability, we implemented a multiobjective optimization algorithm where two objectives are simultaneously optimized. Using as objectives the robustness against mutation with the amino acids properties polar requirement (objective 1) and robustness with respect to hydropathy index or molecular volume (objective 2), we found solutions closer to the canonical genetic code in terms of robustness, when compared with the results using only one objective reported by other authors. Using more objectives, more optimal solutions are obtained and, as a consequence, more information can be used to investigate the adaptability of the genetic code. The multiobjective approach
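    The multiobjective comparison can be sketched with a Pareto-dominance filter over per-property robustness scores; the score values and two-objective setup below are illustrative assumptions, standing in for robustness computed over all single-nucleotide mutations of a candidate code.

    ```python
    def dominates(a, b):
        """True if objective vector a is no worse than b everywhere and strictly
        better somewhere (lower = more robust in this sketch)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(scores):
        """Keep only candidates not dominated by any other candidate."""
        return [s for s in scores if not any(dominates(t, s) for t in scores if t != s)]

    # (polar-requirement robustness, hydropathy robustness) for hypothetical codes
    candidates = [(2.1, 4.0), (1.8, 4.5), (2.5, 3.2), (1.9, 4.4), (3.0, 5.0)]
    print("non-dominated codes:", pareto_front(candidates))
    ```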

  7. Adaptation of HAMMER computer code to CYBER 170/750 computer

    International Nuclear Information System (INIS)

    Pinheiro, A.M.B.S.; Nair, R.P.K.

    1982-01-01

    The adaptation of HAMMER computer code to CYBER 170/750 computer is presented. The HAMMER code calculates cell parameters by multigroup transport theory and reactor parameters by few group diffusion theory. The auxiliary programs, the carried out modifications and the use of HAMMER system adapted to CYBER 170/750 computer are described. (M.C.K.) [pt

  8. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  9. Adaptive variable-length coding for efficient compression of spacecraft television data.

    Science.gov (United States)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
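    The per-block adaptation of the Basic Compressor can be sketched as follows: for each block of 21 samples, compute the encoded length under a few candidate codes and keep the cheapest. The candidate set here (Golomb-Rice codes with k = 0, 1, 2, where k = 0 is plain unary) is an illustrative stand-in for the paper's concatenated codes.

    ```python
    def rice_cost(block, k):
        """Golomb-Rice cost: unary quotient ((v >> k) + 1 bits) plus k remainder bits."""
        return sum((v >> k) + 1 + k for v in block)

    def best_code(block, ks=(0, 1, 2)):
        """Pick the candidate code that minimizes the encoded length of the block."""
        costs = {k: rice_cost(block, k) for k in ks}
        k = min(costs, key=costs.get)
        return k, costs[k]

    smooth = [0, 1, 0, 0, 2, 1, 0] * 3   # low-entropy residual block: unary wins
    busy = [5, 9, 2, 14, 7, 11, 4] * 3   # high-activity block: larger k wins
    for block in (smooth, busy):
        print("chosen k=%d, %d bits for 21 samples" % best_code(block))
    ```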

  10. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    Science.gov (United States)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length and a corresponding decoding scheme is proposed. The RA-MLC scheme combines multilevel coded modulation technology with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out by a preset rule, then transmitted over a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances the performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduces the number of decoders by 72% and realizes 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.

  11. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases encoding complexity to improve coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.

  12. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    Science.gov (United States)

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

    The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in optically thin limits and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.

  13. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    Science.gov (United States)

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization-multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
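    A sketch of the control-plane decision: the monitored OSNR selects one of the three LDPC code rates. The thresholds and the hysteresis band (added here to avoid rate flapping on noisy measurements) are assumptions, not values from the experiment.

    ```python
    RATES = [(0.8, 21.0), (0.75, 19.0), (0.7, 0.0)]  # (code rate, assumed min OSNR dB)

    def pick_rate(osnr_db, current=None, hysteresis_db=0.5):
        """Map a monitored OSNR to a code rate; require extra margin before
        stepping up to a less protected (higher) rate."""
        for rate, threshold in RATES:
            margin = hysteresis_db if current is not None and rate > current else 0.0
            if osnr_db >= threshold + margin:
                return rate
        return RATES[-1][0]

    rate = None
    for osnr in (20.7, 21.2, 20.8, 18.5):   # sequence of monitor readings
        rate = pick_rate(osnr, rate)
        print(f"OSNR {osnr} dB -> code rate {rate}")
    ```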

  14. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  15. Adaptive discrete cosine transform coding algorithm for digital mammography

    Science.gov (United States)

    Baskurt, Atilla M.; Magnin, Isabelle E.; Goutte, Robert

    1992-09-01

    The need for storage, transmission, and archiving of medical images has led researchers to develop adaptive and efficient data compression techniques. Among medical images, x-ray radiographs of the breast are especially difficult to process because of their particularly low contrast and very fine structures. A block adaptive coding algorithm based on the discrete cosine transform to compress digitized mammograms is described. A homogeneous repartition of the degradation in the decoded images is obtained using a spatially adaptive threshold. This threshold depends on the coding error associated with each block of the image. The proposed method is tested on a limited number of pathological mammograms including opacities and microcalcifications. A comparative visual analysis is performed between the original and the decoded images. Finally, it is shown that data compression with rather high compression rates (11 to 26) is possible in the mammography field.

  16. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females. © 2013 Elsevier Ltd. All rights reserved.

  17. Satellite Media Broadcasting with Adaptive Coding and Modulation

    Directory of Open Access Journals (Sweden)

    Georgios Gardikis

    2009-01-01

    Full Text Available Adaptive Coding and Modulation (ACM) is a feature incorporated into the DVB-S2 satellite specification, allowing real-time adaptation of transmission parameters according to the link conditions. Although ACM was originally designed for optimizing unicast services, this article discusses the expansion of its usage to broadcasting streams as well. For this purpose, a general cross-layer adaptation approach is proposed, along with its realization into a fully functional experimental network, and test results are presented. Finally, two case studies are analysed, assessing the gain derived by ACM in a real large-scale deployment, involving HD services provision to two different geographical areas.

  18. Normalized value coding explains dynamic adaptation in the human valuation process.

    Science.gov (United States)

    Khaw, Mel W; Glimcher, Paul W; Louie, Kenway

    2017-11-28

    The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
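    The divisive normalization account admits a very small worked form: the effective value of the current item is divided by a constant plus a running average of recently observed values, so valuations fall as the recent context gets richer. Parameter names and values are illustrative assumptions.

    ```python
    def normalized_value(v, history, sigma=1.0, window=5):
        """Divisively normalize the current value by a semisaturation constant
        plus the mean of the recent value history."""
        recent = history[-window:]
        context = sum(recent) / len(recent) if recent else 0.0
        return v / (sigma + context)      # higher recent values -> lower valuation

    history = []
    for v in [2.0, 2.0, 10.0, 10.0, 2.0]:
        print(f"raw {v:>4} -> normalized {normalized_value(v, history):.2f}")
        history.append(v)
    ```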

  19. Adaptation of radiation shielding code to space environment

    International Nuclear Information System (INIS)

    Okuno, Koichi; Hara, Akihisa

    1992-01-01

    Recently, interest in the development of space has grown. Space development involves many problems, one of which is protection from cosmic rays. Cosmic rays are radiation of ultrahigh energy, and until now there has been no radiation shielding design code that copes with them. Therefore, the high energy radiation shielding design code for accelerators was improved to cope with the peculiarities of cosmic rays. Moreover, the radiation dose equivalent rate in a moon base protected against cosmic rays was simulated using the improved code. As an important countermeasure for radiation protection, covering with regolith is carried out, and the effect of regolith was confirmed using the improved code. Galactic cosmic rays, solar flare particles, the radiation belt, the adaptation of the radiation shielding code HERMES to the space environment, the improvement of the three-dimensional hadron cascade code HETCKFA-2 and the electromagnetic cascade code EGS4-KFA, and the cosmic ray simulation are reported. (K.I.)

  20. Adaptive under relaxation factor of MATRA code for the efficient whole core analysis

    International Nuclear Information System (INIS)

    Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H.

    2013-01-01

    Such nonlinearities are handled in the MATRA code using outer iteration with a Picard scheme. The Picard scheme involves successive updating of the coefficient matrix based on previously calculated values. The scheme is a simple and effective method for nonlinear problems, but its effectiveness greatly depends on the under-relaxation capability. Accuracy and speed of calculation depend very sensitively on the under-relaxation factor in the outer iteration updating the axial mass flow using the continuity equation. The under-relaxation factor in MATRA is generally a fixed, empirically determined value. Adapting the under-relaxation factor during the outer iteration is expected to improve the calculation effectiveness of the MATRA code compared with calculation using a fixed under-relaxation factor. The present study describes the implementation of adaptive under-relaxation within the subchannel code MATRA. Picard iterations with adaptive under-relaxation can accelerate the convergence for mass conservation in the subchannel code MATRA. The most efficient approach for adaptive under-relaxation appears to be very problem dependent
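    A generic sketch of Picard iteration with an adaptive under-relaxation factor, on a scalar fixed-point problem: the factor is grown while the residual shrinks and cut back when it grows. The update rule and limits are illustrative assumptions, not MATRA's actual scheme.

    ```python
    import math

    def picard(g, x0, omega=0.5, tol=1e-10, max_iter=200):
        """Under-relaxed Picard iteration for the fixed point x = g(x),
        adapting the relaxation factor from the residual history."""
        x, prev_res = x0, None
        for it in range(max_iter):
            x_new = g(x)
            res = abs(x_new - x)
            if prev_res is not None:
                # residual shrinking: relax less; residual growing: relax more
                omega = min(1.0, omega * 1.1) if res < prev_res else max(0.1, omega * 0.5)
            x = x + omega * (x_new - x)            # under-relaxed update
            prev_res = res
            if res < tol:
                return x, it, omega
        return x, max_iter, omega

    # Fixed point of g(x) = cos(x); plain Picard converges slowly, relaxation helps.
    print(picard(math.cos, 0.0))
    ```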

  1. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Full Text Available Almost all existing approaches to video coding exploit temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward-adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contours in images and motion trajectories in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than a full-search, quarter-pel block matching algorithm (BMA) without the need of transmitting any overhead.
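    A hedged one-dimensional sketch of the backward-adaptive idea: predictor weights are re-fit by least squares over a causal training window, so a decoder can derive the same weights without any transmitted overhead. A 1-D signal stands in for intensities along a motion trajectory; all sizes are illustrative.

    ```python
    import numpy as np

    def lsp_predict(signal, order=3, window=12):
        """Predict the next sample from the previous `order` samples, with
        weights fit by least squares over the last `window` samples (causal only)."""
        t = len(signal)
        rows = [signal[i - order:i] for i in range(t - window, t)]
        A, b = np.array(rows), np.array(signal[t - window:t])
        w, *_ = np.linalg.lstsq(A, b, rcond=None)   # normal-equation solve
        return float(np.dot(w, signal[-order:]))

    x = [float(10 + 0.9 * i) for i in range(30)]    # slowly varying "intensity"
    print("prediction:", lsp_predict(x), "actual next:", 10 + 0.9 * 30)
    ```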

  2. The FORTRAN NALAP code adapted to a microcomputer compiler

    International Nuclear Information System (INIS)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso

    2010-01-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA project (TEcnologia de Reatores Rapidos Avancados), Technology for Advanced Fast Reactors project, aimed at a space reactor application. In this work, to attend the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code in a PC was divided in some steps mainly to remove unnecessary routines, to eliminate old statements, to introduce new ones and also to include extension precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in literature: the normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after transient of loss of flow; and transients protected from overpower. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 80's. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the usage of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop - CBCL - to be used as a heat/electric converter. (author)

  3. The FORTRAN NALAP code adapted to a microcomputer compiler

    Energy Technology Data Exchange (ETDEWEB)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso, E-mail: plobo.a@uol.com.b, E-mail: eduardo@ieav.cta.b, E-mail: fbraz@ieav.cta.b, E-mail: guimarae@ieav.cta.b [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA project (TEcnologia de Reatores Rapidos Avancados), Technology for Advanced Fast Reactors project, aimed at a space reactor application. In this work, to attend the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code in a PC was divided in some steps mainly to remove unnecessary routines, to eliminate old statements, to introduce new ones and also to include extension precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in literature: the normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after transient of loss of flow; and transients protected from overpower. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 80's. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the usage of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop - CBCL - to be used as a heat/electric converter. (author)

  4. ICAN Computer Code Adapted for Building Materials

    Science.gov (United States)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  5. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  6. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  7. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower decoded image quality, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission. First, the direction of the edge in each block is detected using the Sobel operator. For the block with minimal size, an adaptive bit plane is utilized to optimize the BTC, based on the MSE loss when the block is encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
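
As a concrete illustration of the AMBTC step that the adaptive bit-plane decision above keys off, here is a minimal NumPy sketch; the quadtree partitioning, Sobel edge detection, and bit-plane optimization are omitted, and the sample block is invented for the example:

```python
import numpy as np

def ambtc_encode(block):
    """Absolute moment BTC: a block becomes (low mean, high mean, bit plane)."""
    mean = block.mean()
    bitplane = block >= mean
    hi = block[bitplane].mean() if bitplane.any() else mean
    lo = block[~bitplane].mean() if (~bitplane).any() else mean
    return lo, hi, bitplane

def ambtc_decode(lo, hi, bitplane):
    # reconstruct each pixel from the two means and the bit plane
    return np.where(bitplane, hi, lo)

block = np.array([[12, 200, 199, 14],
                  [10, 210, 205, 12],
                  [11, 198, 201, 13],
                  [12, 202, 197, 15]], dtype=float)
lo, hi, bp = ambtc_encode(block)
mse = ((block - ambtc_decode(lo, hi, bp)) ** 2).mean()
print(round(lo, 1), round(hi, 1), round(mse, 1))  # the adaptive scheme keys off this MSE
```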

  8. Adaption of the PARCS Code for Core Design Audit Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyong Chol; Lee, Young Jin; Uhm, Jae Beop; Kim, Hyunjik [Nuclear Safety Evaluation, Daejeon (Korea, Republic of); Jeong, Hun Young; Ahn, Seunghoon; Woo, Swengwoong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2013-05-15

The eigenvalue calculation also includes quasi-static core depletion analyses. PARCS implements a variety of features and has been qualified as a regulatory audit code in conjunction with other NRC thermal-hydraulic codes such as TRACE or RELAP5. In this study, as an adaptation effort for audit applications, PARCS is applied to an audit analysis of a reload core design. The lattice physics code HELIOS is used for cross-section generation. The PARCS-HELIOS code system has been established as a core analysis tool. Calculation results have been compared over a wide spectrum of quantities, such as power distribution, critical soluble boron concentration, and rod worth. A reasonable agreement between the audit calculation and the reference results has been found.

  9. Use of sensitivity-information for the adaptive simulation of thermo-hydraulic system codes

    International Nuclear Information System (INIS)

    Kerner, Alexander M.

    2011-01-01

Within the scope of this thesis, the development of methods for online adaptation of dynamical plant simulations of a thermal-hydraulic system code to measurement data is presented. The described approaches are mainly based on the use of sensitivity information in different areas: statistical sensitivity measures are used for the identification of the parameters to be adapted, and online sensitivities for the parameter adjustment itself. For the parameter adjustment, the method of a "system-adapted heuristic adaptation with partial separation" (SAHAT) was developed, which combines certain variants of parameter estimation and control with supporting procedures to solve the basic problems. The applicability of the methods is shown by adaptive simulations of a PKL-III experiment and by selected transients in a nuclear power plant. Finally, the main perspectives for the application of a tracking simulator to a system code are identified.

  10. Adaptive Space–Time Coding Using ARQ

    KAUST Repository

    Makki, Behrooz

    2015-09-01

    We study the energy-limited outage probability of the block space-time coding (STC)-based systems utilizing automatic repeat request (ARQ) feedback and adaptive power allocation. Taking the ARQ feedback costs into account, we derive closed-form solutions for the energy-limited optimal power allocation and investigate the diversity gain of different STC-ARQ schemes. In addition, sufficient conditions are derived for the usefulness of ARQ in terms of energy-limited outage probability. The results show that, for a large range of feedback costs, the energy efficiency is substantially improved by the combination of ARQ and STC techniques if optimal power allocation is utilized. © 2014 IEEE.
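
The power-split trade-off that the abstract optimizes in closed form can be illustrated with a toy Monte Carlo; the Rayleigh fading model, rate, and power values below are assumptions for the sketch, not the paper's system model:

```python
import numpy as np

rng = np.random.default_rng(0)

def outage(powers, rate=2.0, trials=200_000):
    # Chase-style ARQ over Rayleigh fading: each round adds received SNR;
    # outage occurs if no round's accumulated capacity reaches the rate.
    g = rng.exponential(1.0, size=(trials, len(powers)))
    snr = np.cumsum(np.asarray(powers) * g, axis=1)
    return 1.0 - (np.log2(1 + snr) >= rate).any(axis=1).mean()

total = 4.0
print(outage([total]))                 # single transmission
print(outage([total / 2, total / 2]))  # two ARQ rounds, equal power split
print(outage([1.0, 3.0]))              # unequal split, for comparison
```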

  11. An Adaptive Motion Estimation Scheme for Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2014-01-01

Full Text Available The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistics of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering prediction of both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are eliminated compared with the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised.
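
To make the block-matching setting concrete, here is a small diamond-pattern matcher in NumPy; it is a generic fast-ME baseline, not the UMHexagonS hybrid pattern or the proposed subregional scheme, and the test frames are synthetic:

```python
import numpy as np

def sad(a, b):
    # sum of absolute differences between two blocks
    return np.abs(a.astype(int) - b.astype(int)).sum()

def diamond_search(ref, cur, bx, by, bs=8, max_iter=32):
    """Greedy small-diamond search for the motion vector of one block."""
    h, w = ref.shape
    block = cur[by:by + bs, bx:bx + bs]
    mx = my = 0
    best = sad(ref[by:by + bs, bx:bx + bs], block)
    for _ in range(max_iter):
        moved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = bx + mx + dx, by + my + dy
            if 0 <= x <= w - bs and 0 <= y <= h - bs:
                cost = sad(ref[y:y + bs, x:x + bs], block)
                if cost < best:
                    best, mx, my, moved = cost, mx + dx, my + dy, True
        if not moved:
            break
    return mx, my, best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))  # global shift: true MV is (3, -2)
print(diamond_search(ref, cur, 24, 24))   # greedy search may stop early, as fast ME does
```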

  12. Multi-level trellis coded modulation and multi-stage decoding

    Science.gov (United States)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  13. On decoding of multi-level MPSK modulation codes

    Science.gov (United States)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD reduces the decoding complexity drastically while being suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  14. New adaptive differencing strategy in the PENTRAN 3-d parallel Sn code

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Haghighat, A.

    1996-01-01

It is known that three-dimensional (3-D) discrete ordinates (Sn) transport problems require an immense amount of storage and computational effort to solve. For this reason, parallel codes that offer a capability to completely decompose the angular, energy, and spatial domains among a distributed network of processors are required. One such code recently developed is PENTRAN, which iteratively solves 3-D multi-group, anisotropic Sn problems on distributed-memory platforms, such as the IBM-SP2. Because large problems typically contain several different material zones with various properties, available differencing schemes should automatically adapt to the transport physics in each material zone. To minimize the memory and message-passing overhead required for massively parallel Sn applications, available differencing schemes in an adaptive strategy should also offer reasonable accuracy and positivity, yet require only the zeroth spatial moment of the transport equation; differencing schemes based on higher spatial moments, in spite of their greater accuracy, require at least twice the amount of storage and communication cost for implementation in a massively parallel transport code. This paper discusses a new adaptive differencing strategy that uses increasingly accurate schemes with low parallel memory and communication overhead. This strategy, implemented in PENTRAN, includes a new scheme, exponential directional averaged (EDA) differencing.

  15. Control code for laboratory adaptive optics teaching system

    Science.gov (United States)

    Jin, Moonseob; Luder, Ryan; Sanchez, Lucas; Hart, Michael

    2017-09-01

    By sensing and compensating wavefront aberration, adaptive optics (AO) systems have proven themselves crucial in large astronomical telescopes, retinal imaging, and holographic coherent imaging. Commercial AO systems for laboratory use are now available in the market. One such is the ThorLabs AO kit built around a Boston Micromachines deformable mirror. However, there are limitations in applying these systems to research and pedagogical projects since the software is written with limited flexibility. In this paper, we describe a MATLAB-based software suite to interface with the ThorLabs AO kit by using the MATLAB Engine API and Visual Studio. The software is designed to offer complete access to the wavefront sensor data, through the various levels of processing, to the command signals to the deformable mirror and fast steering mirror. In this way, through a MATLAB GUI, an operator can experiment with every aspect of the AO system's functioning. This is particularly valuable for tests of new control algorithms as well as to support student engagement in an academic environment. We plan to make the code freely available to the community.

  16. Feature-level domain adaptation

    DEFF Research Database (Denmark)

    Kouw, Wouter M.; Van Der Maaten, Laurens J P; Krijthe, Jesse H.

    2016-01-01

Domain adaptation is the supervised learning setting in which the training and test data are sampled from different distributions: training data is sampled from a source domain, whilst test data is sampled from a target domain. This paper proposes and studies an approach, called feature-level domain adaptation (flda), that models the dependence between the two domains by means of a feature-level transfer model that is trained to describe the transfer from source to target domain. Subsequently, we train a domain-adapted classifier by minimizing the expected loss under the resulting transfer model. The transfer is modeled via a dropout distribution, which allows the classifier to adapt to differences in the marginal probability of features in the source and the target domain. Our experiments on several real-world problems show that flda performs on par with state-of-the-art domain-adaptation techniques.

  17. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    Science.gov (United States)

    Rost, Martin Christopher

    1988-01-01

Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it is developed more fully than the algorithms currently used in the literature and can be used to achieve more accurate bit assignments. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
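
The kind of bit-allocation rule the dissertation refines can be sketched with the classical high-resolution formula, where each coefficient receives the average rate plus half the log-ratio of its variance to the geometric mean of all variances; the iterative clipping of negative rates below is a standard device, not necessarily the author's exact algorithm:

```python
import numpy as np

def allocate_bits(variances, avg_rate):
    # b_i = R_avg + 0.5*log2(var_i / geometric_mean(var)), with negative
    # allocations clipped to zero and the budget redistributed.
    var = np.asarray(variances, dtype=float)
    total = avg_rate * len(var)
    active = np.ones(len(var), dtype=bool)
    while True:
        gm = np.exp(np.log(var[active]).mean())   # geometric mean of active variances
        b = np.zeros(len(var))
        b[active] = total / active.sum() + 0.5 * np.log2(var[active] / gm)
        if (b >= 0).all():
            return b
        active &= b > 0   # drop coefficients allocated negative rate, retry

print(allocate_bits([100.0, 50.0, 10.0, 1.0, 0.01], avg_rate=1.0))
```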

  18. Multi-stage decoding of multi-level modulation codes

    Science.gov (United States)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10(exp -6).

  19. Quadrature amplitude modulation from basics to adaptive trellis-coded turbo-equalised and space-time coded OFDM CDMA and MC-CDMA systems

    CERN Document Server

    Hanzo, Lajos

    2004-01-01

    "Now fully revised and updated, with more than 300 pages of new material, this new edition presents the wide range of recent developments in the field and places particular emphasis on the family of coded modulation aided OFDM and CDMA schemes. In addition, it also includes a fully revised chapter on adaptive modulation and a new chapter characterizing the design trade-offs of adaptive modulation and space-time coding." "In summary, this volume amalgamates a comprehensive textbook with a deep research monograph on the topic of QAM, ensuring it has a wide-ranging appeal for both senior undergraduate and postgraduate students as well as practicing engineers and researchers."--Jacket.

  20. Overall simulation of a HTGR plant with the gas adapted MANTA code

    International Nuclear Information System (INIS)

    Emmanuel Jouet; Dominique Petit; Robert Martin

    2005-01-01

Full text of publication follows: AREVA's subsidiary Framatome ANP is developing a Very High Temperature Reactor nuclear heat source that can be used for electricity generation as well as cogeneration, including hydrogen production. The selected product has an indirect-cycle architecture which is easily adapted to all possible uses of the nuclear heat source. The coupling to the applications is implemented through an Intermediate Heat Exchanger. The system code chosen to calculate the steady-state and transient behaviour of the plant is based on the MANTA code. The flexible and modular MANTA code, originally a system code for all non-LOCA PWR plant transients, has been the subject of new developments to simulate all the forced-convection transients of a nuclear plant with a gas-cooled High Temperature Reactor, including specific core thermal-hydraulic and neutronic models, gas and water-steam turbomachinery, and the control structure. The gas-adapted MANTA version is now able to model a complete HTGR plant with a direct Brayton cycle as well as indirect cycles. To validate these new developments, a MANTA model of a real plant with a direct Brayton cycle has been built, and steady states and transients have been compared with recorded thermal-hydraulic measurements. Finally, a comparison with the RELAP5 code has been made for transient calculations of the AREVA indirect-cycle HTR project plant. Moreover, to improve user-friendliness, so that MANTA can serve as a system design and optimization tool as well as a plant simulation tool, a Man-Machine Interface is available. Acronyms: MANTA Modular Advanced Neutronic and Thermal hydraulic Analysis; HTGR High Temperature Gas-Cooled Reactor. (authors)

  1. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    Science.gov (United States)

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive, polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency, owing to symbol-level instead of bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper presents, the proposed NB-LDPC-CM scheme addresses the needs of future OTNs better than its prior-art binary counterpart, achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  2. Analysis and Design of Adaptive OCDMA Passive Optical Networks

    Science.gov (United States)

    Hadi, Mohammad; Pakravan, Mohammad Reza

    2017-07-01

OCDMA systems can support multiple classes of service by differentiating code parameters, power level, and diversity order. In this paper, we analyze the BER performance of a multi-class 1D/2D OCDMA system and propose a new approximation method that generates an accurate estimate of the system BER in a simple mathematical form. The proposed approximation provides insight into proper system-level analysis and design, and into the sensitivity of system performance to factors such as code parameters, power level, and diversity order. Considering code design, code cardinality, and system performance constraints, two design problems are defined and their optimal solutions are provided. We then propose an adaptive OCDMA-PON that adaptively shares the unused resources of inactive users among active ones to improve upstream system performance. Using the approximated BER expression and the defined design problems, two adaptive code allocation algorithms for the adaptive OCDMA-PON are presented and their performance is evaluated by simulation. Simulation results show that the adaptive code allocation algorithms can increase the average transmission rate or decrease the average optical power consumption of ONUs for dynamic traffic patterns. According to the simulation results, for an adaptive OCDMA-PON with a BER value of 1e-7 and a user activity probability of 0.5, the transmission rate (optical power consumption) can be increased (decreased) by a factor of 2.25 (0.27) compared to fixed code assignment.

  3. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  4. Adaptation and perceptual norms

    Science.gov (United States)

    Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole

    2007-02-01

    We used adaptation to examine the relationship between perceptual norms--the stimuli observers describe as psychologically neutral, and response norms--the stimulus levels that leave visual sensitivity in a neutral or balanced state. Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.

  5. Decision-level adaptation in motion perception.

    Science.gov (United States)

    Mather, George; Sharman, Rebecca J

    2015-12-01

    Prolonged exposure to visual stimuli causes a bias in observers' responses to subsequent stimuli. Such adaptation-induced biases are usually explained in terms of changes in the relative activity of sensory neurons in the visual system which respond selectively to the properties of visual stimuli. However, the bias could also be due to a shift in the observer's criterion for selecting one response rather than the alternative; adaptation at the decision level of processing rather than the sensory level. We investigated whether adaptation to implied motion is best attributed to sensory-level or decision-level bias. Three experiments sought to isolate decision factors by changing the nature of the participants' task while keeping the sensory stimulus unchanged. Results showed that adaptation-induced bias in reported stimulus direction only occurred when the participants' task involved a directional judgement, and disappeared when adaptation was measured using a non-directional task (reporting where motion was present in the display, regardless of its direction). We conclude that adaptation to implied motion is due to decision-level bias, and that a propensity towards such biases may be widespread in sensory decision-making.

  6. Design and Analysis of Adaptive Message Coding on LDPC Decoder with Faulty Storage

    Directory of Open Access Journals (Sweden)

    Guangjun Ge

    2018-01-01

Full Text Available Unreliable message storage severely degrades the performance of LDPC decoders. This paper discusses the impact of message errors on LDPC decoders and schemes for improving their robustness. Firstly, we develop a discrete density evolution analysis for faulty LDPC decoders, which indicates that protecting the sign bits of messages is effective enough for finite-precision LDPC decoders. Secondly, we analyze the effects of quantization precision loss for static sign bit protection and propose an embedded dynamic coding scheme that adaptively employs the least significant bits (LSBs) to protect the sign bits. Thirdly, we give a construction of a Hamming product code for the adaptive coding and present low-complexity decoding algorithms. Theoretical analysis indicates that the proposed scheme outperforms the traditional triple modular redundancy (TMR) scheme in both decoding threshold and residual errors, while Monte Carlo simulations show that the performance loss is less than 0.2 dB when the storage error probability varies from 10-3 to 10-4.
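
As a rough illustration of sign-bit protection with a short block code, the sketch below encodes four sign bits with a Hamming(7,4) code and corrects a single storage upset; the paper's actual construction is a Hamming product code with its own low-complexity decoders, which this does not reproduce:

```python
import numpy as np

# Systematic generator and parity-check matrices for Hamming(7,4),
# used here to protect the sign bits of stored decoder messages
# (magnitude bits are left unprotected, as in static sign protection).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return data4 @ G % 2

def decode(recv7):
    s = H @ recv7 % 2
    if s.any():  # nonzero syndrome: flip the column of H matching it
        j = next(i for i in range(7) if (H[:, i] == s).all())
        recv7 = recv7.copy()
        recv7[j] ^= 1
    return recv7[:4]

signs = np.array([1, 0, 0, 1])  # sign bits of four stored messages
word = encode(signs)
word[5] ^= 1                    # one storage upset
assert (decode(word) == signs).all()
```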

  7. Two-Level Semantics and Code Generation

    DEFF Research Database (Denmark)

    Nielson, Flemming; Nielson, Hanne Riis

    1988-01-01

A two-level denotational metalanguage that is suitable for defining the semantics of Pascal-like languages is presented. The two levels allow for an explicit distinction between computations taking place at compile-time and computations taking place at run-time. While this distinction is perhaps not absolutely necessary for describing the input-output semantics of programming languages, it is necessary when issues such as data flow analysis and code generation are considered. For an example stack-machine, the authors show how to generate code for the run-time computations and still perform the compile-time computations.

  8. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher-quality background as the reference, whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, with only a slight additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
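
A minimal sketch of the two prediction modes, assuming a running-average background model and ignoring motion compensation and the third block category; the threshold and update rate are illustrative, not the BMAP classifier:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    # running-average background model: one simple way to build the
    # long-term reference that the BRP mode relies on
    return (1 - alpha) * bg + alpha * frame

def choose_prediction(block, bg_block, prev_block, thresh=8.0):
    if np.abs(block - bg_block).mean() < thresh:
        return "BRP", block - bg_block  # background block: predict from background
    # hybrid block: predict in the background-difference domain
    return "BDP", (block - bg_block) - (prev_block - bg_block)

bg = update_background(np.zeros((8, 8)), np.full((8, 8), 100.0))
mode, residual = choose_prediction(np.full((8, 8), 100.0), bg, np.full((8, 8), 100.0))
print(mode, residual.mean())
```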

  9. CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    Science.gov (United States)

    Anninos, Peter; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Lau, Cheuk; Nemergut, Daniel

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge-Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.
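
Of the regularization machinery listed above, the slope-limiting step is the easiest to isolate; below is the classic minmod limiter as a stand-in (CosmosDG's actual procedures also include entropy-based artificial viscosity, which this does not model):

```python
import numpy as np

def minmod(a, b):
    # pick the smaller-magnitude slope when the signs agree, else zero,
    # so no new extrema are introduced near discontinuities
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

u = np.array([0.0, 1.0, 2.0, 4.0, 4.0])  # cell averages with a kink at the end
d = np.diff(u)
print(minmod(d[1:], d[:-1]))             # limited slopes: [1. 1. 0.]
```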

  10. CosmosDG: An hp -adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    Energy Technology Data Exchange (ETDEWEB)

    Anninos, Peter; Lau, Cheuk [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94550 (United States); Bryant, Colton [Department of Engineering Sciences and Applied Mathematics, Northwestern University, 2145 Sheridan Road, Evanston, Illinois, 60208 (United States); Fragile, P. Chris [Department of Physics and Astronomy, College of Charleston, 66 George Street, Charleston, SC 29424 (United States); Holgado, A. Miguel [Department of Astronomy and National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, Illinois, 61801 (United States); Nemergut, Daniel [Operations and Engineering Division, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  11. CosmosDG: An hp -adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    International Nuclear Information System (INIS)

    Anninos, Peter; Lau, Cheuk; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Nemergut, Daniel

    2017-01-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  12. Adaptive Multi-Layered Space-Time Block Coded Systems in Wireless Environments

    KAUST Repository

    Al-Ghadhban, Samir

    2014-12-23

© 2014, Springer Science+Business Media New York. Multi-layered space-time block coded systems (MLSTBC) strike a balance between spatial multiplexing and transmit diversity. In this paper, we analyze the block error rate performance of MLSTBC. In addition, we propose adaptive MLSTBC schemes that are capable of accommodating the channel signal-to-noise ratio variation of wireless systems by near-instantaneously adapting the uplink transmission configuration. The main results demonstrate that significant effective throughput improvements can be achieved while maintaining a certain target bit error rate.

  13. Block-based wavelet transform coding of mammograms with region-adaptive quantization

    Science.gov (United States)

    Moon, Nam Su; Song, Jun S.; Kwon, Musik; Kim, JongHyo; Lee, ChoongWoong

    1998-06-01

Combining segmentation with a lossy compression scheme is an efficient way to achieve both a high compression ratio and information preservation. Microcalcifications in mammograms are among the most significant signs of early-stage breast cancer. Therefore, in coding, detection and segmentation of microcalcifications enable us to preserve them well by allocating more bits to them than to other regions. Segmentation of microcalcifications is performed both in the spatial domain and in the wavelet transform domain. A peak-error-controllable quantization step, designed off-line, is suitable for medical image compression. For region-adaptive quantization, block-based wavelet transform coding is adopted, and different peak-error-constrained quantizers are applied to blocks according to the segmentation result. In view of the preservation of microcalcifications, the proposed coding scheme shows better performance than JPEG.
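
The region-adaptive idea reduces to switching the quantizer step per block; a minimal sketch, with invented step sizes standing in for the off-line designed peak-error-constrained quantizers:

```python
import numpy as np

def quantize_block(coeffs, step):
    # uniform quantization; peak reconstruction error is bounded by step/2
    return np.round(coeffs / step) * step

def region_adaptive(coeffs, is_microcalc):
    step = 2.0 if is_microcalc else 16.0  # finer step where detail must survive
    return quantize_block(coeffs, step)

block = np.random.default_rng(0).normal(0.0, 20.0, (8, 8))
rec = region_adaptive(block, is_microcalc=True)
print(np.abs(block - rec).max())          # <= 1.0, i.e. step/2
```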

  14. Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping

    NARCIS (Netherlands)

    Á. Piedrafita (Álvaro); J.M. Renes (Joseph)

    2017-01-01

    textabstractWe construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve

  15. Multi-stage decoding for multi-level block modulation codes

    Science.gov (United States)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of dB loss in SNR at the probability of an incorrect decoding for a block of 10(exp -6). Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds, bandwidth efficiency, coding gain, and decoding complexity.

  16. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  17. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders...... are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  18. Lossy/lossless coding of bi-level images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1997-01-01

Summary form only given. We present improvements to a general type of lossless, lossy, and refinement coding of bi-level images (Martins and Forchhammer, 1996). Loss is introduced by flipping pixels. The pixels are coded using arithmetic coding of conditional probabilities obtained using a template, as is known from JBIG and proposed in JBIG-2 (Martins and Forchhammer). Our new state-of-the-art results are obtained using the more general free tree instead of a template. Also, we introduce multiple refinement template coding. The lossy algorithm is analogous to the greedy `rate...

  19. Enhanced attention amplifies face adaptation.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby

    2011-08-15

    Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-09-01

Full Text Available The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI, the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time adaptive manner, exploiting the information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, a comparison between traditional coded apertures and GCA is carried out with respect to noise tolerance.
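
The adaptive rule can be caricatured as attenuating aperture transmittance wherever the previous snapshot saturated; this toy ignores the spectral dispersion that couples aperture and FPA positions in a real CASSI, and all values are invented:

```python
import numpy as np

def adapt_aperture(aperture, fpa, sat_level, gain=0.5):
    # darken aperture cells whose measurements saturated, raising the
    # usable dynamic range on the next snapshot
    ap = aperture.copy()
    ap[fpa >= sat_level] *= gain
    return np.clip(ap, 0.0, 1.0)

aperture = np.full((4, 4), 1.0)
fpa = np.array([[100, 255, 255, 80]] * 4)
print(adapt_aperture(aperture, fpa, sat_level=255))
```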

  1. An adaptive mode-driven spatiotemporal motion vector prediction for wavelet video coding

    Science.gov (United States)

    Zhao, Fan; Liu, Guizhong; Qi, Yong

    2010-07-01

Three-dimensional subband/wavelet codecs use 5/3 filters rather than Haar filters for motion-compensated temporal filtering (MCTF) to improve the coding gain. In order to curb the increased motion vector rate, an adaptive motion-mode-driven spatiotemporal motion vector prediction (AMDST-MVP) scheme is proposed. First, by making use of the direction histograms of the four motion vector fields resulting from the initial spatial motion vector prediction (S-MVP), the motion mode of the current GOP is determined according to whether fast or complex motion exists in the current GOP. The GOP-level MVP scheme is thereby determined by either S-MVP or AMDST-MVP, namely the combination of S-MVP and temporal MVP (T-MVP). If the latter is adopted, the motion vector difference (MVD) between the neighboring MV fields and the MV of the current block resulting from S-MVP is employed to decide whether or not the MV of the co-located block in the previous frame is used for predicting the current block. Experimental results show that AMDST-MVP not only improves the coding efficiency but also reduces the computational complexity.

  2. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  3. Variable Rate, Adaptive Transform Tree Coding Of Images

    Science.gov (United States)

    Pearlman, William A.

    1988-10-01

    A tree code, asymptotically optimal for stationary Gaussian sources and squared error distortion [2], is used to encode transforms of image sub-blocks. The variance spectrum of each sub-block is estimated and specified uniquely by a set of one-dimensional auto-regressive parameters. The expected distortion is set to a constant for each block and the rate is allowed to vary to meet the given level of distortion. Since the spectrum and rate are different for every block, the code tree differs for every block. Coding simulations for target block distortion of 15 and average block rate of 0.99 bits per pel (bpp) show that very good results can be obtained at high search intensities at the expense of high computational complexity. The results at the higher search intensities outperform a parallel simulation with quantization replacing tree coding. Comparative coding simulations also show that the reproduced image with variable block rate and average rate of 0.99 bpp has 2.5 dB less distortion than a similarly reproduced image with a constant block rate equal to 1.0 bpp.

  4. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    International Nuclear Information System (INIS)

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  5. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
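
For intuition about the optimization, the toy below enumerates MCS assignments for three SVC layers with equal time shares; the rates and per-MCS user coverage are invented numbers, and brute-force search stands in for the paper's ILP formulation:

```python
from itertools import product

rates = [1.0, 1.5, 2.0, 3.0]     # bits/symbol for each MCS (assumed)
coverage = [1.0, 0.8, 0.5, 0.2]  # fraction of users decoding each MCS (assumed)

def throughput(assign, shares):
    # a user benefits from layer l only if it decodes layers 0..l,
    # so the effective coverage is the running minimum over layers
    total, cov = 0.0, 1.0
    for l, m in enumerate(assign):
        cov = min(cov, coverage[m])
        total += shares[l] * rates[m] * cov
    return total

shares = [1.0 / 3] * 3
best = max(product(range(4), repeat=3), key=lambda a: throughput(a, shares))
print(best, round(throughput(best, shares), 3))
```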

  6. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dongyul Lee

    2014-01-01

Full Text Available The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  7. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

Siret, Nicolas; Wipliez, Matthieu; Nezan, Jean François; Palumbo, Francesca

    2012-01-01

High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulties of compiling input imperative languages into efficient hardware code. Moreover, the hardware code generated by HLS tools is usually target-dependent and at a low level of abstraction (i.e. gate-level...

  8. SYMBOL LEVEL DECODING FOR DUO-BINARY TURBO CODES

    Directory of Open Access Journals (Sweden)

    Yogesh Beeharry

    2017-05-01

    Full Text Available This paper investigates the performance of three different symbol level decoding algorithms for Duo-Binary Turbo codes. Explicit details of the computations involved in the three decoding techniques, and a computational complexity analysis are given. Simulation results with different couple lengths, code-rates, and QPSK modulation reveal that the symbol level decoding with bit-level information outperforms the symbol level decoding by 0.1 dB on average in the error floor region. Moreover, a complexity analysis reveals that symbol level decoding with bit-level information reduces the decoding complexity by 19.6 % in terms of the total number of computations required for each half-iteration as compared to symbol level decoding.

  9. A study on climatic adaptation of dipteran mitochondrial protein coding genes

    Directory of Open Access Journals (Sweden)

    Debajyoti Kabiraj

    2017-10-01

Full Text Available Diptera, the true flies, are frequently found in nature, and their habitats span the entire world, including Antarctica and the Polar Regions. The number of documented dipteran species is quite high, thought to represent 14% of all animals on Earth [1]. Most studies in Diptera have focused on taxa of economic and medical importance, such as the fruit flies Ceratitis capitata and Bactrocera spp. (Tephritidae), which are serious agricultural pests; the blowflies (Calliphoridae) and oestrid flies (Oestridae), which can cause myiasis; the Anopheles mosquitoes (Culicidae), which are the vectors of malaria; and the leaf-miners (Agromyzidae), vegetable and horticultural pests [2]. The insect mitochondrion consists of 13 protein-coding genes, 22 tRNAs and 2 rRNAs; as the remnant of an alpha-proteobacterium, it carries out the simultaneous functions of energy production and thermoregulation of the cell through a bi-genomic system, so different adaptability to different climatic conditions might have been compensated by complementary changes in both genomes [3,4]. In this study we collected the complete mitochondrial genomes and occurrence data of one hundred thirteen dipteran insects from different databases and a literature survey. Our understanding of the genetic basis of climatic adaptation in Diptera is limited to basic information on the occurrence locations of these species and the mitogenetic factors underlying changes in conspicuous phenotypes. To examine this hypothesis, we performed nucleotide substitution analyses for the 13 mitochondrial protein-coding genes, individually and combined, for monophyletic as well as paraphyletic groups of dipteran species, using different software. Moreover, we calculated the codon adaptation index for all dipteran mitochondrial protein-coding genes. Following this work, we classified our sample organisms according to their location data from GBIF (https

  10. Generating Safety-Critical PLC Code From a High-Level Application Software Specification

    Science.gov (United States)

    2008-01-01

The benefits of automatic-application code generation are widely accepted within the software engineering community. These benefits include raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off-the-shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is
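
The translation idea can be sketched in a few lines; the row format, signal names, and emitted IEC structured text below are invented for illustration and bear no relation to the actual LCS tabular spec or its ladder-logic output:

```python
# hypothetical one-row-per-step "tabular spec" (invented format)
spec_rows = [
    {"step": 1, "when": "tank_pressure > 950", "do": "vent_valve := TRUE"},
    {"step": 2, "when": "tank_pressure <= 900", "do": "vent_valve := FALSE"},
]

def generate_st(rows):
    # emit IEC 61131-3 structured text, one IF block per spec row
    lines = ["(* auto-generated from tabular spec; do not edit by hand *)"]
    for r in rows:
        lines += [f"(* step {r['step']} *)",
                  f"IF {r['when']} THEN",
                  f"    {r['do']};",
                  "END_IF;"]
    return "\n".join(lines)

print(generate_st(spec_rows))
```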

  11. Intrinsic gain modulation and adaptive neural coding.

    Directory of Open Access Journals (Sweden)

    Sungho Hong

    2008-07-01

    Full Text Available In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate versus current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive relationships connecting the change of the gain with respect to both mean and variance to the receptive fields derived from reverse correlation on a white noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that coding properties of both these models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.
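
    To make the sampling idea concrete, the following sketch (illustrative parameters, not the paper's conductance-based models) simulates a fixed linear/nonlinear neuron and re-estimates its gain curve at two stimulus variances. Because the filter output is measured in units of its own standard deviation, as reverse correlation on a white-noise stimulus would return it, the sampled gain curve changes with the variance even though the underlying system is fixed.

        import numpy as np

        rng = np.random.default_rng(0)

        def sampled_gain_curve(sigma, n=100000):
            # Linear stage: a fixed exponential receptive field applied to white noise.
            t = np.arange(100) * 1e-3
            k = np.exp(-t / 0.02)
            k /= np.linalg.norm(k)
            g = np.convolve(rng.normal(0.0, sigma, n), k, mode="full")[:n]
            # Nonlinear stage: a fixed sigmoidal gain curve sets spiking probability.
            p = 1.0 / (1.0 + np.exp(-(g - 1.0) / 0.3))
            spikes = rng.random(n) < p
            # Empirical gain curve, with the filter output normalized by its SD.
            z = g / sigma
            edges = np.linspace(-3, 3, 25)
            idx = np.digitize(z, edges) - 1
            centers = 0.5 * (edges[:-1] + edges[1:])
            rate = np.array([spikes[idx == i].mean() if np.any(idx == i) else np.nan
                             for i in range(len(centers))])
            return centers, rate

        for sigma in (0.5, 2.0):
            z, r = sampled_gain_curve(sigma)
            print(f"sigma={sigma}: peak slope of sampled gain curve "
                  f"{np.nanmax(np.gradient(r, z)):.2f}")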

  12. Adaptation of Zerotrees Using Signed Binary Digit Representations for 3D Image Coding

    Directory of Open Access Journals (Sweden)

    Mailhes Corinne

    2007-01-01

    Full Text Available Zerotrees of wavelet coefficients have shown good adaptability for the compression of three-dimensional images. EZW, the original zerotree algorithm, shows good performance and was successfully adapted to 3D image compression. This paper focuses on the adaptation of EZW for the compression of hyperspectral images. The subordinate pass is suppressed to remove the necessity of keeping the significant pixels in memory. To compensate for the loss due to this removal, signed binary digit representations are used to increase the efficiency of zerotrees. Contextual arithmetic coding with very limited contexts is also used. Finally, we show that this simplified version of 3D-EZW performs almost as well as the original one.
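
    As a concrete illustration of the signed binary digit idea, the sketch below computes the non-adjacent form (NAF), a standard signed-digit representation with digits {-1, 0, +1}. The exact mapping used in the modified 3D-EZW may differ, but the benefit is the same: fewer nonzero digits than plain binary, hence more zero symbols for the zerotree coder.

        def to_naf(n):
            # Non-adjacent form: signed digits in {-1, 0, +1}, least significant
            # first, with no two adjacent nonzero digits.
            digits = []
            while n != 0:
                if n % 2:
                    d = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
                    n -= d
                else:
                    d = 0
                digits.append(d)
                n //= 2
            return digits

        for n in (7, 14, 255):
            naf = to_naf(n)
            assert sum(d * 2**i for i, d in enumerate(naf)) == n
            print(n, naf, "nonzero digits:", sum(d != 0 for d in naf))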

  13. Adapting the coping in deliberation (CODE) framework: a multi-method approach in the context of familial ovarian cancer risk management.

    Science.gov (United States)

    Witt, Jana; Elwyn, Glyn; Wood, Fiona; Rogers, Mark T; Menon, Usha; Brain, Kate

    2014-11-01

    To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. We performed a systematic literature search to identify issues important to women during deliberations about RRSO. Three focus groups with patients (most were pre-menopausal and untested for genetic mutations) and 11 interviews with health professionals were conducted to determine which issues mattered in the UK context. Data were used to adapt the generic CODE framework. The literature search yielded 49 relevant studies, which highlighted various issues and coping options important during deliberations, including mutation status, risks of surgery, family obligations, physician recommendation, peer support and reliable information sources. Consultations with UK stakeholders confirmed most of these factors as pertinent influences on deliberations. Questions in the generic framework were adapted to reflect the issues and coping options identified. The generic CODE framework was readily adapted to a specific preference-sensitive medical decision, showing that coping and deliberation are linked during decisions about RRSO. Adapted versions of the CODE framework may be used to develop tailored decision support methods and materials in order to improve patient-centred care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Molecular adaptation during adaptive radiation in the Hawaiian endemic genus Schiedea.

    Directory of Open Access Journals (Sweden)

    Maxim V Kapralov

    2006-12-01

    Full Text Available "Explosive" adaptive radiations on islands remain one of the most puzzling evolutionary phenomena. The rate of phenotypic and ecological adaptations is extremely fast during such events, suggesting that many genes may be under fairly strong selection. However, no evidence for adaptation at the level of protein coding genes was found, so it has been suggested that selection may work mainly on regulatory elements. Here we report the first evidence that positive selection does operate at the level of protein coding genes during rapid adaptive radiations. We studied molecular adaptation in Hawaiian endemic plant genus Schiedea (Caryophyllaceae, which includes closely related species with a striking range of morphological and ecological forms, varying from rainforest vines to woody shrubs growing in desert-like conditions on cliffs. Given the remarkable difference in photosynthetic performance between Schiedea species from different habitats, we focused on the "photosynthetic" Rubisco enzyme, the efficiency of which is known to be a limiting step in plant photosynthesis.We demonstrate that the chloroplast rbcL gene, encoding the large subunit of Rubisco enzyme, evolved under strong positive selection in Schiedea. Adaptive amino acid changes occurred in functionally important regions of Rubisco that interact with Rubisco activase, a chaperone which promotes and maintains the catalytic activity of Rubisco. Interestingly, positive selection acting on the rbcL might have caused favorable cytotypes to spread across several Schiedea species.We report the first evidence for adaptive changes at the DNA and protein sequence level that may have been associated with the evolution of photosynthetic performance and colonization of new habitats during a recent adaptive radiation in an island plant genus. This illustrates how small changes at the molecular level may change ecological species performance and helps us to understand the molecular bases of extremely

  15. Evidence of translation efficiency adaptation of the coding regions of the bacteriophage lambda.

    Science.gov (United States)

    Goz, Eli; Mioduser, Oriah; Diament, Alon; Tuller, Tamir

    2017-08-01

    Deciphering the way gene expression regulatory aspects are encoded in viral genomes is a challenging mission with ramifications for all biomedical disciplines. Here, we aimed to understand how evolution shapes the bacteriophage lambda genes by performing a high-resolution analysis of ribosomal profiling data and gene expression related synonymous/silent information encoded in bacteriophage coding regions. We demonstrated evidence of selection for distinct compositions of synonymous codons in early and late viral genes related to the adaptation of translation efficiency to different bacteriophage developmental stages. Specifically, we showed that evolution of viral coding regions is driven, among others, by selection for codons with higher decoding rates; during the initial/progressive stages of infection the decoding rates in early/late genes were found to be superior to those in late/early genes, respectively. Moreover, we argued that selection for translation efficiency could be partially explained by adaptation to the Escherichia coli tRNA pool and the fact that it can change during the bacteriophage life cycle. An analysis of additional aspects related to the expression of viral genes, such as mRNA folding and more complex/longer regulatory signals in the coding regions, is also reported. The reported conclusions are likely to be relevant also to additional viruses. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  16. Analysis of ASTEC code adaptability to severe accident simulation for CANDU type reactors

    International Nuclear Information System (INIS)

    Constantin, Marin; Rizoiu, Andrei

    2008-01-01

    In order to prepare the adaptation of the ASTEC code to CANDU NPP severe accident analysis, two kinds of activities were performed: - analyses of the ASTEC modules from the point of view of models and options, followed by CANDU exploratory calculations for the appropriate modules/models; - preparation of the specifications for ASTEC adaptation to CANDU NPPs. The paper is structured in three parts: - a comparison of PWR and CANDU concepts (from the point of view of severe accident phenomena); - exploratory calculations with some ASTEC modules - SOPHAEROS, CPA, IODE, CESAR, DIVA - for problems specific to CANDU-type reactors; - an analysis of development needs: algorithms, methods, modules. (authors)

  17. Adaptive Iterative Soft-Input Soft-Output Parallel Decision-Feedback Detectors for Asynchronous Coded DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Zhang Wei

    2005-01-01

    Full Text Available The optimum and many suboptimum iterative soft-input soft-output (SISO) multiuser detectors require a priori information about the multiuser system, such as the users' transmitted signature waveforms, relative delays, as well as the channel impulse response. In this paper, we employ adaptive algorithms in the SISO multiuser detector in order to avoid the need for this a priori information. First, we derive the optimum SISO parallel decision-feedback detector for asynchronous coded DS-CDMA systems. Then, we propose two adaptive versions of this SISO detector, which are based on the normalized least mean square (NLMS) and recursive least squares (RLS) algorithms. Our SISO adaptive detectors effectively exploit the a priori information of coded symbols, whose soft inputs are obtained from a bank of single-user decoders. Furthermore, we consider how to select practical finite feedforward and feedback filter lengths to obtain a good tradeoff between the performance and computational complexity of the receiver.
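
    The NLMS update at the heart of such an adaptive detector can be sketched in a few lines; this is the generic filter on a toy system-identification task, not the SISO decision-feedback structure of the paper.

        import numpy as np

        def nlms(x, d, taps, mu=0.5, eps=1e-8):
            # Adapt filter w so that w . u[n] tracks the desired signal d[n].
            w = np.zeros(taps)
            for n in range(taps - 1, len(d)):
                u = x[n - taps + 1:n + 1][::-1]      # most recent samples first
                e = d[n] - w @ u                     # a-priori error drives the update
                w += mu * e * u / (eps + u @ u)      # step normalized by input power
            return w

        rng = np.random.default_rng(1)
        h = np.array([0.6, -0.3, 0.1])               # unknown channel to identify
        x = rng.normal(size=5000)
        d = np.convolve(x, h, mode="full")[:len(x)]  # observed "desired" output
        print(np.round(nlms(x, d, taps=3), 3))       # converges towards h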

  18. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    Science.gov (United States)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration by selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Bulirsch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point t_n a new step size h_{n+1} = h(t_n; δ), so that h(t; δ) is a continuous function of t. In this paper, a study of the tolerance proportionality property is carried out under a discontinuous step-size policy that does not allow the size of the step to change if the step-size ratio between two consecutive steps is close to unity. This theory is applied to obtain global error estimates in a few problems that have been solved with
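
    A minimal sketch of the step-size policy under discussion, with illustrative constants: the classical tolerance-proportional controller, plus the discontinuous rule that freezes the step when the proposed ratio is close to unity.

        def next_step(h, err, tol, order):
            # Classical controller: h_new = h * (tol/err)^(1/(order+1)).
            fac = 0.9 * (tol / err) ** (1.0 / (order + 1))
            fac = min(5.0, max(0.2, fac))   # limit growth and shrinkage
            if 0.9 <= fac <= 1.1:
                return h                    # ratio near unity: keep the step fixed
            return h * fac

        print(next_step(0.01, err=8e-7, tol=1e-6, order=4))  # frozen: 0.01
        print(next_step(0.01, err=2e-6, tol=1e-6, order=4))  # shrunk: ~0.0078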

  19. Research on coding and decoding method for digital levels

    Energy Technology Data Exchange (ETDEWEB)

    Tu Lifen; Zhong Sidong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in a digital image signal, the contradiction that the field of view and image resolution restrict each other in a digital level measurement is overcome, and the geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm when the measuring range is between 2 m and 100 m, which can meet practical needs.

  20. Research on coding and decoding method for digital levels.

    Science.gov (United States)

    Tu, Li-fen; Zhong, Si-dong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in a digital image signal, the contradiction that the field of view and image resolution restrict each other in a digital level measurement is overcome, and the geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm when the measuring range is between 2 m and 100 m, which can meet practical needs.

  1. Partial Safety Factors and Target Reliability Level in Danish Structural Codes

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hansen, J. O.; Nielsen, T. A.

    2001-01-01

    The partial safety factors in the newly revised Danish structural codes have been derived using a reliability-based calibration. The calibrated partial safety factors result in the same average reliability level as in the previous codes, but a much more uniform reliability level has been obtained. The paper describes the code format, the stochastic models and the resulting optimised partial safety factors.

  2. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    Science.gov (United States)

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but it has not been clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal that code is. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum to be easily determined, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal fitness landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not an area of a localized deep minimum of the huge fitness landscape.
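
    For readers unfamiliar with the technique, a minimal sketch of Goldberg-style fitness sharing follows (with an illustrative Euclidean distance and parameters; the paper applies the technique to hypothetical genetic codes): each raw fitness is divided by a niche count, so individuals crowded into one region of the landscape are penalized.

        import numpy as np

        def shared_fitness(fitness, genomes, sigma=2.0, alpha=1.0):
            g = np.asarray(genomes, dtype=float)
            # Pairwise distances between individuals.
            d = np.linalg.norm(g[:, None, :] - g[None, :, :], axis=-1)
            # Triangular sharing function: 1 at distance 0, 0 beyond sigma.
            sh = np.where(d < sigma, 1.0 - (d / sigma) ** alpha, 0.0)
            # Niche count (>= 1, since each individual shares with itself).
            return np.asarray(fitness, dtype=float) / sh.sum(axis=1)

        # Two individuals on the same peak split its fitness between them:
        print(shared_fitness([10.0, 10.0, 8.0], [[0, 0], [0, 0], [5, 5]]))
        # -> [5. 5. 8.]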

  3. Supporting Dynamic Adaptive Streaming over HTTP in Wireless Meshed Networks using Random Linear Network Coding

    DEFF Research Database (Denmark)

    Hundebøll, Martin; Pedersen, Morten Videbæk; Roetter, Daniel Enrique Lucani

    2014-01-01

    This work studies the potential and impact of the FRANC network coding protocol for delivering high quality Dynamic Adaptive Streaming over HTTP (DASH) in wireless networks. Although DASH aims to tailor the video quality rate based on the available throughput to the destination, it relies...

  4. Coding and decoding with adapting neurons: a population approach to the peri-stimulus time histogram.

    Science.gov (United States)

    Naud, Richard; Gerstner, Wulfram

    2012-01-01

    The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a 'quasi-renewal equation' which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
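
    Since the PSTH is the central object here, a minimal computation of it from repeated trials may help; the bin width and spike times below are illustrative.

        import numpy as np

        def psth(spike_trains, t_max, bin_width=0.005):
            # Trial-averaged firing rate (spikes/s) in fixed time bins.
            edges = np.arange(0.0, t_max + bin_width, bin_width)
            counts = sum(np.histogram(train, bins=edges)[0] for train in spike_trains)
            return edges[:-1], counts / (len(spike_trains) * bin_width)

        trials = [np.array([0.012, 0.034, 0.051]), np.array([0.011, 0.030])]
        t, rate = psth(trials, t_max=0.06, bin_width=0.01)
        print(np.round(rate))   # -> [  0. 100.   0. 100.   0.  50.]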

  5. An Automatic Instruction-Level Parallelization of Machine Code

    Directory of Open Access Journals (Sweden)

    MARINKOVIC, V.

    2018-02-01

    Full Text Available Prevailing multicores and novel manycores have made parallelization of embedded software, which is still largely written as sequential code, a great challenge of the modern day. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level as well as on the validation of this approach. A novel instruction-level parallelization algorithm for assembly code is developed; it uses the register names after SSA conversion to find independent blocks of code and then schedules the independent blocks using METIS to achieve good load balance. The sequential consistency is verified and the validation is done by measuring the program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g., MIPS, MicroBlaze, etc.). In particular, for 16 cores, the average speedup is 7.92x, while in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as the basis for further optimizations, as the back-end of a compiler, or as the code parallelization tool for an embedded system.
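
    The load-balancing step can be illustrated with a simple greedy scheduler; this is a stand-in sketch, not the METIS-based partitioning the authors use.

        def schedule_blocks(block_costs, n_cores):
            # Longest-processing-time greedy: place each block, heaviest first,
            # on the currently least-loaded core.
            loads = [0.0] * n_cores
            assignment = [[] for _ in range(n_cores)]
            for block, cost in sorted(enumerate(block_costs), key=lambda bc: -bc[1]):
                core = min(range(n_cores), key=loads.__getitem__)
                loads[core] += cost
                assignment[core].append(block)
            return assignment, loads

        print(schedule_blocks([7, 3, 5, 2, 6, 4], n_cores=2))
        # -> ([[0, 5, 1], [4, 2, 3]], [14.0, 13.0])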

  6. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1999-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard, the conditioning may be specified by a template. For better compression, the more general ... to the specialized soft pattern matching techniques, which work better for text. Template-based refinement coding is applied for lossy-to-lossless refinement. Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bi-level images.
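
    To make the template conditioning concrete, here is a small sketch (with a 4-pixel causal template; JBIG's templates are larger) of how each pixel's context is formed from already-coded neighbours and how per-context probabilities, the quantities an arithmetic coder would consume, can be accumulated.

        import numpy as np

        def template_context(img, r, c,
                             template=((-1, -1), (-1, 0), (-1, 1), (0, -1))):
            # Pack the already-coded neighbour pixels into one context number.
            ctx = 0
            for dr, dc in template:
                rr, cc = r + dr, c + dc
                inside = 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                ctx = (ctx << 1) | (int(img[rr, cc]) if inside else 0)
            return ctx

        def context_model(img):
            # Laplace-smoothed conditional probabilities P(pixel | context).
            counts = np.ones((16, 2))
            for r in range(img.shape[0]):
                for c in range(img.shape[1]):
                    counts[template_context(img, r, c), int(img[r, c])] += 1
            return counts / counts.sum(axis=1, keepdims=True)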

  7. Information preserving coding for multispectral data

    Science.gov (United States)

    Duan, J. R.; Wintz, P. A.

    1973-01-01

    A general formulation of the data compression system is presented. A method of instantaneous expansion of quantization levels, in which two codewords in the codebook are reserved to perform a fold-over in quantization, is implemented for error-free coding of data with incomplete knowledge of the probability density function. Results for simple DPCM with folding and for an adaptive transform coding technique followed by a DPCM technique are compared using ERTS-1 data.
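
    A bare-bones DPCM loop may clarify the baseline being extended; the fold-over mechanism itself (the two reserved codewords) is omitted from this sketch.

        import numpy as np

        def dpcm_encode(samples, q_step=4, levels=16):
            # Quantize each prediction error against the previous reconstruction.
            recon, codes = 0.0, []
            for s in samples:
                err = s - recon
                code = int(np.clip(round(err / q_step), -levels // 2, levels // 2 - 1))
                codes.append(code)
                recon += code * q_step      # mirror the decoder's reconstruction
            return codes

        def dpcm_decode(codes, q_step=4):
            recon, out = 0.0, []
            for code in codes:
                recon += code * q_step
                out.append(recon)
            return out

        x = [0, 3, 9, 20, 18, 12]
        print(dpcm_decode(dpcm_encode(x)))   # -> [0.0, 4.0, 8.0, 20.0, 20.0, 12.0]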

  8. Multiplexed Spike Coding and Adaptation in the Thalamus

    Directory of Open Access Journals (Sweden)

    Rebecca A. Mease

    2017-05-01

    Full Text Available High-frequency “burst” clusters of spikes are a generic output pattern of many neurons. While bursting is a ubiquitous computational feature of different nervous systems across animal species, the encoding of synaptic inputs by bursts is not well understood. We find that bursting neurons in the rodent thalamus employ “multiplexing” to differentially encode low- and high-frequency stimulus features associated with either T-type calcium “low-threshold” or fast sodium spiking events, respectively, and these events adapt differently. Thus, thalamic bursts encode disparate information in three channels: (1) burst size, (2) burst onset time, and (3) precise spike timing within bursts. Strikingly, this latter “intraburst” encoding channel shows millisecond-level feature selectivity and adapts across statistical contexts to maintain stable information encoded per spike. Consequently, calcium events both encode low-frequency stimuli and, in parallel, gate a transient window for high-frequency, adaptive stimulus encoding by sodium spike timing, allowing bursts to efficiently convey fine-scale temporal information.

  9. Reconstruction based finger-knuckle-print verification with score level adaptive binary fusion.

    Science.gov (United States)

    Gao, Guangwei; Zhang, Lei; Yang, Jian; Zhang, Lin; Zhang, David

    2013-12-01

    Recently, a new biometric identifier, namely the finger knuckle print (FKP), has been proposed for personal authentication with very interesting results. One of the advantages of FKP verification lies in its user friendliness in data collection. However, the user flexibility in positioning fingers also leads to a certain degree of pose variation in the collected query FKP images. The widely used Gabor filtering based competitive coding scheme is sensitive to such variations, resulting in many false rejections. We propose to alleviate this problem by reconstructing the query sample with a dictionary learned from the template samples in the gallery set. The reconstructed FKP image can greatly reduce the enlarged matching distance caused by finger pose variations; however, both the intra-class and inter-class distances will be reduced. We then propose a score-level adaptive binary fusion rule to adaptively fuse the matching distances before and after reconstruction, aiming to reduce the false rejections without greatly increasing the false acceptances. Experimental results on the benchmark PolyU FKP database show that the proposed method significantly improves the FKP verification accuracy.

  10. ComboCoding: Combined intra-/inter-flow network coding for TCP over disruptive MANETs

    Directory of Open Access Journals (Sweden)

    Chien-Chia Chen

    2011-07-01

    Full Text Available TCP over wireless networks is challenging due to random losses and ACK interference. Although network coding schemes have been proposed to improve TCP robustness against extreme random losses, a critical problem of DATA–ACK interference still remains. To address this issue, we use inter-flow coding between DATA and ACK to reduce the number of transmissions among nodes. In addition, we also utilize a “pipeline” random linear coding scheme with adaptive redundancy to overcome high packet loss over unreliable links. The resulting coding scheme, ComboCoding, combines intra-flow and inter-flow coding to provide robust TCP transmission in disruptive wireless networks. The main contributions of our scheme are twofold: the efficient combination of random linear coding and XOR coding on bi-directional streams (DATA and ACK), and the novel redundancy control scheme that adapts to time-varying and space-varying link loss. The adaptive ComboCoding was tested on a variable-hop string topology with unstable links and on a multipath MANET with dynamic topology. Simulation results show that TCP with ComboCoding delivers higher throughput than with other coding options in high loss and mobile scenarios, while introducing minimal overhead in normal operation.
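
    The inter-flow XOR step can be sketched directly: a relay transmits DATA XOR ACK once, and each endpoint, already holding its own packet, recovers the other. The packet contents here are of course made up.

        def xor_packets(a, b):
            # XOR two packets, zero-padding the shorter one.
            n = max(len(a), len(b))
            return bytes(x ^ y for x, y in zip(a.ljust(n, b"\x00"), b.ljust(n, b"\x00")))

        data, ack = b"payload-1234", b"ack-42"
        coded = xor_packets(data, ack)                           # one relay transmission
        assert xor_packets(coded, data).rstrip(b"\x00") == ack   # ACK side decodes
        assert xor_packets(coded, ack) == data                   # DATA side decodes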

  11. Hyper-heuristics with low level parameter adaptation.

    Science.gov (United States)

    Ren, Zhilei; Jiang, He; Xuan, Jifeng; Luo, Zhongxuan

    2012-01-01

    Recent years have witnessed the great success of hyper-heuristics applying to numerous real-world applications. Hyper-heuristics raise the generality of search methodologies by manipulating a set of low level heuristics (LLHs) to solve problems, and aim to automate the algorithm design process. However, those LLHs are usually parameterized, which may contradict the domain independent motivation of hyper-heuristics. In this paper, we show how to automatically maintain low level parameters (LLPs) using a hyper-heuristic with LLP adaptation (AD-HH), and exemplify the feasibility of AD-HH by adaptively maintaining the LLPs for two hyper-heuristic models. Furthermore, aiming at tackling the search space expansion due to the LLP adaptation, we apply a heuristic space reduction (SAR) mechanism to improve the AD-HH framework. The integration of the LLP adaptation and the SAR mechanism is able to explore the heuristic space more effectively and efficiently. To evaluate the performance of the proposed algorithms, we choose the p-median problem as a case study. The empirical results show that with the adaptation of the LLPs and the SAR mechanism, the proposed algorithms are able to achieve competitive results over the three heterogeneous classes of benchmark instances.

  12. Coupling Legacy and Contemporary Deterministic Codes to Goldsim for Probabilistic Assessments of Potential Low-Level Waste Repository Sites

    Science.gov (United States)

    Mattie, P. D.; Knowlton, R. G.; Arnold, B. W.; Tien, N.; Kuo, M.

    2006-12-01

    Sandia National Laboratories (Sandia), a U.S. Department of Energy National Laboratory, has over 30 years experience in radioactive waste disposal and is providing assistance internationally in a number of areas relevant to the safety assessment of radioactive waste disposal systems. International technology transfer efforts are often hampered by small budgets, time schedule constraints, and a lack of experienced personnel in countries with small radioactive waste disposal programs. In an effort to surmount these difficulties, Sandia has developed a system that utilizes a combination of commercially available codes and existing legacy codes for probabilistic safety assessment modeling that facilitates the technology transfer and maximizes limited available funding. Numerous codes developed and endorsed by the United States Nuclear Regulatory Commission and codes developed and maintained by United States Department of Energy are generally available to foreign countries after addressing import/export control and copyright requirements. From a programmatic view, it is easier to utilize existing codes than to develop new codes. From an economic perspective, it is not possible for most countries with small radioactive waste disposal programs to maintain complex software, which meets the rigors of both domestic regulatory requirements and international peer review. Therefore, re-vitalization of deterministic legacy codes, as well as an adaptation of contemporary deterministic codes, provides a creditable and solid computational platform for constructing probabilistic safety assessment models. External model linkage capabilities in Goldsim and the techniques applied to facilitate this process will be presented using example applications, including Breach, Leach, and Transport-Multiple Species (BLT-MS), a U.S. NRC sponsored code simulating release and transport of contaminants from a subsurface low-level waste disposal facility used in a cooperative technology transfer

  13. Time course of dynamic range adaptation in the auditory nerve

    Science.gov (United States)

    Wang, Grace I.; Dean, Isabel; Delgutte, Bertrand

    2012-01-01

    Auditory adaptation to sound-level statistics occurs as early as in the auditory nerve (AN), the first stage of neural auditory processing. In addition to firing rate adaptation characterized by a rate decrement dependent on previous spike activity, AN fibers show dynamic range adaptation, which is characterized by a shift of the rate-level function or dynamic range toward the most frequently occurring levels in a dynamic stimulus, thereby improving the precision of coding of the most common sound levels (Wen B, Wang GI, Dean I, Delgutte B. J Neurosci 29: 13797–13808, 2009). We investigated the time course of dynamic range adaptation by recording from AN fibers with a stimulus in which the sound levels periodically switch from one nonuniform level distribution to another (Dean I, Robinson BL, Harper NS, McAlpine D. J Neurosci 28: 6430–6438, 2008). Dynamic range adaptation occurred rapidly, but its exact time course was difficult to determine directly from the data because of the concomitant firing rate adaptation. To characterize the time course of dynamic range adaptation without the confound of firing rate adaptation, we developed a phenomenological “dual adaptation” model that accounts for both forms of AN adaptation. When fitted to the data, the model predicts that dynamic range adaptation occurs as rapidly as firing rate adaptation, over 100–400 ms, and the time constants of the two forms of adaptation are correlated. These findings suggest that adaptive processing in the auditory periphery in response to changes in mean sound level occurs rapidly enough to have significant impact on the coding of natural sounds. PMID:22457465
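
    A toy version of the dynamic-range shift (not the paper's fitted dual-adaptation model, which also includes spike-driven firing rate adaptation) is a rate-level sigmoid whose midpoint tracks the running mean sound level; all numbers below are illustrative.

        import numpy as np

        def rate_level(level_db, midpoint, r_max=250.0, slope=0.25):
            # Sigmoidal rate-level function of a model AN fiber.
            return r_max / (1.0 + np.exp(-slope * (level_db - midpoint)))

        def track_midpoint(levels_db, start=40.0, tau=200.0):
            # Midpoint drifts toward the mean of recent levels (time constant tau),
            # shifting the dynamic range toward the most common sound levels.
            mid, trace = start, []
            for level in levels_db:
                mid += (level - mid) / tau
                trace.append(mid)
            return np.array(trace)

        rng = np.random.default_rng(2)
        levels = np.concatenate([rng.normal(30, 3, 1000), rng.normal(60, 3, 1000)])
        mids = track_midpoint(levels)
        print(round(mids[999], 1), round(mids[-1], 1))   # ~30 after block 1, ~60 after block 2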

  14. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S

    2018-02-21

    Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from

  15. Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths

    Science.gov (United States)

    Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.

    2018-04-01

    We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.

  16. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the high-efficiency video coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity in the mode decision of SAO. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meet the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.

  17. The management-retrieval code of nuclear level density sub-library (CENPL-NLD)

    International Nuclear Information System (INIS)

    Ge Zhigang; Su Zongdi; Huang Zhongfu; Dong Liaoyuan

    1995-01-01

    The management-retrieval code of the Nuclear Level Density (NLD) sub-library is presented. It provides two retrieval modes: single nucleus (SN) and neutron reaction (NR); the latter offers four retrieval types. The code can not only retrieve level density parameters and the data related to the level density, but can also calculate the relevant data using different level density parameters and compare the calculated results with related data, in order to help the user select level density parameters

  18. Adaptation of OCA-P, a probabilistic fracture-mechanics code, to a personal computer

    International Nuclear Information System (INIS)

    Ball, D.G.; Cheverton, R.D.

    1985-01-01

    The OCA-P probabilistic fracture-mechanics code can now be executed on a personal computer with 512 kilobytes of memory, a math coprocessor, and a hard disk. A user's guide for the particular adaptation has been prepared, and additional importance sampling techniques for OCA-P have been developed that allow the sampling of only the tails of selected distributions. Features have also been added to OCA-P that permit RTNDT to be used as an 'independent' variable in the calculation of P

  19. System Level Evaluation of Innovative Coded MIMO-OFDM Systems for Broadcasting Digital TV

    Directory of Open Access Journals (Sweden)

    Y. Nasser

    2008-01-01

    Full Text Available Single-frequency networks (SFNs) for broadcasting digital TV are a topic of theoretical and practical interest for future broadcasting systems. Although progress has been made in their characterization, there are still considerable gaps in their deployment with MIMO techniques. The contribution of this paper is multifold. First, we investigate the possibility of applying a space-time (ST) encoder between the antennas of two sites in an SFN. Then, we introduce a 3D space-time-space block code for future terrestrial digital TV in SFN architecture. The proposed 3D code is based on a double-layer structure designed for intercell and intracell space-time-coded transmissions. Eventually, we propose to adapt a technique called effective exponential signal-to-noise ratio (SNR) mapping (EESM) to predict the bit error rate (BER) at the output of the channel decoder in MIMO systems. The EESM technique as well as the simulation results will be used to doubly check the efficiency of our 3D code. This efficiency is obtained for equal and unequal received powers, whatever the location of the receiver, by adequately combining ST codes. The 3D code is then a very promising candidate for SFN architecture with MIMO transmission.
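
    The EESM itself is compact enough to state directly; a hedged sketch of the standard formula follows (β is a calibration constant fitted per modulation and coding scheme; the SNR values are made up).

        import numpy as np

        def eesm_db(subcarrier_snr_db, beta):
            # Effective SNR = -beta * ln( mean( exp(-SNR_i / beta) ) ),
            # computed in linear units and reported back in dB.
            snr = 10.0 ** (np.asarray(subcarrier_snr_db) / 10.0)
            effective = -beta * np.log(np.mean(np.exp(-snr / beta)))
            return 10.0 * np.log10(effective)

        # A frequency-selective channel is dominated by its weakest tones:
        print(round(eesm_db([3, 10, 15, 20], beta=4.0), 2))   # ~8.5 dB, below the 12 dB mean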

  20. Image sensor system with bio-inspired efficient coding and adaptation.

    Science.gov (United States)

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.

  1. Climate Adaptation and Sea Level Rise

    Science.gov (United States)

    EPA supports the development and maintenance of water utility infrastructure across the country. Included in this effort is helping the nation’s water utilities anticipate, plan for, and adapt to risks from flooding, sea level rise, and storm surge.

  2. Adapting to rates versus amounts of climate change: a case of adaptation to sea-level rise

    Science.gov (United States)

    Shayegh, Soheil; Moreno-Cruz, Juan; Caldeira, Ken

    2016-10-01

    Adaptation is the process of adjusting to climate change in order to moderate harm or exploit beneficial opportunities associated with it. Most adaptation strategies are designed to adjust to a new climate state. However, despite our best efforts to curtail greenhouse gas emissions, climate is likely to continue changing far into the future. Here, we show how considering rates of change affects the projected optimal adaptation strategy. We ground our discussion with an example of optimal investment in the face of continued sea-level rise, presenting a quantitative model that illustrates the interplay among physical and economic factors governing coastal development decisions such as rate of sea-level rise, land slope, discount rate, and depreciation rate. This model shows that the determination of optimal investment strategies depends on taking into account future rates of sea-level rise, as well as social and political constraints. This general approach also applies to the development of improved strategies to adapt to ongoing trends in temperature, precipitation, and other climate variables. Adaptation to some amount of change instead of adaptation to ongoing rates of change may produce inaccurate estimates of damages to the social systems and their ability to respond to external pressures.

  3. Using machine-coded event data for the micro-level study of political violence

    Directory of Open Access Journals (Sweden)

    Jesse Hammond

    2014-07-01

    Full Text Available Machine-coded datasets likely represent the future of event data analysis. We assess the use of one of these datasets—Global Database of Events, Language and Tone (GDELT—for the micro-level study of political violence by comparing it to two hand-coded conflict event datasets. Our findings indicate that GDELT should be used with caution for geo-spatial analyses at the subnational level: its overall correlation with hand-coded data is mediocre, and at the local level major issues of geographic bias exist in how events are reported. Overall, our findings suggest that due to these issues, researchers studying local conflict processes may want to wait for a more reliable geocoding method before relying too heavily on this set of machine-coded data.

  4. Adaptive Coding and Modulation Experiment With NASA's Space Communication and Navigation Testbed

    Science.gov (United States)

    Downey, Joseph; Mortensen, Dale; Evans, Michael; Briones, Janette; Tollis, Nicholas

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed is an advanced integrated communication payload on the International Space Station. This paper presents results from an adaptive coding and modulation (ACM) experiment over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options, and uses the Space Data Link Protocol (Consultative Committee for Space Data Systems (CCSDS) standard) for the uplink and downlink data framing. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Several approaches for improving the ACM system are presented, including predictive and learning techniques to accommodate signal fades. Performance of the system is evaluated as a function of end-to-end system latency (round-trip delay), and compared to the capacity of the link. Finally, improvements over standard NASA waveforms are presented.

  5. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1997-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard, the conditioning may be specified by a template. For better compression, the more general ... Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG-2, an emerging international standard for lossless/lossy compression of bi-level images.

  6. Adaptive response after low level irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Pelevina, I I; Afanasjev, G G; JaGotlib, V; Tereschenko, D G; Tronov, V A; Serebrjany, A M [Russian Academy of Sciences, Moscow (Russian Federation). Institute of Chemical Physics

    1996-02-01

    The experiments conducted on cultured HeLa (tissue culture) cells revealed that there is a limit of dose above which an adaptive response was not observed and a limit of dose below which this response was not induced. The exposure of cells in territories with an elevated radiation background leads to genome instability, which results in enhanced radiosensitivity. Investigations on the blood lymphocytes of people living in contaminated regions revealed that the adaptive response was more significant in children, whereas in adults there was only a slight increase. Acute irradiation serves as a tool revealing the changes that took place in DNA during chronic low-level irradiation after the Chernobyl disaster. (author).

  7. Uplink capacity of multi-class IEEE 802.16j relay networks with adaptive modulation and coding

    DEFF Research Database (Denmark)

    Wang, Hua; Xiong, C; Iversen, Villy Bæk

    2009-01-01

    The emerging IEEE 802.16j mobile multi-hop relay (MMR) network is currently being developed to increase user throughput and extend service coverage as an enhancement of the existing 802.16e standard. In 802.16j, the intermediate relay stations (RSs) help the base station (BS) communicate with those mobile stations (MSs) that are either too far away from the BS or placed in an area where direct communication with the BS experiences an unsatisfactory level of service. In this paper, we investigate the uplink Erlang capacity of a two-hop 802.16j relay system supporting both voice and data traffic with an adaptive modulation and coding (AMC) scheme applied in the physical layer. We first develop analytical models to calculate the blocking probability in the access zone and the outage probability in the relay zone, respectively. Then a joint algorithm is proposed to determine the bandwidth distribution

  8. Adaptation in Coding by Large Populations of Neurons in the Retina

    Science.gov (United States)

    Ioffe, Mark L.

    A comprehensive theory of neural computation requires an understanding of the statistical properties of the neural population code. The focus of this work is the experimental study and theoretical analysis of the statistical properties of neural activity in the tiger salamander retina. This is an accessible yet complex system, for which we control the visual input and record from a substantial portion--greater than a half--of the ganglion cell population generating the spiking output. Our experiments probe adaptation of the retina to visual statistics: a central feature of sensory systems which have to adjust their limited dynamic range to a far larger space of possible inputs. In Chapter 1 we place our work in context with a brief overview of the relevant background. In Chapter 2 we describe the experimental methodology of recording from 100+ ganglion cells in the tiger salamander retina. In Chapter 3 we first present the measurements of adaptation of individual cells to changes in stimulation statistics and then investigate whether pairwise correlations in fluctuations of ganglion cell activity change across different stimulation conditions. We then transition to a study of the population-level probability distribution of the retinal response captured with maximum-entropy models. Convergence of the model inference is presented in Chapter 4. In Chapter 5 we first test the empirical presence of a phase transition in such models fitting the retinal response to different experimental conditions, and then proceed to develop other characterizations which are sensitive to complexity in the interaction matrix. This includes an analysis of the dynamics of sampling at finite temperature, which demonstrates a range of subtle attractor-like properties in the energy landscape. These are largely conserved when ambient illumination is varied 1000-fold, a result not necessarily apparent from the measured low-order statistics of the distribution. Our results form a consistent

  9. Unequal Error Protected JPEG 2000 Broadcast Scheme with Progressive Fountain Codes

    OpenAIRE

    Chen, Zhao; Xu, Mai; Yin, Luiguo; Lu, Jianhua

    2012-01-01

    This paper proposes a novel scheme, based on progressive fountain codes, for broadcasting JPEG 2000 multimedia. In such a broadcast scheme, progressive resolution levels of images/video have been unequally protected when transmitted using the proposed progressive fountain codes. With progressive fountain codes applied in the broadcast scheme, the resolutions of images (JPEG 2000) or videos (MJPEG 2000) received by different users can be automatically adaptive to their channel qualities, i.e. ...

  10. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    International Nuclear Information System (INIS)

    Lee, Jun; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-01-01

    We propose a signal processing technique using LDPC (low-density parity-check) codes instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed around a precoder admitting level detection at the receiver end and a modified likelihood function for LDPC decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can contribute to improving the performance of conventional turbo-like systems

  11. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jun E-mail: leejun28@sait.samsung.co.kr; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-05-01

    We propose a signal processing technique using LDPC (low-density parity-check) codes instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed around a precoder admitting level detection at the receiver end and a modified likelihood function for LDPC decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can contribute to improving the performance of conventional turbo-like systems.

  12. An approach enabling adaptive FEC for OFDM in fiber-VLLC system

    Science.gov (United States)

    Wei, Yiran; He, Jing; Deng, Rui; Shi, Jin; Chen, Shenghai; Chen, Lin

    2017-12-01

    In this paper, we propose an orthogonal circulant matrix transform (OCT)-based adaptive frame-level forward error correction (FEC) scheme for a fiber-visible laser light communication (VLLC) system and demonstrate it experimentally with Reed-Solomon (RS) codes. In this method, no extra bits are spent on adaptation messages apart from the training sequence (TS), which is simultaneously used for synchronization and channel estimation. Therefore, RS coding can be performed adaptively frame by frame via feedback of the last received codeword error rate (CER), estimated from the TSs of the previous few OFDM frames. In addition, the experimental results show that over 20 km of standard single-mode fiber (SSMF) and 8 m of visible light transmission, the costs of RS codewords are at most 14.12% lower than those of conventional adaptive subcarrier-RS-coded 16-QAM OFDM at a bit error rate (BER) of 10^-5.
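
    The frame-by-frame adaptation loop reduces to a mapping from the fed-back CER to a code strength; a minimal sketch with made-up thresholds:

        def parity_for_cer(cer, schedule=((1e-4, 4), (1e-3, 8), (1e-2, 16))):
            # Choose RS parity symbols per codeword for the next frame from the
            # codeword error rate estimated on the previous frames' TSs.
            for threshold, parity in schedule:
                if cer <= threshold:
                    return parity
            return 32            # worst channels get the strongest protection

        print(parity_for_cer(5e-4))   # -> 8 parity symbols per RS codeword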

  13. Improvement of level-1 PSA computer code package

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.

    1997-07-01

    This is the fifth (final) year of phase I of the Government-sponsored Mid- and Long-term Nuclear Power Technology Development Project. The scope of this subproject, titled `The improvement of level-1 PSA Computer Codes`, is divided into two main activities: (1) improvement of level-1 PSA methodology, (2) development of application methodology of PSA techniques to operations and maintenance of nuclear power plants. The level-1 PSA code KIRAP is converted to the PC-Windows environment. For improved efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) is developed considering dependency among source data. A computer program which handles dependency among data sources is also developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods are reviewed and a CCF database is established. Impact vectors can be estimated from this CCF database. A computer code, called MPRIDP, which handles the CCF database, is also developed. A CCF analysis reflecting plant-specific defensive strategy against CCF events is also performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique is applied to review the feasibility study of on-line maintenance and to the prioritization of in-service testing (IST) of motor-operated valves (MOVs). Finally, root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies are adopted and applied to improve the reliability of emergency diesel generators (EDGs) of nuclear power plants. To support RCA and RCM analyses, two software programs, EPIS and RAM Pro, are developed. (author). 129 refs., 20 tabs., 60 figs.

  14. Improvement of level-1 PSA computer code package

    International Nuclear Information System (INIS)

    Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.

    1997-07-01

    This is the fifth (final) year of phase I of the Government-sponsored Mid- and Long-term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The improvement of level-1 PSA Computer Codes', is divided into two main activities: 1) improvement of level-1 PSA methodology, 2) development of application methodology of PSA techniques to operations and maintenance of nuclear power plants. The level-1 PSA code KIRAP is converted to the PC-Windows environment. For improved efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) is developed considering dependency among source data. A computer program which handles dependency among data sources is also developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods are reviewed and a CCF database is established. Impact vectors can be estimated from this CCF database. A computer code, called MPRIDP, which handles the CCF database, is also developed. A CCF analysis reflecting plant-specific defensive strategy against CCF events is also performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique is applied to review the feasibility study of on-line maintenance and to the prioritization of in-service testing (IST) of motor-operated valves (MOVs). Finally, root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies are adopted and applied to improve the reliability of emergency diesel generators (EDGs) of nuclear power plants. To support RCA and RCM analyses, two software programs, EPIS and RAM Pro, are developed. (author). 129 refs., 20 tabs., 60 figs

  15. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...
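
    As a much-simplified illustration of the "conditioning on past observations" ingredient (without the hidden states of a PHMM), the sketch below computes the ideal adaptive code length of a binary sequence under a Krichevsky-Trofimov estimator with the previous symbol as context; the prior, context order, and data are assumptions for the example, not the paper's model.

```python
# Sketch: adaptive lossless-coding cost with an order-1 binary context model.
import math

def adaptive_code_length(bits):
    counts = {0: [0.5, 0.5], 1: [0.5, 0.5]}   # KT prior (1/2, 1/2) per context
    length, prev = 0.0, 0                     # assume an initial context of 0
    for b in bits:
        c = counts[prev]
        p = c[b] / (c[0] + c[1])              # sequential probability of b
        length += -math.log2(p)               # ideal arithmetic-coding cost
        c[b] += 1                             # adapt the model
        prev = b                              # condition on the past symbol
    return length

data = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
print(f"{adaptive_code_length(data):.2f} bits for {len(data)} symbols")
```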

  16. Motion-adaptive intraframe transform coding of video signals

    NARCIS (Netherlands)

    With, de P.H.N.

    1989-01-01

    Spatial transform coding has been widely applied for image compression because of its high coding efficiency. However, in many intraframe systems, in which every TV frame is independently processed, coding of moving objects in the case of interlaced input signals is not addressed. In this paper, we

  17. Confidentiality of 2D Code using Infrared with Cell-level Error Correction

    Directory of Open Access Journals (Sweden)

    Nobuyuki Teraura

    2013-03-01

    Full Text Available Optical information media printed on paper use printing materials that absorb visible light. An ordinary 2D code may be encrypted, but it can still be copied. Hence, we envisage an information medium that cannot be copied and thereby offers high security. At the surface, the normal 2D code is printed. The inner layers consist of 2D codes printed using a variety of materials, which absorb certain distinct wavelengths, to form a multilayered 2D code. Information can be distributed among the 2D codes forming the inner layers of the multiplex. Additionally, error correction at the cell level can be introduced.

  18. Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes

    Science.gov (United States)

    Su, Hualing; He, Yucheng; Zhou, Lin

    2017-08-01

    In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperative system of rate-compatible low-density parity-check (RC-LDPC) codes combined with a multi-relay selection protocol is proposed. In traditional relay selection protocols, only the channel state information (CSI) of the source-relay and relay-destination links is considered. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. Additionally, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.

  19. Theoretical atomic physics code development I: CATS: Cowan Atomic Structure Code

    International Nuclear Information System (INIS)

    Abdallah, J. Jr.; Clark, R.E.H.; Cowan, R.D.

    1988-12-01

    An adaptation of R.D. Cowan's Atomic Structure program, CATS, has been developed as part of the Theoretical Atomic Physics (TAPS) code development effort at Los Alamos. CATS has been designed to be easy to run and to produce data files that can interface with other programs easily. The CATS-produced data files currently include wave functions, energy levels, oscillator strengths, plane-wave-Born electron-ion collision strengths, photoionization cross sections, and a variety of other quantities. This paper describes the use of CATS. 10 refs

  20. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    Directory of Open Access Journals (Sweden)

    Laureline Logiaco

    2015-08-01

    Full Text Available The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.

  1. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    Science.gov (United States)

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-08-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.
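
    To make the spike-count versus spike-timing contrast concrete, here is a toy nearest-mean decoder applied to invented spike trains; it is not the paper's decoding algorithm. With a single bin the decoder sees only the spike count and the two response types tie, while four bins expose the temporal pattern.

```python
def features(spike_times, window=0.4, bins=1):
    """Bin spike times into `bins` equal bins over [0, window)."""
    hist = [0] * bins
    for t in spike_times:
        if 0 <= t < window:
            hist[int(t / window * bins)] += 1
    return hist

def nearest_mean(train, x):
    """Label of the class whose mean feature vector is closest to x."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    means = {lbl: [sum(col) / len(vecs) for col in zip(*vecs)]
             for lbl, vecs in train.items()}
    return min(means, key=lambda lbl: dist(means[lbl], x))

# Two invented response types with the same spike count but different timing.
trials = {
    "correct": [[0.03, 0.12, 0.31], [0.05, 0.14, 0.29]],
    "error":   [[0.21, 0.26, 0.33], [0.19, 0.27, 0.37]],
}
for bins in (1, 4):   # 1 bin = pure spike count (classes tie); 4 = timing-aware
    train = {k: [features(s, bins=bins) for s in v] for k, v in trials.items()}
    probe = features([0.04, 0.13, 0.32], bins=bins)
    print(f"{bins} bin(s): decoded as {nearest_mean(train, probe)}")
```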

  2. Adaptation improves face trustworthiness discrimination

    Directory of Open Access Journals (Sweden)

    Bruce D Keefe

    2013-06-01

    Full Text Available Adaptation to facial characteristics, such as gender and viewpoint, has been shown to both bias our perception of faces and improve facial discrimination. In this study, we examined whether adapting to two levels of face trustworthiness improved sensitivity around the adapted level. Facial trustworthiness was manipulated by morphing between trustworthy and untrustworthy prototypes, each generated by morphing eight trustworthy and eight untrustworthy faces respectively. In the first experiment, just-noticeable differences (JNDs) were calculated for an untrustworthy face after participants adapted to an untrustworthy face, a trustworthy face, or did not adapt. In the second experiment, the three conditions were identical, except that JNDs were calculated for a trustworthy face. In the third experiment we examined whether adapting to an untrustworthy male face improved discrimination to an untrustworthy female face. In all experiments, participants completed a two-interval forced-choice adaptive staircase procedure, in which they judged which face was more untrustworthy. JNDs were derived from a psychometric function fitted to the data. Adaptation improved sensitivity to faces conveying the same level of trustworthiness when compared to no adaptation. When adapting to and discriminating around a different level of face trustworthiness there was no improvement in sensitivity and JNDs were equivalent to those in the no adaptation condition. The improvement in sensitivity was found to occur even when adapting to a face with different gender and identity. These results suggest that adaptation to facial trustworthiness can selectively enhance mechanisms underlying the coding of facial trustworthiness to improve perceptual sensitivity. These findings have implications for the role of our visual experience in the decisions we make about the trustworthiness of other individuals.
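
    For readers unfamiliar with the procedure, the sketch below simulates a generic 2-down/1-up two-interval forced-choice staircase (converging near 70.7% correct) against an invented observer; the psychometric function and parameters are assumptions, not the study's.

```python
import math, random

def observer_correct(delta, jnd=0.12):
    """Invented observer: probability of a correct 2IFC response at offset delta."""
    p = 0.5 + 0.5 * (1.0 - math.exp(-delta / jnd))
    return random.random() < p

def staircase(start=0.5, step=0.05, n_reversals=8):
    delta, run, last_dir, reversals = start, 0, None, []
    while len(reversals) < n_reversals:
        if observer_correct(delta):
            run += 1
            if run < 2:
                continue                 # need two correct before stepping down
            run, move = 0, -step         # 2-down: make the task harder
        else:
            run, move = 0, +step         # 1-up: make the task easier
        direction = "down" if move < 0 else "up"
        if last_dir is not None and direction != last_dir:
            reversals.append(delta)      # record a reversal point
        last_dir = direction
        delta = max(step, delta + move)
    last = reversals[-6:]                # JND estimate: mean of late reversals
    return sum(last) / len(last)

random.seed(1)
print(f"estimated JND ~ {staircase():.3f}")
```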

  3. GRABGAM: A Gamma Analysis Code for Ultra-Low-Level HPGe SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    Winn, W.G.

    1999-07-28

    The GRABGAM code has been developed for analysis of ultra-low-level HPGe gamma spectra. The code employs three different size filters for the peak search, where the largest filter provides best sensitivity for identifying low-level peaks and the smallest filter has the best resolution for distinguishing peaks within a multiplet. GRABGAM basically generates an integral probability F-function for each singlet or multiplet peak analysis, bypassing the usual peak fitting analysis for a differential f-function probability model. Because F is defined by the peak data, statistical limitations for peak fitting are avoided; however, the F-function does provide generic values for peak centroid, full width at half maximum, and tail that are consistent with a Gaussian formalism. GRABGAM has successfully analyzed over 10,000 customer samples, and it interfaces with a variety of supplementary codes for deriving detector efficiencies, backgrounds, and quality checks.

  4. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  5. Rate Adaptive OFDMA Communication Systems

    International Nuclear Information System (INIS)

    Abdelhakim, M.M.M.

    2009-01-01

    Due to the varying nature of wireless channels, adapting the transmission parameters, such as code rate, modulation order and power, in response to the channel variations provides a significant improvement in system performance. In OFDM systems, per-frame adaptation (PFA) can be employed, where the transmission variables are fixed over a given frame and may change from one frame to the other. Subband (tile) loading offers more degrees of adaptation, such that each group of carriers (subband) uses the same transmission parameters and different subbands may use different parameters. Changing the code rate for each tile in the same frame results in transmitting multiple codewords (MCWs) for a single frame. In this thesis, a scheme is proposed for adaptively changing the code rate of coded OFDMA systems via changing the puncturing rate within a single codeword (SCW). In the proposed structure, the data is encoded with the lowest available code rate, then it is divided among the different tiles, where it is punctured adaptively based on some measure of the channel quality for each tile. The proposed scheme is compared against using multiple codewords (MCWs), where the different code rates for the tiles are obtained using separate encoding processes. For a bit-interleaved coded modulation architecture, two novel interleaving methods are proposed, namely the puncturing-dependent interleaver (PDI) and interleaved puncturing (IntP), which provide a larger interleaving depth. In the PDI method, the coded bits with the same rate over different tiles are grouped for interleaving. In the IntP structure, the interleaving is performed prior to puncturing. The performance of the adaptive puncturing technique is investigated under constant bit rate and variable bit rate constraints. Two different adaptive modulation and coding (AMC) selection methods are examined for the variable bit rate adaptive system. The first is a recursive scheme that operates directly on the SNR whereas the second
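
    A minimal sketch of the single-codeword idea: one rate-1/3 mother codeword is split across tiles, and each tile transmits only the bits that its SNR-to-rate mapping keeps. The keep-masks and SNR thresholds below are invented for illustration, not the thesis's design.

```python
# Keep-masks over groups of 6 mother-code bits (rate-1/3 base: 2 info bits
# per 6 coded bits), so keeping k of 6 bits gives rate 2/k.
PATTERNS = {
    "1/3": [1, 1, 1, 1, 1, 1],   # keep all 6 -> rate 1/3
    "1/2": [1, 1, 0, 1, 1, 0],   # keep 4 of 6 -> rate 1/2
    "2/3": [1, 1, 0, 0, 1, 0],   # keep 3 of 6 -> rate 2/3
}

def select_rate(snr_db):
    """Invented SNR thresholds mapping tile quality to a code rate."""
    if snr_db < 5.0:
        return "1/3"
    if snr_db < 12.0:
        return "1/2"
    return "2/3"

def puncture(segment, mask):
    """Transmit only the positions the repeated mask keeps."""
    full = mask * (len(segment) // len(mask))
    return [bit for bit, keep in zip(segment, full) if keep]

codeword = list(range(24))          # stand-in for one mother codeword's bits
tiles = [(codeword[0:12], 3.0), (codeword[12:24], 15.0)]   # (segment, SNR dB)
for segment, snr in tiles:
    rate = select_rate(snr)
    sent = puncture(segment, PATTERNS[rate])
    print(f"tile SNR {snr:4.1f} dB -> rate {rate}, {len(sent)} of 12 bits sent")
```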

  6. Confidence level in the calculations of HCDA consequences using large codes

    International Nuclear Information System (INIS)

    Nguyen, D.H.; Wilburn, N.P.

    1979-01-01

    The probabilistic approach to nuclear reactor safety is playing an increasingly significant role. For the liquid-metal fast breeder reactor (LMFBR) in particular, the ultimate application of this approach could be to determine the probability of achieving the goal of a specific line-of-assurance (LOA). Meanwhile, a more pressing problem is one of quantifying the uncertainty in a calculated consequence of a hypothetical core disruptive accident (HCDA) using large codes. Such uncertainty arises from imperfect modeling of phenomenology and/or from inaccuracy in input data. A method is presented to determine the confidence level in consequences calculated by a large computer code due to the known uncertainties in input variables. A particular application was made to the initial time of pin failure in a transient overpower HCDA calculated by the code MELT-IIIA in order to demonstrate the method. A probability distribution function (pdf) for the time of failure was first constructed, then the confidence level for predicting this failure parameter within a desired range was determined
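
    The approach can be caricatured as uncertainty propagation: sample the uncertain inputs, run the code, and build an empirical distribution of the consequence. The sketch below does this with a trivial stand-in model and invented input distributions; MELT-IIIA itself is of course a large code, not a one-line formula.

```python
import random

def model(power, worth):
    """Placeholder for the real code: a fictitious pin-failure time (s)."""
    return 1.0 + 0.5 * power - 0.3 * worth

random.seed(0)
samples = []
for _ in range(10_000):
    p = random.gauss(1.0, 0.05)        # assumed input uncertainty (power)
    w = random.gauss(1.0, 0.10)        # assumed input uncertainty (worth)
    samples.append(model(p, w))
samples.sort()
lo = samples[int(0.05 * len(samples))]  # empirical 5th percentile
hi = samples[int(0.95 * len(samples))]  # empirical 95th percentile
print(f"90% confidence interval for failure time: [{lo:.3f}, {hi:.3f}] s")
```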

  7. Psacoin level 1A intercomparison probabilistic system assessment code (PSAC) user group

    International Nuclear Information System (INIS)

    Nies, A.; Laurens, J.M.; Galson, D.A.; Webster, S.

    1990-01-01

    This report describes an international code intercomparison exercise conducted by the NEA Probabilistic System Assessment Code (PSAC) User Group. The PSACOIN Level 1A exercise is the third of a series designed to contribute to the verification of probabilistic codes that may be used in assessing the safety of radioactive waste disposal systems or concepts. Level 1A is based on a more realistic system model than that used in the two previous exercises, and involves deep geological disposal concepts with a relatively complex structure of the repository vault. The report compares results and draws conclusions with regard to the use of different modelling approaches and the possible importance to safety of various processes within and around a deep geological repository. In particular, the relative significance of model uncertainty and data variability is discussed

  8. Research on pre-processing of QR Code

    Science.gov (United States)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR codes encode many kinds of information because of their advantages: large storage capacity, high reliability, full-range ultra-high-speed reading, small printing size, and highly efficient representation of Chinese characters. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR codes, this paper researches pre-processing methods for QR (Quick Response) codes and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
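
    For reference, a sketch of Sauvola's local threshold, the method the authors modify: T = m * (1 + k * (s/R - 1)), with local mean m and standard deviation s over a w x w window. The toy image and parameter values are assumptions; the paper's actual modifications are not reproduced here.

```python
import numpy as np

def sauvola_binarize(img, w=15, k=0.2, R=128.0):
    """Binarize with Sauvola's local threshold T = m * (1 + k*(s/R - 1))."""
    img = img.astype(float)
    pad = w // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=np.uint8)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + w, j:j + w]      # local neighborhood
            m, s = win.mean(), win.std()        # local mean and std-dev
            T = m * (1.0 + k * (s / R - 1.0))
            out[i, j] = 255 if img[i, j] > T else 0
    return out

# Toy "dark module on a brightness gradient" image, 32 x 32.
gradient = np.tile(np.linspace(60, 200, 32), (32, 1))
gradient[13:19, 13:19] -= 80                    # dark 6x6 module
binary = sauvola_binarize(gradient)
print("module pixel:", binary[16, 16], "| background pixel:", binary[4, 4])
```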

  9. Final technical position on documentation of computer codes for high-level waste management

    International Nuclear Information System (INIS)

    Silling, S.A.

    1983-06-01

    Guidance is given for the content of documentation of computer codes which are used in support of a license application for high-level waste disposal. The guidelines cover theoretical basis, programming, and instructions for use of the code

  10. Multi-level governance and adaptive capacity in West Africa

    Directory of Open Access Journals (Sweden)

    Maria Brockhaus

    2012-08-01

    Full Text Available In most regions in West Africa, livelihoods depend heavily on forest ecosystem goods and services, often in interplay with agricultural and livestock production systems. Numerous drivers of change are creating a range of fundamental economic, ecological, social and political challenges for the governance of forest commons. Climate change and its impacts on countries’ and regions’ development add a new dimension to an already challenging situation. Governance systems are challenged to set a frame for formulating, financing and implementing adaptation strategies at multiple layers, often in a context of ongoing institutional changes such as decentralisation. A deeper understanding of actors, institutions and networks is needed to overcome barriers to adaptation in socio-ecological systems and enable or enhance adaptive capacity. In this paper, we explore the relationship between governance and adaptive capacity, and characterise and assess the effects of a set of variables and indicators related to two core variables: institutional flexibility, and individual and organisational understandings and perceptions. We present a comparative analysis with multiple methods based on a number of case studies undertaken at different levels in Burkina Faso and Mali. One of the key findings indicates the importance and influence of discourses and narratives, and how they affect adaptive capacity at different levels. Revealing the ideological character of discourses can help to enable adaptive capacity, as it would break the influence of the actors that employ these narratives to pursue their own interests.

  11. Hybrid Strategies for Link Adaptation Exploiting Several Degrees of Freedom in OFDM Based Broadband Wireless Systems

    DEFF Research Database (Denmark)

    Das, Suvra S.; Rahman, Muhammad Imadur; Wang, Yuanye

    2007-01-01

    In orthogonal frequency division multiplexing (OFDM) systems, there are several degrees of freedom in time and frequency domain, such as, sub-band size, forward error control coding (FEC) rate, modulation order, power level, modulation adaptation interval, coding rate adaptation interval and powe...... of the link parameters based on the channel conditions would lead to highly complex systems with high overhead. Hybrid strategies to vary the adaptation rates to tradeoff achievable efficiency and complexity are presented in this work....

  12. Cooperative and Adaptive Network Coding for Gradient Based Routing in Wireless Sensor Networks with Multiple Sinks

    Directory of Open Access Journals (Sweden)

    M. E. Migabo

    2017-01-01

    Full Text Available Despite its low computational cost, the Gradient Based Routing (GBR) broadcast of interest messages in Wireless Sensor Networks (WSNs) causes significant packet duplication and unnecessary packet transmissions. This results in energy wastage, traffic load imbalance, high network traffic, and low throughput. Thanks to the emergence of fast and powerful processors, the development of efficient network coding strategies is expected to enable efficient packet aggregation and reduce packet retransmissions. For multiple-sink WSNs, the challenge consists of efficiently selecting a suitable network coding scheme. This article proposes a Cooperative and Adaptive Network Coding for GBR (CoAdNC-GBR) technique which considers the network density, as dynamically defined by the average number of neighbouring nodes, to efficiently aggregate interest messages. The aggregation is performed by means of linear combinations of random coefficients of a finite Galois field of variable size GF(2^s) at each node, and the decoding is performed by means of Gaussian elimination. The obtained results reveal that, by exploiting the cooperation of the multiple sinks, the CoAdNC-GBR not only improves the transmission reliability of links and lowers the number of transmissions and the propagation latency, but also enhances the energy efficiency of the network when compared to the GBR-network coding (GBR-NC) techniques.
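
    The coding core described (random linear combinations decoded by Gaussian elimination) is sketched below over GF(2) for brevity, rather than the variable-size GF(2^s) of the paper; the packets, seed, and redundancy level are assumptions for the example.

```python
import random

def rand_coeffs(n):
    """Random nonzero GF(2) coefficient vector."""
    c = [random.randint(0, 1) for _ in range(n)]
    return c if any(c) else rand_coeffs(n)

def combine(coeffs, packets):
    """XOR together the packets selected by the coefficient vector."""
    out = 0
    for c, p in zip(coeffs, packets):
        if c:
            out ^= p
    return out

def gf2_solve(rows, n):
    """Row-reduce the augmented system [A | b] over GF(2).
    Returns the n original packets if rank == n, else None."""
    rank = 0
    for col in range(n):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return [rows[i][n] for i in range(n)] if rank == n else None

random.seed(2)
packets = [0b1010, 0b0111, 0b1100]             # toy 4-bit payloads
n = len(packets)
coded = []
for _ in range(2 * n):                          # extra packets cover rank loss
    c = rand_coeffs(n)
    coded.append(c + [combine(c, packets)])     # coefficients + coded payload
decoded = gf2_solve(coded, n)
print([bin(x) for x in decoded] if decoded else "need more coded packets")
```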

  13. WHITE DWARF MERGERS ON ADAPTIVE MESHES. I. METHODOLOGY AND CODE VERIFICATION

    Energy Technology Data Exchange (ETDEWEB)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY, 11794-3800 (United States); Almgren, Ann S.; Zhang, Weiqun [Center for Computational Sciences and Engineering, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-03-10

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  14. Coastal Adaptation Planning for Sea Level Rise and Extremes: A Global Model for Adaptation Decision-making at the Local Level Given Uncertain Climate Projections

    Science.gov (United States)

    Turner, D.

    2014-12-01

    Understanding the potential economic and physical impacts of climate change on coastal resources involves evaluating a number of distinct adaptive responses. This paper presents a tool for such analysis, a spatially-disaggregated optimization model for adaptation to sea level rise (SLR) and storm surge, the Coastal Impact and Adaptation Model (CIAM). This decision-making framework fills a gap between very detailed studies of specific locations and overly aggregate global analyses. While CIAM is global in scope, the optimal adaptation strategy is determined at the local level, evaluating over 12,000 coastal segments as described in the DIVA database (Vafeidis et al. 2006). The decision to pursue a given adaptation measure depends on local socioeconomic factors like income, population, and land values and how they develop over time, relative to the magnitude of potential coastal impacts, based on geophysical attributes like inundation zones and storm surge. For example, the model's decision to protect or retreat considers the costs of constructing and maintaining coastal defenses versus those of relocating people and capital to minimize damages from land inundation and coastal storms. Uncertain storm surge events are modeled with a generalized extreme value distribution calibrated to data on local surge extremes. Adaptation is optimized for the near-term outlook, in an "act then learn then act" framework that is repeated over the model time horizon. This framework allows the adaptation strategy to be flexibly updated, reflecting the process of iterative risk management. CIAM provides new estimates of the economic costs of SLR; moreover, these detailed results can be compactly represented in a set of adaptation and damage functions for use in integrated assessment models. Alongside the optimal result, CIAM evaluates suboptimal cases and finds that global costs could increase by an order of magnitude, illustrating the importance of adaptive capacity and coastal policy.

  15. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    Science.gov (United States)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-08-12

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations and that it permits secret sharing for an arbitrary number of classical participants (no less than the threshold value) with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
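
    Of the ingredients listed, only the classical Lagrange-interpolation part lends itself to a compact sketch; below is a standard (t, n) Shamir-style threshold scheme over a prime field. The modulus and parameters are illustrative, and the quantum/OAM and Huffman-Fibonacci-tree components are not modeled here.

```python
import random

P = 2**31 - 1                          # prime modulus (illustrative choice)

def make_shares(secret, t, n):
    """Shares (x, f(x)) of a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

random.seed(7)
shares = make_shares(123456789, t=3, n=5)
print(reconstruct(shares[:3]))         # any 3 of the 5 shares -> 123456789
```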

  16. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    Science.gov (United States)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting the complexity from the encoder to the decoder, while, at least in theory, compression performance does not degrade. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC is based on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods.

  17. Adaptive distributed video coding with correlation estimation using expectation propagation

    Science.gov (United States)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting the complexity from the encoder to the decoder, while, at least in theory, compression performance does not degrade. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC is based on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods.

  18. Ground-based research on vestibular adaptation to g-level transitions

    NARCIS (Netherlands)

    Groen, Eric L.; Nooij, Suzanne A E; Bos, Jelte E.

    2008-01-01

    At TNO research is ongoing on neuro-vestibular adaptation to altered G-levels. It is well-known that during the first days in weightlessness 50-80% of all astronauts suffer from the Space Adaptation Syndrome (SAS), which involves space motion sickness, spatial disorientation and motion illusions.

  19. Modeling for deformable mirrors and the adaptive optics optimization program

    International Nuclear Information System (INIS)

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-01-01

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient object-oriented C++ language implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded up in an interpreted array-processing computer language
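
    The control step in such a system is commonly a least-squares fit of actuator commands to Hartmann-sensor slope measurements through an influence matrix. The sketch below shows that generic step with an invented random influence matrix and noise level; it is not the Beamlet/NIF algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_slopes, n_actuators = 12, 5

# Influence matrix: slope response to a unit "poke" of each actuator
# (in practice measured by poking actuators one at a time).
A = rng.normal(size=(n_slopes, n_actuators))

true_cmd = rng.normal(size=n_actuators)                 # wavefront to correct
slopes = A @ true_cmd + 0.01 * rng.normal(size=n_slopes)  # noisy sensor data

# Least-squares reconstruction of the actuator commands from the slopes.
cmd, *_ = np.linalg.lstsq(A, slopes, rcond=None)
print("command error:", np.round(cmd - true_cmd, 3))
```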

  20. Level-Set Methodology on Adaptive Octree Grids

    Science.gov (United States)

    Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime

    2017-11-01

    Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.

  1. Cities and Sea Level Rise: A Roadmap for Flood Hazard Adaptation

    Science.gov (United States)

    Horn, Diane; Cousins, Ann

    2016-04-01

    Coastal cities will face a range of increasingly severe challenges as sea level rises, and adaptation to future flood risk will require more than structural defences. Many cities will not be able to rely solely on engineering structures for protection and will need to develop a suite of policy responses to increase their resilience to impacts of rising sea level. The tools to promote flood risk adaptation are already within the capacity of most cities, with an assortment of policy tools available to address other land-use problems which can be refashioned and used to adapt to sea level rise. This study reviews approaches for urban adaptation through detailed analyses of case studies of cities which have developed flood adaptation strategies that combine structural defences with innovative approaches to living with flood risk. The aim of the overall project is to produce a 'roadmap' to guide practitioners through the process of analysing coastal flood risk in urban areas. Methodologies and tools to estimate vulnerability to coastal flooding, damages suffered, and the assessment of flood defences and adaptation measures are complemented with a discussion on the essential impact that local policy has on the treatment of coastal flooding and the constraints and opportunities that result from the specific country or locality characteristics in relation to economic, political, social and environmental priorities, which are likely to dictate the approach to coastal flooding and the actions proposed. Case studies of adaptation strategies used by Rotterdam, Bristol, Ho Chi Minh City and Norfolk, Virginia, are used to draw out a range of good practice elements that promote effective adaptation to sea level rise. These can be grouped into risk reduction, governance issues, and insurance, and can be used to provide examples of how other cities could adopt and implement flood adaptation strategies from a relatively limited starting position. Most cities will neither be able to

  2. The adaptive response of E. coli to low levels of alkylating agent

    International Nuclear Information System (INIS)

    Jeggo, P.; Defais, M.; Samson, L.; Schendel, P.

    1978-01-01

    In an attempt to characterise which gene products may be involved in the repair system induced in E. coli by growth on low levels of alkylating agent (the adaptive response) we have analysed mutants deficient in other known pathways of DNA repair for the ability to adapt to MNNG. Adaptive resistance to the killing effects of MNNG seems to require a functional DNA polymerase I whereas resistance to the mutagenic effects can occur in polymerase I deficient strains; similarly killing adaptation could not be observed in a dam3 mutant, which was nonetheless able to show mutational adaptation. These results suggest that these two parts of the adaptive response must, at least to some extent, be separable. Both adaptive responses can be seen in the absence of uvrD+ uvrE+-dependent mismatch repair, DNA polymerase II activity, or recF-mediated recombination and they are not affected by decreased levels of adenyl cyclase. The data presented support our earlier conclusion that adaptive resistance to the killing and mutagenic effect of MNNG is the result of previously uncharacterised repair pathways. (orig.)

  3. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes.

    Science.gov (United States)

    Chien, Tsair-Wei; Lin, Weir-Sen

    2016-03-02

    The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes a greater burden on patients than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients' true scores following a standard normal distribution. The CAT was compared to two other scenarios of answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access.
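
    A toy sketch of the CAT loop under a plain Rasch model: re-estimate ability after each response and administer the unused item whose difficulty is closest to the estimate (where item information peaks). The item bank, simulated respondent, and anchor trials are invented; the study used the partial credit model in an Excel implementation, which is not reproduced here.

```python
import math, random

def p_correct(theta, b):
    """Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate(responses):
    """ML ability estimate from (difficulty, score) pairs via damped Newton."""
    theta = 0.0
    for _ in range(25):
        g = sum(u - p_correct(theta, b) for b, u in responses)
        h = sum(-p_correct(theta, b) * (1 - p_correct(theta, b))
                for b, _ in responses)
        theta -= max(-1.0, min(1.0, g / h))    # clipped Newton step
    return theta

random.seed(3)
bank = [-2 + 0.2 * i for i in range(21)]       # item difficulties -2 .. +2
true_theta = 0.8                               # simulated respondent
responses = [(-1.0, 1), (1.0, 0)]              # anchor trials keep MLE finite
asked = set()
for _ in range(8):
    theta = estimate(responses)
    item = min((b for b in bank if b not in asked),
               key=lambda b: abs(b - theta))   # most informative unused item
    asked.add(item)
    u = 1 if random.random() < p_correct(true_theta, item) else 0
    responses.append((item, u))
print(f"ability estimate after 8 adaptive items: {estimate(responses):+.2f}")
```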

  4. QR codes: next level of social media.

    Science.gov (United States)

    Gottesman, Wesley; Baum, Neil

    2013-01-01

    The QR (quick response) code system was invented in Japan for the auto industry. Its purpose was to track vehicles during manufacture; it was designed to allow high-speed component scanning. Now the scanning can be easily accomplished via cell phone, making the technology useful and within reach of your patients. There are numerous applications for QR codes in the contemporary medical practice. This article describes QR codes and how they might be applied for marketing and practice management.

  5. An adaptation model for trabecular bone at different mechanical levels

    Directory of Open Access Journals (Sweden)

    Lv Linwei

    2010-07-01

    Full Text Available Abstract Background Bone has the ability to adapt to mechanical usage or other biophysical stimuli in terms of its mass and architecture, indicating that a certain mechanism exists for monitoring mechanical usage and controlling the bone's adaptation behaviors. There are four zones describing different bone adaptation behaviors: the disuse, adaptation, overload, and pathologic overload zones. In different zones, the changes of bone mass, as calculated by the difference between the amount of bone formed and what is resorbed, should be different. Methods An adaptation model for the trabecular bone at different mechanical levels was presented in this study based on a number of experimental observations and numerical algorithms in the literature. In the proposed model, the amount of bone formation and the probability of bone remodeling activation were proposed in accordance with the mechanical levels. Seven numerical simulation cases under different mechanical conditions were analyzed as examples by incorporating the adaptation model presented in this paper with the finite element method. Results The proposed bone adaptation model describes the well-known bone adaptation behaviors in different zones. The bone mass and architecture of the bone tissue within the adaptation zone almost remained unchanged. Although the probability of osteoclastic activation is enhanced in the overload zone, the potential of osteoblasts to form bone compensates for the osteoclastic resorption, eventually strengthening the bones. In the disuse zone, the disuse-mode remodeling removes bone tissue. Conclusions The study seeks to provide better understanding of the relationships between bone morphology and the mechanical as well as biological environments. Furthermore, this paper provides a computational model and methodology for the numerical simulation of changes of bone structural morphology that are caused by changes of mechanical and biological
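
    The four-zone behavior can be caricatured with a mechanostat-style update rule in which the net density change depends on which strain zone the stimulus falls in. The thresholds and rates below are invented for illustration and are not the paper's calibrated model.

```python
def density_rate(strain):
    """Invented four-zone remodeling rule (per day, relative density units)."""
    DISUSE, OVERLOAD, PATHOLOGIC = 200e-6, 2500e-6, 4000e-6   # strain bounds
    k = 1e-3
    if strain < DISUSE:                  # disuse zone: net resorption
        return -k * (DISUSE - strain) / DISUSE
    if strain < OVERLOAD:                # adaptation (lazy) zone: equilibrium
        return 0.0
    if strain < PATHOLOGIC:              # overload zone: net formation
        return k * (strain - OVERLOAD) / OVERLOAD
    return -5.0 * k                      # pathologic overload: damage dominates

rho = 1.0                                # relative bone density
for day in range(100):
    rho += density_rate(3000e-6)         # constant overload stimulus
print(f"relative density after 100 days of overload: {rho:.3f}")
```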

  6. Impact of the Level of State Tax Code Progressivity on Children's Health Outcomes

    Science.gov (United States)

    Granruth, Laura Brierton; Shields, Joseph J.

    2011-01-01

    This research study examines the impact of the level of state tax code progressivity on selected children's health outcomes. Specifically, it examines the degree to which a state's tax code ranking along the progressive-regressive continuum relates to percentage of low birthweight babies, infant and child mortality rates, and percentage of…

  7. National-level progress on adaptation

    NARCIS (Netherlands)

    Lesnikowski, A.; Ford-Robertson, J.; Biesbroek, G.R.; Berrang-Ford, L.; Heymann, S.J.

    2016-01-01

    It is increasingly evident that adaptation will figure prominently in the post-2015 United Nations climate change agreement. As adaptation obligations under the United Nations Framework Convention on Climate Change evolve, more rigorous approaches to measuring adaptation progress among parties will

  8. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks.

    Science.gov (United States)

    Yu, Shidi; Liu, Xiao; Liu, Anfeng; Xiong, Naixue; Cai, Zhiping; Wang, Tian

    2018-05-10

    Due to the Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are getting wider application prospects for sensor nodes that can get new functions after updating program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed in the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, generating more optimized performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption of some transmitting nodes, but radius enlarging is only conducted in areas with an energy surplus, and energy consumption in the hot-spots can be reduced instead due to some nodes transmitting data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn’t affect the network lifetime, to nodes having different distances to the code source, then provides an algorithm to construct a broadcast backbone. In the end, a comprehensive performance analysis and simulation result shows that the proposed

  9. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shidi Yu

    2018-05-01

    Full Text Available Due to the Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are getting wider application prospects for sensor nodes that can get new functions after updating program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed in the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, generating more optimized performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption of some transmitting nodes, but radius enlarging is only conducted in areas with an energy surplus, and energy consumption in the hot-spots can be reduced instead due to some nodes transmitting data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn’t affect the network lifetime, to nodes having different distances to the code source, then provides an algorithm to construct a broadcast backbone. In the end, a comprehensive performance analysis and simulation result shows that

  10. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  11. Content Adaptive Lagrange Multiplier Selection for Rate-Distortion Optimization in 3-D Wavelet-Based Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2018-03-01

    Full Text Available Rate-distortion optimization (RDO) plays an essential role in substantially enhancing the coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all the possible coding modes, it aims to select the one which has the best trade-off between bitrate and compression distortion. Specifically, this trade-off is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, the underlying formulation is not applicable to 3-D wavelet-based SVC, where the explicit values of the quantization step are not available, with no consideration of the content features of the input signal. In this paper, an efficient content adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method, which takes account of the mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm enables more satisfactory video quality with negligible additional computational complexity.
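
    As a reminder of the mechanism being tuned, the sketch below shows plain Lagrangian mode decision: each candidate mode has a (rate, distortion) pair and the coder picks the mode minimizing J = D + lambda * R. The modes and numbers are invented; the paper's contribution is how lambda itself is derived from subband content features, which is not reproduced here.

```python
# Invented candidate modes: mode -> (rate in bits, distortion as MSE).
MODES = {
    "SKIP":  (2,   95.0),
    "INTER": (40,  18.0),
    "INTRA": (120,  6.0),
}

def best_mode(lam):
    """Pick the mode minimizing the Lagrangian cost J = D + lambda * R."""
    return min(MODES, key=lambda m: MODES[m][1] + lam * MODES[m][0])

for lam in (0.05, 0.5, 5.0):   # larger lambda favors cheaper (low-rate) modes
    print(f"lambda = {lam:4.2f} -> {best_mode(lam)}")
```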

  12. Adaptive Combined Source and Channel Decoding with Modulation ...

    African Journals Online (AJOL)

    In this paper, an adaptive system employing combined source and channel decoding with modulation is proposed for slow Rayleigh fading channels. Huffman code is used as the source code and Convolutional code is used for error control. The adaptive scheme employs a family of Convolutional codes of different rates ...

  13. Adaptive Reference Levels in a Level-Crossing Analog-to-Digital Converter

    Directory of Open Access Journals (Sweden)

    Andrew C. Singer

    2008-11-01

    Full Text Available Level-crossing analog-to-digital converters (LC ADCs have been considered in the literature and have been shown to efficiently sample certain classes of signals. One important aspect of their implementation is the placement of reference levels in the converter. The levels need to be appropriately located within the input dynamic range, in order to obtain samples efficiently. In this paper, we study optimization of the performance of such an LC ADC by providing several sequential algorithms that adaptively update the ADC reference levels. The accompanying performance analysis and simulation results show that as the signal length grows, the performance of the sequential algorithms asymptotically approaches that of the best choice that could only have been chosen in hindsight within a family of possible schemes.

  14. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model has been developed and incorporated into a computer code, called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have also been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within the acceptable limits of uncertainty.

  15. A multi-level code for metallurgical effects in metal-forming processes

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, P.A.; Silling, S.A. [Sandia National Labs., Albuquerque, NM (United States). Computational Physics and Mechanics Dept.; Hughes, D.A.; Bammann, D.J.; Chiesa, M.L. [Sandia National Labs., Livermore, CA (United States)

    1997-08-01

    The authors present the final report on a Laboratory-Directed Research and Development (LDRD) project, A Multi-level Code for Metallurgical Effects in Metal-Forming Processes, performed during the fiscal years 1995 and 1996. The project focused on the development of new modeling capabilities for simulating forging and extrusion processes that typically display phenomenology occurring on two different length scales. In support of model fitting and code validation, ring compression and extrusion experiments were performed on 304L stainless steel, a material of interest in DOE nuclear weapons applications.

  16. Adapting Canada's northern infrastructure to climate change: the role of codes and standards

    International Nuclear Information System (INIS)

    Steenhof, P.

    2009-01-01

    This report provides the results of a research project that investigated the use of codes and standards in terms of their potential for fostering adaptation to the future impacts of climate change on built infrastructure in Canada's north. This involved a literature review, undertaking key informant interviews, and a workshop where key stakeholders came together to dialogue on the challenges facing built infrastructure in the north as a result of climate change and the role of codes and standards to help mitigate climate change risk. In this article, attention is given to the topic area of climate data and information requirements related to climate and climate change. This was an important focal area that was identified through this broader research effort since adequate data is essential in allowing codes and standards to meet their ultimate policy objective. A number of priorities have been identified specific to data and information needs in the context of the research topic investigated: There is a need to include northerners in developing the climate and permafrost data required for codes and standards so that these reflect the unique geographical, economic, and cultural realities and variability of the north; Efforts should be undertaken to realign climate design values so that they reflect both present and future risks; There is a need for better information on the rate and extent of permafrost degradation in the north; and, There is a need to improve monitoring of the rate of climate change in the Arctic. (author)

  17. Adaptive Mesh Refinement in CTH

    International Nuclear Information System (INIS)

    Crawford, David

    1999-01-01

    This paper reports progress on implementing a new capability of adaptive mesh refinement into the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.

  18. Cities and Sea Level Rise: A Roadmap for Flood Hazard Adaptation

    Science.gov (United States)

    Horn, D. P.; Cousins, A.

    2015-12-01

    Coastal cities will face a range of increasingly severe challenges as sea level rises, and adaptation to future flood risk will require more than structural defences. Many cities will not be able to rely solely on engineering structures for protection and will need to develop a suite of policy responses to increase their resilience to impacts of rising sea level. Local governments generally maintain day-to-day responsibility and control over the use of the vast majority of property at risk of flooding, and the tools to promote flood risk adaptation are already within the capacity of most cities. Policy tools available to address other land-use problems can be refashioned and used to adapt to sea level rise. This study reviews approaches for urban adaptation through case studies of cities which have developed flood adaptation strategies that combine structural defences with innovative approaches to living with flood risk. The aim of the overall project is to produce a 'roadmap' to guide practitioners through the process of analysing coastal flood risk in urban areas. Technical knowledge of flood risk reduction measures is complemented with a consideration of the essential impact that local policy has on the treatment of coastal flooding and the constraints and opportunities that result from the specific country or locality characteristics in relation to economic, political, social and environmental priorities, which are likely to dictate the approach to coastal flooding and the actions proposed. Detailed analyses of the adaptation strategies used by Rotterdam (Netherlands), Bristol (UK), and Norfolk (Virginia) are used to draw out a range of good practice elements that promote effective adaptation to sea level rise. These can be grouped into risk reduction, governance issues, and insurance, and can be used to provide examples of how other cities could adopt and implement flood adaptation strategies from a relatively limited starting position. Most cities will

  19. Recommendations for codes and standards to be used for design and fabrication of high level waste canister

    International Nuclear Information System (INIS)

    Bermingham, A.J.; Booker, R.J.; Booth, H.R.; Ruehle, W.G.; Shevekov, S.; Silvester, A.G.; Tagart, S.W.; Thomas, J.A.; West, R.G.

    1978-01-01

    This study identifies codes, standards, and regulatory requirements for developing design criteria for high-level waste (HLW) canisters for commercial operation. It has been determined that the canister should be designed as a pressure vessel without provision for any overpressure protection devices. It is recommended that the HLW canister be designed and fabricated to the requirements of the ASME Section III Code, Division 1 rules, for Code Class 3 components. Identification of other applicable industry and regulatory guides and standards is provided in this report. Requirements for the Design Specification are found in the ASME Section III Code. It is recommended that design verification be conducted principally with prototype testing encompassing normal and accident service conditions during all phases of the canister life. The adequacy of existing quality assurance and licensing standards for the canister was investigated. One of the recommendations derived from this study is a requirement that the canister be N-stamped. In addition, acceptance standards for the HLW should be established and the waste qualified to those standards before the canister is sealed. A preliminary investigation of the use of an overpack for the canister has been made, and it is concluded that the use of an overpack, as an integral part of the overall canister design, is undesirable from both a design and an economics standpoint. However, use of shipping cask liners and overpack-type containers at the Federal repository may make canister and HLW management safer and more cost effective. There are several possible concepts for canister closure design. These concepts can be adapted to the canister with or without an overpack. A remote seal weld closure is considered to be one of the most suitable closure methods; however, mechanical seals should also be investigated.

  20. The Calculation of Flooding Level using CFX Code

    International Nuclear Information System (INIS)

    Oh, Seo Bin; Kim, Keon Yeop; Lee, Hyung Ho

    2015-01-01

    The plant design should consider internal flooding caused by postulated pipe ruptures, component failures, actuation of spray systems, and improper system alignment. Flooding can cause failure of safety-related equipment and affect the integrity of the structure. Safety-related equipment should be installed above the flood level for protection against flooding effects, so conservative estimates of the flood level are important when a DBA occurs. The flooding level can be calculated simply by applying Bernoulli's equation. In this study, however, a realistic calculation is performed with the ANSYS CFX code. In a CFX calculation, air-core vortex phenomena and turbulent flow can be simulated, which cannot be treated analytically. The flooding level is evaluated by analytical calculation and by CFX analysis for an assumed condition. The flood level is calculated as 0.71 m analytically and 1.1 m with the CFX simulation. The two results are similar, but the analytical calculation is not conservative: there are many factors reducing the drainage capacity, such as air-core vortex formation, intake of air, and turbulent flow. Therefore, when the flood level is evaluated by analytical calculation, a sufficient safety margin should be considered.
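
    For comparison, a minimal version of the analytical estimate mentioned above can be written down from Bernoulli's equation, treating the floor drain as an orifice; all numbers below are illustrative assumptions, not values from the study:

```python
# Illustrative analytical flood-level estimate via Bernoulli (all numbers
# are assumptions, not the study's inputs): at equilibrium the break
# inflow equals the drain outflow Q = Cd * A * sqrt(2 * g * h), so
# h = (Q / (Cd * A))**2 / (2 * g).
g = 9.81       # gravitational acceleration, m/s^2
Q_in = 0.045   # assumed break inflow, m^3/s
Cd = 0.6       # assumed discharge coefficient (vortexing/air intake lower it)
A = 0.02       # assumed drain cross-section, m^2

h = (Q_in / (Cd * A)) ** 2 / (2.0 * g)
print(f"equilibrium flood level: {h:.2f} m")   # ~0.72 m for these inputs
```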

  1. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently in the field of scientific and technical calculation, the usefulness of supercomputers, represented by the CRAY-1, has been recognized, and they are utilized in various countries. The speed of supercomputers derives from their vector computation capability. Over the past six years, the authors investigated the adaptability of about 40 typical atomic energy codes to vector computation. Based on the results of this investigation, the adaptability of atomic energy codes to the vector computation capability of supercomputers, problems regarding its utilization, and future prospects are explained. The adaptability of individual calculation codes to vector computation depends largely on the algorithm and program structure used. The speedup achieved by pipeline vector systems, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety, and nuclear fusion are reported. The speedup factors for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)
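
    The kind of restructuring involved can be illustrated with a dependency-free inner loop; the sketch below (using NumPy as a modern stand-in for a vectorising compiler or pipeline vector unit) contrasts the scalar and vectorizable forms of the same operation:

```python
import numpy as np

# Illustration of the kind of loop that vectorises well on a pipelined
# vector unit: element-wise, with no loop-carried dependence. NumPy is
# used here only as a stand-in for a vectorising compiler/hardware.
def axpy_scalar(a, x, y):
    out = np.empty_like(y)
    for i in range(len(y)):     # scalar form: one element per iteration
        out[i] = a * x[i] + y[i]
    return out

def axpy_vector(a, x, y):
    return a * x + y            # whole-array form a vector pipeline can stream

x, y = np.random.rand(10**5), np.random.rand(10**5)
assert np.allclose(axpy_scalar(2.0, x, y), axpy_vector(2.0, x, y))
```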

  2. Adaptation or Resistance: a classification of responses to sea-level rise

    Science.gov (United States)

    Cooper, J. A.

    2016-02-01

    Societal responses to sea level rise and associated coastal change are apparently diverse in nature and motivation. Most are commonly referred to as 'adaptation'. Based on a review of current practice, however, it is argued that many of these responses do not involve adaptation but rather resist change. There are several instances where formerly adaptive initiatives involving human adaptability are being replaced by initiatives that resist change. A classification is presented that recognises a continuum of responses ranging from adaptation to resistance, depending upon the willingness to change human activities to accommodate environmental change. In many cases climate change adaptation resources are being used for projects that are purely resistant and which foreclose future adaptation options. It is argued that a more concise definition of adaptation is needed if coastal management is to move beyond the current position of holding the shoreline, other than in a few showcase examples.

  3. Mercure IV code application to the external dose computation from low and medium level wastes

    International Nuclear Information System (INIS)

    Tomassini, T.

    1985-01-01

    In the present work the external dose from low- and medium-level wastes is calculated using the MERCURE IV code. The code uses the Monte Carlo method to integrate multigroup line-of-sight attenuation kernels.
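
    As a hedged sketch of what a line-of-sight (point-kernel) attenuation calculation looks like (not the MERCURE IV implementation; the geometry, attenuation coefficient, and sampling scheme below are assumptions for illustration):

```python
import math, random

# Illustrative point-kernel estimate (not MERCURE IV itself): Monte Carlo
# integration of the line-of-sight kernel exp(-mu*r) / (4*pi*r^2) over a
# uniform cylindrical source in a homogeneous attenuating medium.
# Geometry, mu, and sample count are assumptions for illustration.
def los_kernel_mean(mu, R, H, detector, n=100_000):
    acc = 0.0
    for _ in range(n):
        r_src = R * math.sqrt(random.random())          # uniform over the disc
        phi = 2.0 * math.pi * random.random()
        x, y, z = r_src * math.cos(phi), r_src * math.sin(phi), H * random.random()
        dx, dy, dz = detector[0] - x, detector[1] - y, detector[2] - z
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        acc += math.exp(-mu * r) / (4.0 * math.pi * r * r)
    return acc / n   # multiply by total source strength to get uncollided flux

print(los_kernel_mean(mu=0.5, R=0.3, H=1.0, detector=(1.0, 0.0, 0.5)))
```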

  4. Subband coding of digital audio signals without loss of quality

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.; Breeuwer, Marcel; van de Waal, Robbert

    1989-01-01

    A subband coding system for high quality digital audio signals is described. To achieve low bit rates at a high quality level, it exploits the simultaneous masking effect of the human ear. It is shown how this effect can be used in an adaptive bit-allocation scheme. The proposed approach has been
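
    A minimal sketch of masking-driven adaptive bit allocation, in the spirit of the scheme described (the 6 dB-per-bit rule, greedy loop, and all numbers are assumptions, not the authors' algorithm):

```python
import numpy as np

# Hedged sketch of masking-driven bit allocation (not the authors'
# scheme): each subband earns bits while its energy still exceeds the
# assumed masking threshold, at roughly 6 dB of gain per extra bit.
def allocate_bits(signal_db, mask_db, total_bits):
    need = np.maximum(signal_db - mask_db, 0.0)   # signal-to-mask ratio, dB
    bits = np.zeros_like(need, dtype=int)
    for _ in range(total_bits):                   # greedy: neediest band first
        k = int(np.argmax(need))
        if need[k] <= 0.0:
            break
        bits[k] += 1
        need[k] -= 6.02                           # ~6 dB per quantiser bit
    return bits

print(allocate_bits(np.array([60.0, 48.0, 30.0]),
                    np.array([40.0, 42.0, 35.0]), total_bits=8))  # [4 1 0]
```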

  5. HYDROCOIN [HYDROlogic COde INtercomparison] Level 1: Benchmarking and verification test results with CFEST [Coupled Fluid, Energy, and Solute Transport] code: Draft report

    International Nuclear Information System (INIS)

    Yabusaki, S.; Cole, C.; Monti, A.M.; Gupta, S.K.

    1987-04-01

    Part of the safety analysis is evaluating groundwater flow through the repository and the host rock to the accessible environment by developing mathematical or analytical models and numerical computer codes describing the flow mechanisms. This need led to the establishment of an international project called HYDROCOIN (HYDROlogic COde INtercomparison) organized by the Swedish Nuclear Power Inspectorate, a forum for discussing techniques and strategies in subsurface hydrologic modeling. The major objective of the present effort, HYDROCOIN Level 1, is determining the numerical accuracy of the computer codes. The definition of each case includes the input parameters, the governing equations, the output specifications, and the format. The Coupled Fluid, Energy, and Solute Transport (CFEST) code was applied to solve cases 1, 2, 4, 5, and 7; the Finite Element Three-Dimensional Groundwater (FE3DGW) Flow Model was used to solve case 6. Case 3 has been ignored because unsaturated flow is not pertinent to SRP. This report presents the Level 1 results furnished by the project teams. The numerical accuracy of the codes is determined by (1) comparing the computational results with analytical solutions for cases that have analytical solutions (namely cases 1 and 4), and (2) intercomparing results from codes for cases which do not have analytical solutions (cases 2, 5, 6, and 7). Cases 1, 2, 6, and 7 relate to flow analyses, whereas cases 4 and 5 require nonlinear solutions. 7 refs., 71 figs., 9 tabs

  6. Understanding extreme sea levels for coastal impact and adaptation analysis

    Science.gov (United States)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Hinkel, J.; Dangendorf, S.; Slangen, A.

    2016-12-01

    Coastal impact and adaptation assessments require detailed knowledge on extreme sea levels, because increasing damage due to extreme events, such as storm surges and tropical cyclones, is one of the major consequences of sea level rise and climate change. In fact, the IPCC has highlighted in its AR4 report that "societal impacts of sea level change primarily occur via the extreme levels rather than as a direct consequence of mean sea level changes". Over the last few decades, substantial research efforts have been directed towards improved understanding of past and future mean sea level; different scenarios were developed with process-based or semi-empirical models and used for coastal impact assessments at various spatial scales to guide coastal management and adaptation efforts. The uncertainties in future sea level rise are typically accounted for by analyzing the impacts associated with a range of scenarios leading to a vertical displacement of the distribution of extreme sea-levels. And indeed most regional and global studies find little or no evidence for changes in storminess with climate change, although there is still low confidence in the results. However, and much more importantly, there is still a limited understanding of present-day extreme sea-levels which is largely ignored in most impact and adaptation analyses. The two key uncertainties stem from: (1) numerical models that are used to generate long time series of extreme sea-levels. The bias of these models varies spatially and can reach values much larger than the expected sea level rise; but it can be accounted for in most regions making use of in-situ measurements; (2) Statistical models used for determining present-day extreme sea-level exceedance probabilities. There is no universally accepted approach to obtain such values for flood risk assessments and while substantial research has explored inter-model uncertainties for mean sea level, we explore here, for the first time, inter

  7. QR-codes as a tool to increase physical activity level among school children during class hours

    DEFF Research Database (Denmark)

    Christensen, Jeanette Reffstrup; Kristensen, Allan; Bredahl, Thomas Viskum Gjelstrup

    QR-codes as a tool to increase physical activity level among school children during class hours. Introduction: Danish students are no longer fulfilling recommendations for everyday physical activity. Since August 2014, Danish students in public schools are therefore required to be physically active... The aim was to examine the students' physical activity level during class hours. Methods: A before-after study was used to examine 12 students' physical activity level, measured with pedometers for six lessons: three lessons of traditional teaching and three lessons where QR-codes were used for orienteering in the school area... Results: ...as old-fashioned. The students also felt positive about being physically active in teaching. Discussion and conclusion: QR-codes as a tool for teaching are usable for making students more physically active in teaching. The students were excited about using QR-codes and experienced good motivation...

  8. Operationalizing analysis of micro-level climate change vulnerability and adaptive capacity

    DEFF Research Database (Denmark)

    Jiao, Xi; Moinuddin, Hasan

    2016-01-01

    This paper explores vulnerability and adaptive capacity of rural communities in Southern Laos, where households are highly dependent on climate-sensitive natural resources and vulnerable to seasonal weather fluctuations. The speed and magnitude of climate-induced changes may seriously challenge their ability to adapt. Participatory group discussions and 271 household surveys in three villages highlight the current level of vulnerability and adaptive capacity towards climatic variability and risks. This paper visualizes three dimensions of the vulnerability framework at two levels using the Community Climate Vulnerability Index and household climate vulnerability cube. Results show that not only poor households are most at risk from climate change challenges, but also those better-off households highly dependent on specialized agricultural production are locally exposed to climate change risks...

  9. Adaptive Noise Model for Transform Domain Wyner-Ziv Video using Clustering of DCT Blocks

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    The noise model is one of the most important aspects influencing the coding performance of Distributed Video Coding. This paper proposes a novel noise model for Transform Domain Wyner-Ziv (TDWZ) video coding by using clustering of DCT blocks. The clustering algorithm takes advantage of the residual information of all frequency bands, iteratively classifies blocks into different categories, and estimates the noise parameter in each category. The experimental results show that the coding performance of the proposed cluster-level noise model is competitive with state-of-the-art coefficient-level noise modelling. Furthermore, the proposed cluster-level noise model is adaptively combined with a coefficient-level noise model to robustly improve the coding performance of the TDWZ video codec by up to 1.24 dB (by the Bjøntegaard metric) compared to the DISCOVER TDWZ video codec.
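
    A hedged sketch of the clustering idea (not the paper's exact algorithm; the 1-D k-means on block residual energies and the Laplacian fit alpha = sqrt(2/variance) are assumptions for illustration):

```python
import numpy as np

# Hedged sketch in the spirit of the paper (not the exact algorithm):
# group DCT blocks by residual energy with a 1-D k-means, then fit a
# Laplacian parameter alpha = sqrt(2 / variance) per cluster, using the
# mean block energy as the variance estimate of the zero-mean residual.
def cluster_noise_params(residual_energy, k=3, iters=20):
    e = np.asarray(residual_energy, dtype=float)
    centers = np.quantile(e, np.linspace(0.1, 0.9, k))   # spread initial centers
    for _ in range(iters):
        labels = np.argmin(np.abs(e[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = e[labels == j].mean()
    variances = np.array([e[labels == j].mean() if np.any(labels == j)
                          else e.mean() for j in range(k)])
    alphas = np.sqrt(2.0 / np.maximum(variances, 1e-9))  # Laplacian fit
    return labels, alphas

labels, alphas = cluster_noise_params(np.random.exponential(4.0, 600))
print(alphas)   # one noise parameter per cluster of blocks
```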

  10. On locality of Generalized Reed-Muller codes over the broadcast erasure channel

    KAUST Repository

    Alloum, Amira

    2016-07-28

    One-to-many communications are expected to be among the killer applications for the currently discussed 5G standard. The use of coding mechanisms impacts broadcasting standard quality, as coding is involved at several levels of the stack, and more specifically at the application layer, where rateless, LDPC, Reed-Solomon, and network coding schemes have been extensively studied, optimized, and standardized in the past. Beyond reusing, extending, or adapting existing application-layer packet coding mechanisms based on previous schemes designed for the foregoing LTE or other broadcasting standards, our purpose is to investigate the use of Generalized Reed-Muller codes and the value of their locality property in their progressive decoding for broadcast/multicast communication schemes with real-time video delivery. Our results are meant to bring insight into the use of locally decodable codes in broadcasting. © 2016 IEEE.

  11. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    Science.gov (United States)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme for data hiding directly in partially encrypted H.264/AVC videos is proposed, comprising three parts: selective encryption, data embedding, and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. A data hider then embeds the additional data into the partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique, without accessing the plaintext of the video content. Since bin-string substitution is carried out on residual coefficients of approximately the same magnitude, the quality of the decrypted video is satisfactory. The video file size is strictly preserved even after data embedding. To adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  12. Enhancement of combined heat and power economic dispatch using self adaptive real-coded genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Subbaraj, P. [Kalasalingam University, Srivilliputhur, Tamilnadu 626 190 (India); Rengaraj, R. [Electrical and Electronics Engineering, S.S.N. College of Engineering, Old Mahabalipuram Road, Thirupporur (T.K), Kalavakkam, Kancheepuram (Dist.) 603 110, Tamilnadu (India); Salivahanan, S. [S.S.N. College of Engineering, Old Mahabalipuram Road, Thirupporur (T.K), Kalavakkam, Kancheepuram (Dist.) 603 110, Tamilnadu (India)

    2009-06-15

    In this paper, a self-adaptive real-coded genetic algorithm (SARGA) is implemented to solve the combined heat and power economic dispatch (CHPED) problem. The self-adaptation is achieved by means of tournament selection along with simulated binary crossover (SBX). The selection process has a powerful exploration capability created by tournaments between two solutions: the better solution is chosen and placed in the mating pool, leading to better convergence and a reduced computational burden. The SARGA integrates a penalty-parameterless constraint-handling strategy and simultaneously handles equality and inequality constraints. Population diversity is introduced through the distribution index in the SBX operator to create better offspring. This leads to high diversity in the population, which increases the probability of reaching the global optimum and prevents premature convergence. The SARGA is applied to solve the CHPED problem with a bounded feasible operating region that has a large number of local minima. The numerical results demonstrate that the proposed method can find a solution towards the global optimum and compares favourably with other recent methods in terms of solution quality, constraint handling, and computation time. (author)
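
    For reference, the standard SBX operator (Deb and Agrawal) on a single real-valued gene can be sketched as follows; the distribution index eta controls how close children stay to their parents, which is the population-diversity lever mentioned above. This is a generic sketch, not claimed to be SARGA's full implementation:

```python
import random

# Standard simulated binary crossover (Deb & Agrawal) on one real-valued
# gene; a larger distribution index eta keeps children closer to their
# parents, which is how the distribution index controls diversity.
def sbx(p1, p2, eta=2.0):
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

print(sbx(10.0, 20.0))   # two children spread symmetrically about the parents
```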

  13. Improvement of Level-1 PSA computer code package -A study for nuclear safety improvement-

    International Nuclear Information System (INIS)

    Park, Chang Kyu; Kim, Tae Woon; Ha, Jae Joo; Han, Sang Hoon; Cho, Yeong Kyun; Jeong, Won Dae; Jang, Seung Cheol; Choi, Young; Seong, Tae Yong; Kang, Dae Il; Hwang, Mi Jeong; Choi, Seon Yeong; An, Kwang Il

    1994-07-01

    This year is the second year of the Government-sponsored Mid- and Long-Term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The Improvement of Level-1 PSA Computer Codes', is divided into three main activities: (1) methodology development in under-developed fields such as risk assessment technology for plant shutdown and external events, (2) development of a computer code package for Level-1 PSA, and (3) application of new technologies to reactor safety assessment. First, in the area of PSA methodology development, foreign PSA reports on shutdown and external events have been reviewed and various PSA methodologies have been compared. The Level-1 PSA code KIRAP and the CCF analysis code COCOA have been converted from KOS to Windows. A human reliability database has also been established this year. In the area of new technology applications, fuzzy set theory and entropy theory are used to estimate component life and to develop a new measure of uncertainty importance. Finally, in the field of applying PSA techniques to reactor regulation, a strategic study to develop the dynamic risk management tool PEPSI and the determination of inspection and test priorities for motor-operated valves based on risk importance worths have been carried out. (Author)

  14. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L. [Oak Ridge National Lab., TN (United States); Sartori, E. [OCDE/OECD NEA Data Bank, Issy-les-Moulineaux (France); Viedma, L.G. de [Consejo de Seguridad Nuclear, Madrid (Spain)

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  15. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    International Nuclear Information System (INIS)

    Kirk, B.L.; Sartori, E.; Viedma, L.G. de

    1997-01-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management

  16. Gene expression and adaptive noncoding changes during human evolution.

    Science.gov (United States)

    Babbitt, Courtney C; Haygood, Ralph; Nielsen, William J; Wray, Gregory A

    2017-06-05

    Despite evidence for adaptive changes in both gene expression and non-protein-coding, putatively regulatory regions of the genome during human evolution, the relationship between gene expression and adaptive changes in cis-regulatory regions remains unclear. Here we present new measurements of gene expression in five tissues of humans and chimpanzees, and use them to assess this relationship. We then compare our results with previous studies of adaptive noncoding changes, analyzing correlations at the level of gene ontology groups, in order to gain statistical power to detect correlations. Consistent with previous studies, we find little correlation between gene expression and adaptive noncoding changes at the level of individual genes; however, we do find significant correlations at the level of biological function ontology groups. The types of function include processes regulated by specific transcription factors, responses to genetic or chemical perturbations, and differentiation of cell types within the immune system. Among functional categories co-enriched with both differential expression and noncoding adaptation, prominent themes include cancer, particularly epithelial cancers, and neural development and function.

  17. Reduced-Rank Chip-Level MMSE Equalization for the 3G CDMA Forward Link with Code-Multiplexed Pilot

    Directory of Open Access Journals (Sweden)

    Goldstein J Scott

    2002-01-01

    Full Text Available This paper deals with synchronous direct-sequence code-division multiple access (CDMA) transmission using orthogonal channel codes in frequency-selective multipath, motivated by the forward link in 3G CDMA systems. The chip-level minimum mean square error (MMSE) estimate of the (multiuser) synchronous sum signal transmitted by the base, followed by a correlate and sum, has been shown to perform very well in saturated systems compared to a Rake receiver. In this paper, we present reduced-rank, chip-level MMSE estimation based on the multistage nested Wiener filter (MSNWF). We show that, for the case of a known channel, only a small number of stages of the MSNWF is needed to achieve near full-rank MSE performance over a practical signal-to-noise ratio (SNR) range. This holds true even for an edge-of-cell scenario, where two base stations are contributing near equal-power signals, as well as for the single base station case. We then utilize the code-multiplexed pilot channel to train the MSNWF coefficients and show that the adaptive MSNWF operating in a very low rank subspace performs slightly better than full-rank recursive least squares (RLS) and significantly better than least mean squares (LMS). An important advantage of the MSNWF is that it can be implemented in a lattice structure, which involves significantly less computation than RLS. We also present structured MMSE equalizers that exploit the estimate of the multipath arrival times and the underlying channel structure to project the data vector onto a much lower dimensional subspace. Specifically, due to the sparseness of high-speed CDMA multipath channels, the channel vector lies in the subspace spanned by a small number of columns of the pulse shaping filter convolution matrix. We demonstrate that the performance of these structured low-rank equalizers is much superior to that of unstructured equalizers in terms of convergence speed and error rates.

  18. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods are presented for designing protograph-based bit-interleaved coded modulation that is general and applies to any modulation. The general coding framework supports not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
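
    The second lifting stage can be illustrated with a hedged sketch (the base matrix, shift values, and lift size Z below are arbitrary examples, not from the patent): each edge of the intermediate protograph is replaced by a Z x Z circulant permutation of the identity:

```python
import numpy as np

# Hedged sketch of the second (circulant) lifting stage; the base matrix,
# shift values, and lift size Z below are arbitrary examples. Each 1 in
# the protomatrix becomes a Z x Z cyclic shift of the identity, each 0 a
# Z x Z zero block, yielding the full parity-check matrix.
def lift(base, shifts, Z):
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(
                    np.eye(Z, dtype=int), shifts[i, j], axis=1)
    return H

base = np.array([[1, 1, 0],
                 [0, 1, 1]])
shifts = np.array([[0, 1, 0],
                   [0, 2, 3]])
print(lift(base, shifts, Z=4).shape)   # (8, 12) parity-check matrix
```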

  19. The determinants of vulnerability and adaptive capacity at the national level and the implications for adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, N.; Adger, W.N.; Kelly, P.M. [University of East Anglia, Norwich (United Kingdom). School of Environmental Sciences

    2005-07-01

    We present a set of indicators of vulnerability and capacity to adapt to climate variability, and by extension climate change, derived using a novel empirical analysis of data aggregated at the national level on a decadal timescale. The analysis is based on a conceptual framework in which risk is viewed in terms of outcome, and is a function of physically defined climate hazards and socially constructed vulnerability. Climate outcomes are represented by mortality from climate-related disasters, using the emergency events database data set; statistical relationships between mortality and a shortlist of potential proxies for vulnerability are used to identify key vulnerability indicators. We find that 11 key indicators exhibit a strong relationship with decadally aggregated mortality associated with climate-related disasters. Validation of indicators, relationships between vulnerability and adaptive capacity, and the sensitivity of subsequent vulnerability assessments to different sets of weightings are explored using expert judgement data collected through a focus group exercise. The data are used to provide a robust assessment of vulnerability to climate-related mortality at the national level, and represent an entry point to more detailed explorations of vulnerability and adaptive capacity. They indicate that the most vulnerable nations are those situated in sub-Saharan Africa and those that have recently experienced conflict. Adaptive capacity - one element of vulnerability - is associated predominantly with governance, civil and political rights, and literacy. (author)

  20. The determinants of vulnerability and adaptive capacity at the national level and the implications for adaptation

    International Nuclear Information System (INIS)

    Brooks, N.; Adger, W.N.; Kelly, P.M.

    2005-01-01

    We present a set of indicators of vulnerability and capacity to adapt to climate variability, and by extension climate change, derived using a novel empirical analysis of data aggregated at the national level on a decadal timescale. The analysis is based on a conceptual framework in which risk is viewed in terms of outcome, and is a function of physically defined climate hazards and socially constructed vulnerability. Climate outcomes are represented by mortality from climate-related disasters, using the emergency events database data set; statistical relationships between mortality and a shortlist of potential proxies for vulnerability are used to identify key vulnerability indicators. We find that 11 key indicators exhibit a strong relationship with decadally aggregated mortality associated with climate-related disasters. Validation of indicators, relationships between vulnerability and adaptive capacity, and the sensitivity of subsequent vulnerability assessments to different sets of weightings are explored using expert judgement data collected through a focus group exercise. The data are used to provide a robust assessment of vulnerability to climate-related mortality at the national level, and represent an entry point to more detailed explorations of vulnerability and adaptive capacity. They indicate that the most vulnerable nations are those situated in sub-Saharan Africa and those that have recently experienced conflict. Adaptive capacity - one element of vulnerability - is associated predominantly with governance, civil and political rights, and literacy. (author)

  1. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    International Nuclear Information System (INIS)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-01-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al.[Phys. Rev. D 78, 123524 (2008)] and Schmidt et al.[Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k∼20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  2. Computer codes for level 1 probabilistic safety assessment

    International Nuclear Information System (INIS)

    1990-06-01

    Probabilistic Safety Assessment (PSA) entails several laborious tasks suitable for computer code assistance. This guide identifies these tasks, presents guidelines for selecting and utilizing computer codes in the conduct of PSA tasks and for the use of PSA results in safety management, and provides information on available codes suggested or applied in performing PSA in nuclear power plants. The guidance is intended for use by nuclear power plant system engineers, safety and operating personnel, and regulators. Large efforts are being made today to provide PC-based software systems and processed PSA information in a form that enables their use as a safety management tool by overall nuclear power plant management. Guidelines on the characteristics of software needed by management to prepare software that meets their specific needs are also provided. Most of these computer codes are also applicable to PSA of other industrial facilities. The scope of this document is limited to computer codes used for the treatment of internal events. It does not address other codes available mainly for the analysis of external events (e.g., seismic analysis), flood analysis, and fire analysis. Codes discussed in the document are those used for probabilistic rather than phenomenological modelling. It should also be appreciated that these guidelines are not intended to lead the user to the selection of one specific code; they simply provide criteria for the selection. Refs and tabs

  3. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large-scale environments, which include video coding, transmission, and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution, transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. A method to reduce decoding complexity, suitable for system-on-chip implementation, is then proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance, and its inclusion in the VSN infrastructure provides an additional level of complexity control functionality.

  4. An Adaptive Coding Scheme For Effective Bandwidth And Power ...

    African Journals Online (AJOL)

    Codes for communication channels are in most cases chosen on the basis of the signal to noise ratio expected on a given transmission channel. The worst possible noise condition is normally assumed in the choice of appropriate codes such that a specified minimum error shall result during transmission on the channel.

  5. Code of conduct for scientists (abstract)

    International Nuclear Information System (INIS)

    Khurshid, S.J.

    2011-01-01

    The emergence of advanced technologies in the last three decades and extraordinary progress in our knowledge of the basic physical, chemical, and biological properties of living matter have offered tremendous benefits to human beings, but have simultaneously highlighted the need for greater awareness and responsibility on the part of scientists in the 21st century. A scientist is not born with ethics, nor is science ethically neutral; there are ethical dimensions to scientific work. There is a need to evolve an appropriate code of conduct for scientists working in every field of science. However, in considering the content, promulgation, and adoption of codes of conduct for scientists, a balance must be maintained between the freedom of scientists and some binding obligations on them in the form of codes of conduct. The use of good and safe laboratory procedures, whether codified by law or by common practice, must also be considered part of the moral duties of scientists. It is internationally agreed that a general code of conduct cannot be formulated for all scientists universally, but there should be a set of 'building blocks' aimed at establishing codes of conduct for scientists, whether as individual researchers or as those responsible for the direction, evaluation, and monitoring of scientific activities at the institutional or organizational level. (author)

  6. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    Science.gov (United States)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming a significant number of memory accesses. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program instead of the VLCTs. The decoded codeword can be obtained without any table lookup or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance than conventional CAVLC decoding approaches such as table lookup by sequential search, table lookup by binary search, Moon's method, and Kim's method.
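
    The paper's specific CAVLC algorithm is not reproduced here, but the general principle of computing a codeword instead of looking it up can be illustrated with the table-free Exp-Golomb decoder also used in H.264, where the value follows arithmetically from the leading-zero count:

```python
# Table-free decoding of an unsigned Exp-Golomb codeword (used elsewhere
# in H.264): the value is computed from the leading-zero count, so no
# VLC table or memory access is needed. Shown to illustrate the
# principle; it is not the paper's CAVLC-specific algorithm.
def decode_ue(bits):
    """bits: iterator over '0'/'1' characters; returns (value, bits_used)."""
    zeros = used = 0
    for b in bits:                     # count leading zeros up to first '1'
        used += 1
        if b == '1':
            break
        zeros += 1
    info = 0
    for _ in range(zeros):             # read 'zeros' suffix bits
        info = (info << 1) | int(next(bits))
        used += 1
    return (1 << zeros) - 1 + info, used

print(decode_ue(iter('00110')))        # codeword '00110' -> (5, 5)
```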

  7. Self-adaptive phosphor coating technology for wafer-level scale chip packaging

    International Nuclear Information System (INIS)

    Zhou Linsong; Rao Haibo; Wang Wei; Wan Xianlong; Liao Junyuan; Wang Xuemei; Zhou Da; Lei Qiaolin

    2013-01-01

    A new self-adaptive phosphor coating technology has been successfully developed, which adopts a slurry method combined with a self-exposure process. A phosphor suspension in a water-soluble photoresist is applied, exposed to the LED's own blue light, and developed to form a conformal phosphor coating that self-adapts to the angular distribution of blue-light intensity, giving better-performing spatial color uniformity. The self-adaptive phosphor coating technology has been successfully applied on the wafer surface to realize wafer-level scale conformal phosphor coating. The first-stage experiments show satisfying results and demonstrate the flexibility of the self-adaptive coating technology for WLSCP applications. (semiconductor devices)

  8. Approach to evaluating health level and adaptation possibilities in schoolchildren

    Directory of Open Access Journals (Sweden)

    O.V. Andrieieva

    2014-02-01

    Full Text Available Purpose: to substantiate the results of theoretical and practical investigations aimed at improving the health of students. Material: the study involved 187 children, including 103 boys and 84 girls aged 7-10 years. Results: a rapid assessment of physical health found that pupils of primary school age have an average level of functional state of the organism, with minimal resistance to risk factors (chronic non-infectious diseases, etc.). For the first time, a technique for determining the level of adaptation and reserve capacity of school students, proposed by Ukrainian hygienists, was used in physical culture and sports practice. Conclusions: the technique reveals strain in adaptation mechanisms corresponding to a prenosological state. It is proposed that Nordic walking, through the positive impact of its aerobic mode of energy supply on the body, can increase the reserve and adaptive capabilities of primary school students by improving their health, as well as address the problems of health formation and health care in the physical education of youth.

  9. Radiation transport code with adaptive Mesh Refinement: acceleration techniques and applications

    International Nuclear Information System (INIS)

    Velarde, Pedro; Garcia-Fernaandez, Carlos; Portillo, David; Barbas, Alfonso

    2011-01-01

    We present a study of acceleration techniques for solving Sn radiation transport equations with Adaptive Mesh Refinement (AMR). Both DSA and TSA are considered, taking into account the influence of the interaction between different levels of the mesh structure and the order of approximation in angle. A hybrid method is proposed in order to obtain a better convergence rate and lower computing times. Some examples relevant to ICF and X-ray secondary sources are presented. (author)

  10. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    International Nuclear Information System (INIS)

    McGrail, B.P.; Mahoney, L.A.

    1995-10-01

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of how well the feature sets implemented in each code match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest-ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for evaluation of land disposal sites.

  11. Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Budzien, Joanne Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ferguson, Jim Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Harwell, Megan Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hickmann, Kyle Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Israel, Daniel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Magrogan, William Richard III [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Singleton, Jr., Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Srinivasan, Gowri [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Walter, Jr, John William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Woods, Charles Nathan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-26

    This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.

  12. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  13. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  14. Development of a computer code for low-and intermediate-level radioactive waste disposal safety assessment

    International Nuclear Information System (INIS)

    Park, J. W.; Kim, C. L.; Lee, E. Y.; Lee, Y. M.; Kang, C. H.; Zhou, W.; Kozak, M. W.

    2002-01-01

    A safety assessment code called SAGE (Safety Assessment Groundwater Evaluation) has been developed to describe post-closure radionuclide releases and potential radiological doses for low- and intermediate-level radioactive waste (LILW) disposal in an engineered vault facility in Korea. The conceptual model implemented in the code focuses on the release of radionuclides from a gradually degrading engineered barrier system to an underlying unsaturated zone, and thence to a saturated groundwater zone. The radionuclide transport equations are solved by spatially discretizing the disposal system into a series of compartments. Mass transfer between compartments occurs by diffusion/dispersion and advection. In all compartments, radionuclides decay either as a single-member chain or as multi-member chains. The biosphere is represented as a set of steady-state, radionuclide-specific pathway dose conversion factors that are multiplied by the appropriate release rate from the far field for each pathway. The code can treat input parameters either deterministically or probabilistically. Parameter input is achieved through a user-friendly graphical user interface. An application is presented and compared against safety assessment results from other computer codes to benchmark the reliability of the code's system-level conceptual modeling.
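
    A minimal sketch of such a compartment model (not SAGE itself; the explicit Euler update, transfer rates, and three-compartment layout are assumptions for illustration):

```python
import numpy as np

# Minimal compartment-chain sketch (not SAGE itself): advective transfer
# down a chain of compartments plus first-order radioactive decay,
# advanced with explicit Euler. Rates and layout are assumptions.
def step(inventory, k_adv, lam, dt):
    """inventory: activity per compartment; k_adv[i]: transfer rate from
    compartment i to i+1 (the last one releases to the biosphere)."""
    dI = -k_adv * inventory - lam * inventory   # outflow plus decay
    dI[1:] += k_adv[:-1] * inventory[:-1]       # inflow from upstream
    return inventory + dt * dI

inv = np.array([1e9, 0.0, 0.0])                 # vault, unsaturated, saturated
k_adv = np.array([0.01, 0.05, 0.002])           # 1/yr, assumed
for _ in range(1000):                           # 500 years at dt = 0.5 yr
    inv = step(inv, k_adv, lam=1e-3, dt=0.5)
print(inv)
```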

  15. Farm Level Adaptation to Climate Change: The Case of Farmer's in the Ethiopian Highlands

    Science.gov (United States)

    Gebrehiwot, Tagel; van der Veen, Anne

    2013-07-01

    In Ethiopia, climate change and associated risks are expected to have serious consequences for agriculture and food security. This in turn will seriously affect the welfare of the people, particularly the rural farmers whose main livelihood depends on rain-fed agriculture. The level of impact will mainly depend on awareness and the level of adaptation in response to the changing climate. It is thus important to understand the role of the different factors that influence farmers' adaptation, to ensure the development of appropriate policy measures and the design of successful development projects. This study examines farmers' perception of change in climatic attributes and the factors that influence farmers' choice of adaptation measures to climate change and variability. The estimated results from the climate change adaptation models indicate that the level of education, age, and wealth of the head of the household; access to credit and agricultural services; and information on climate and temperature all influence farmers' choice of adaptation measures. Moreover, lack of information on adaptation measures and lack of finance are seen as the main factors inhibiting adaptation to climate change. These conclusions were obtained with a multinomial logit model, employing the results from a survey of 400 smallholder farmers in three districts in Tigray, northern Ethiopia.
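
    For reference, the multinomial logit model used in such studies assigns each farmer i a probability of choosing adaptation option j from J alternatives, based on household and farm characteristics x_i:

```latex
% Multinomial logit choice probability: farmer i chooses adaptation
% option j among J alternatives; x_i are household/farm characteristics
% and the beta_j are estimated relative to a base category.
P(y_i = j \mid x_i) \;=\; \frac{\exp\left(x_i^{\prime}\beta_j\right)}
                               {\sum_{k=1}^{J}\exp\left(x_i^{\prime}\beta_k\right)}
```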

  16. Climate change vulnerability, adaptation and risk perceptions at farm level in Punjab, Pakistan.

    Science.gov (United States)

    Abid, Muhammad; Schilling, Janpeter; Scheffran, Jürgen; Zulfiqar, Farhad

    2016-03-15

    Pakistan is among the countries highly exposed and vulnerable to climate change. The country has experienced many severe floods, droughts and storms over the last decades. However, little research has focused on the investigation of vulnerability and adaptation to climate-related risks in Pakistan. Against this backdrop, this article investigates farm-level risk perceptions and different aspects of vulnerability to climate change, including sensitivity and adaptive capacity, at the farm level in Pakistan. We interviewed a total of 450 farming households using structured questionnaires in three districts of the Punjab province of Pakistan. This study identified a number of climate-related risks perceived by farm households, such as extreme temperature events, insect attacks, animal diseases and crop pests. Limited water availability, high levels of poverty and a weak role of local government in providing proper infrastructure were the factors that make farmers more sensitive to climate-related risks. Uncertainty or reductions in crop and livestock yields, changed cropping calendars and water shortage were the major adverse impacts of climate-related risks reported by farmers in the study districts. Better crop production was reported as the only positive effect. Further, this study identified a number of farm-level adaptation methods employed by farm households, including changes in crop variety, crop types, planting dates and input mix, depending upon the nature of the climate-related risks. Lack of resources, limited information, and lack of finances and institutional support were some of the constraints that limit the adaptive capacity of farm households. This study also reveals a positive role of cooperation and a negative role of conflict in the adaptation process. The study suggests addressing the constraints to adaptation and improving farm-level cooperation through extended outreach and distribution of institutional services, particularly climate-specific farm advisory

  17. MARS Code in Linux Environment

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been incorporated into the Linux system. The MARS code was originally developed based on RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5, could alone be applied for whole-NSSS system analysis. The 3-D module, developed based on COBRA-TF, could be applied for the analysis of the reactor core region, where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and a 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during a hypothetical coolant leakage accident could thereby be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating three-dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate for a PC cluster system where multiple CPUs are available. When parallelism is eventually incorporated into the MARS code, the MS Windows environment is not considered an optimum platform. The Linux environment, on the other hand, is generally being adopted as a preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been modified for adaptation to the Linux platform. For the initial code modification, the Windows-specific features have been removed from the code. Since the coupling code module CONTAIN is originally in the form of a dynamic load library (DLL) in the Windows system, a similar adaptation method

  18. MARS Code in Linux Environment

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong

    2005-01-01

    The two-phase system analysis code MARS has been incorporated into the Linux system. The MARS code was originally developed based on RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5, could alone be applied for whole-NSSS system analysis. The 3-D module, developed based on COBRA-TF, could be applied for the analysis of the reactor core region, where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and a 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during a hypothetical coolant leakage accident could thereby be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating three-dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate for a PC cluster system where multiple CPUs are available. When parallelism is eventually incorporated into the MARS code, the MS Windows environment is not considered an optimum platform. The Linux environment, on the other hand, is generally being adopted as a preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been modified for adaptation to the Linux platform. For the initial code modification, the Windows-specific features have been removed from the code. Since the coupling code module CONTAIN is originally in the form of a dynamic load library (DLL) in the Windows system, a similar adaptation method

  19. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods, which rely on the unit normal vector, the Stabilized Conservative Level Set (SCLS) method uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure applies compressive flux and diffusive terms only in the direction normal to the interface, thus preserving the conservative level set properties, while away from the interface the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches, due to the need for finer resolution in the vicinity of the interface than in the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.
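
    For context, the reinitialization equation of the standard conservative level set method (Olsson and Kreiss) is reproduced below; in the SCLS formulation described above, the unit normal is replaced by a renormalization vector whose magnitude diminishes away from the interface:

```latex
% Reinitialization step of the standard conservative level set method
% (Olsson & Kreiss); phi is a smeared Heaviside and eps the interface
% width. SCLS replaces the unit normal \hat{n} below with a
% renormalization vector whose magnitude diminishes away from the
% interface.
\frac{\partial\phi}{\partial\tau}
  + \nabla\cdot\left(\phi\,(1-\phi)\,\hat{n}\right)
  = \varepsilon\,\nabla\cdot\left(\left(\nabla\phi\cdot\hat{n}\right)\hat{n}\right),
\qquad
\hat{n} = \frac{\nabla\phi_{0}}{\left|\nabla\phi_{0}\right|}
```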

  20. Visual Coding of Human Bodies: Perceptual Aftereffects Reveal Norm-Based, Opponent Coding of Body Identity

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.

    2013-01-01

    Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…

  1. Solutions to HYDROCOIN [Hydrologic Code Intercomparison] Level 1 problems using STOKES and PARTICLE (Cases 1,2,4,7)

    International Nuclear Information System (INIS)

    Gureghian, A.B.; Andrews, A.; Steidl, S.B.; Brandstetter, A.

    1987-10-01

    HYDROCOIN (Hydrologic Code Intercomparison) Level 1 benchmark problems are solved using the finite element ground-water flow code STOKES and the pathline generating code PARTICLE developed for the Office of Crystalline Repository Development (OCRD). The objective of the Level 1 benchmark problems is to verify the numerical accuracy of ground-water flow codes by intercomparison of their results with analytical solutions and other numerical computer codes. Seven test cases were proposed for Level 1 to the Swedish Nuclear Power Inspectorate, the managing participant of HYDROCOIN. Cases 1, 2, 4, and 7 were selected by OCRD because of their appropriateness to the nature of crystalline repository hydrologic performance. The background relevance, conceptual model, and assumptions of each case are presented. The governing equations, boundary conditions, input parameters, and the solution schemes applied to each case are discussed. The results are shown in graphic and tabular form with concluding remarks. The results demonstrate the two-dimensional verification of STOKES and PARTICLE. 5 refs., 61 figs., 30 tabs.

  2. L-type calcium channels refine the neural population code of sound level

    Science.gov (United States)

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  3. Flicker Adaptation of Low-Level Cortical Visual Neurons Contributes to Temporal Dilation

    Science.gov (United States)

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Several seconds of adaptation to a flickered stimulus causes a subsequent brief static stimulus to appear longer in duration. Nonsensory factors, such as increased arousal and attention, have been thought to mediate this flicker-based temporal-dilation aftereffect. In this study, we provide evidence that adaptation of low-level cortical visual…

  4. CESARR V.2 manual: Computer code for the evaluation of surface storage of low and medium level radioactive waste

    International Nuclear Information System (INIS)

    Moya Rivera, J.A.; Bolado Lavin, R.

    1997-01-01

    CESARR (Code for the Safety Evaluation of Low- and Medium-Level Radioactive Waste Storage) was developed for probabilistic safety evaluations of low- and medium-level radioactive waste storage facilities.

  5. A Tough Call: Mitigating Advanced Code-Reuse Attacks at the Binary Level

    NARCIS (Netherlands)

    Veen, Victor Van Der; Goktas, Enes; Contag, Moritz; Pawlowski, Andre; Chen, Xi; Rawat, Sanjay; Bos, Herbert; Holz, Thorsten; Athanasopoulos, Ilias; Giuffrida, Cristiano

    2016-01-01

    Current binary-level Control-Flow Integrity (CFI) techniques are weak in determining the set of valid targets for indirect control flow transfers on the forward edge. In particular, the lack of source code forces existing techniques to resort to a conservative address-taken policy that

  6. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization, or level of adaptation, of the canonical genetic code was measured taking into account the harmful consequences of point mutations leading to the replacement of one amino acid by another. There are two basic approaches to measuring the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and, at the same time, to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum under both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the
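
    As a rough illustration of the engineering approach described above, the following Python sketch uses a small genetic algorithm to search for codon-to-amino-acid assignments that minimize the mean squared change in an amino-acid property under single point mutations. The polarity values are toy numbers (not the polar-requirement scale used in such studies), and stop codons are ignored for brevity.

        import itertools, random

        BASES = "UCAG"
        CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]  # 64 codons

        # Toy amino-acid "polarity" values; real studies use measured scales.
        POLARITY = {aa: i * 0.5 for i, aa in enumerate("ACDEFGHIKLMNPQRSTVWY")}

        def neighbors(codon):
            """All codons reachable by a single point mutation."""
            for i, b in enumerate(codon):
                for nb in BASES:
                    if nb != b:
                        yield codon[:i] + nb + codon[i + 1:]

        def cost(code):
            """Mean squared polarity change over all single-mutation pairs."""
            d = [(POLARITY[code[c]] - POLARITY[code[n]]) ** 2
                 for c in CODONS for n in neighbors(c)]
            return sum(d) / len(d)

        def random_code():
            """Each amino acid appears at least once; the rest is random."""
            aas = list(POLARITY) + random.choices(list(POLARITY), k=64 - 20)
            random.shuffle(aas)
            return dict(zip(CODONS, aas))

        def mutate(code):
            """Swap the assignments of two codons (a toy 'reassignment')."""
            child = dict(code)
            a, b = random.sample(CODONS, 2)
            child[a], child[b] = child[b], child[a]
            return child

        random.seed(1)
        pop = [random_code() for _ in range(30)]
        for _ in range(200):                  # simple elitist GA
            pop.sort(key=cost)
            pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
        print("best cost found:", round(cost(min(pop, key=cost)), 3))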

  7. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Background: As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization, or level of adaptation, of the canonical genetic code was measured taking into account the harmful consequences of point mutations leading to the replacement of one amino acid by another. There are two basic approaches to measuring the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results: Here we used a genetic algorithm to search for better adapted hypothetical codes and, at the same time, to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum under both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions: Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the

  8. Adaptive EMG noise reduction in ECG signals using noise level approximation

    Science.gov (United States)

    Marouf, Mohamed; Saranovac, Lazar

    2017-12-01

    In this paper, the use of noise level approximation for adaptive electromyogram (EMG) noise reduction in electrocardiogram (ECG) signals is introduced. To achieve adequate adaptiveness, a translation-invariant noise level approximation is employed. The approximation takes the form of a guiding signal, extracted as an estimate of signal quality versus EMG noise. The noise reduction framework is based on a bank of low-pass filters, and adaptive noise reduction is achieved by selecting the appropriate filter with respect to the guiding signal, aiming at the best trade-off between the signal distortion caused by filtering and signal readability. For evaluation purposes, both real and artificial EMG noise records are added to ECG signals from the MIT-BIH Arrhythmia Database. First, a comparison with state-of-the-art methods is conducted to verify the performance of the proposed approach in terms of noise cancellation while preserving the QRS complex waves. Additionally, the signal-to-noise ratio improvement after the adaptive noise reduction is computed and presented for the proposed method. Finally, the impact of the adaptive noise reduction method on QRS complex detection is studied: the tested signals are delineated using a state-of-the-art method, and the QRS detection improvement for different SNRs is presented.
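
    The following Python sketch shows the filter-bank idea in miniature: a per-window noise estimate drives the choice of low-pass filter strength. The noise proxy (standard deviation of first differences) and the width mapping are illustrative choices, not the guiding-signal definition used in the paper.

        import numpy as np

        def moving_average(x, width):
            """Simple FIR low-pass filter: a boxcar of the given width."""
            return np.convolve(x, np.ones(width) / width, mode="same")

        def denoise_adaptive(ecg, fs=360, win=1.0):
            """Select a filter from the bank per window, based on noise level."""
            out = np.copy(ecg)
            step = int(win * fs)
            for start in range(0, len(ecg), step):
                seg = ecg[start:start + step]
                # crude high-frequency noise proxy: std of first differences
                noise = np.std(np.diff(seg)) if len(seg) > 1 else 0.0
                # stronger smoothing (wider boxcar) for noisier segments
                width = 3 if noise < 0.05 else 7 if noise < 0.15 else 15
                out[start:start + step] = moving_average(seg, width)
            return out

        t = np.linspace(0, 10, 3600)
        clean = np.sin(2 * np.pi * 1.2 * t)            # slow "ECG-like" wave
        noisy = clean + 0.2 * np.random.randn(t.size)  # EMG-like broadband noise
        print("rms error before/after:",
              round(np.std(noisy - clean), 3),
              round(np.std(denoise_adaptive(noisy) - clean), 3))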

  9. Comparison of energy conservation building codes of Iran, Turkey, Germany, China, ISO 9164 and EN 832

    International Nuclear Information System (INIS)

    Fayaz, Rima; Kari, Behrouz M.

    2009-01-01

    To improve the energy efficiency of buildings via compliance with regulation in Iran, Code No. 19 was devised in 1991. The code lacks high-level aims and objectives addressing the characteristics of Iranian buildings; although it has been revised, it is not completely implemented in practice and remains inefficient. As with any energy coding system, this code has to identify the right balance between the different energy variables for the Iranian climate and way of life. In order to assist improvements to the high-level objectives of Code 19, this code is compared with ISO 9164, EN 832, the German regulation, TS 825 of Turkey, and China's GB 50189, to understand how these have adapted international standards to national features. In order to test the appropriateness of Code 19, five case-study buildings in Iran are assessed against Code 19 as well as the Turkish standard TS 825, and the results are compared. The results demonstrate that Code 19 performs well in building-envelope calculations, but it needs improvement in the areas of ventilation and of gains from internal and solar sources. The paper concludes by offering suggestions for improving the code.

  10. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    Science.gov (United States)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on the self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu [Phys. Rev. D 78, 123524 (2008)] and Schmidt [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ~ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on fully nonlinear scales.

  11. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures, developed by the R&D Division of Electricité de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (material behaviour, large deformations, specific loads, unloading and loss-of-load-proportionality indicators, global algorithm, contact and friction); fracture mechanics (energy release rate G, energy release rate in thermo-elasto-plasticity, local 3D energy release rate, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  12. Effect of a care plan based on Roy adaptation model biological dimension on stroke patients' physiologic adaptation level.

    Science.gov (United States)

    Alimohammadi, Nasrollah; Maleki, Bibi; Shahriari, Mohsen; Chitsaz, Ahmad

    2015-01-01

    Stroke is a stressful event that brings functional, physical, psychological, social, and economic problems affecting different balances in patients' lives. With coping strategies, patients try to control these problems and return to their natural life. The aim of this study was to investigate the effect of a care plan based on the biological dimension of the Roy adaptation model on stroke patients' physiological adaptation level. This study is a clinical trial in which 50 patients affected by brain stroke, admitted to the neurology wards of Kashani and Alzahra hospitals in Isfahan in 2013, were randomly assigned to study and control groups. The Roy adaptation model care plan was administered in the biological dimension in the form of four sessions and telephone follow-ups for 1 month. The forms related to the Roy adaptation model were completed before and after the intervention in the two groups. Chi-square tests and t-tests were used to analyze the data with SPSS 18. After the intervention, there was a significant difference in the mean score of adaptation in the physiological dimension in the study group, and the comparison between the study and control groups showed a significant increase of 47.30 in the physiological dimension in the study group. A care plan based on the biological dimension of the Roy adaptation model can therefore increase adaptation in the physiological dimension in patients with stroke, and nurses can use this model to increase patients' adaptation.

  13. Farmers’ Perceptions about Adaptation Practices to Climate Change and Barriers to Adaptation: A Micro-Level Study in Ghana

    Directory of Open Access Journals (Sweden)

    Francis Ndamani

    2015-08-01

    This study analyzed the farmer-perceived importance of adaptation practices to climate change and examined the barriers that impede adaptation. Perceptions about causes and effects of long-term changes in climatic variables were also investigated. A total of 100 farmer households were randomly selected from four communities in the Lawra district of Ghana. Data were collected using semi-structured questionnaires and focus group discussions (FGDs). The results showed that 87% of respondents perceived a decrease in rainfall amount, while 82% perceived an increase in temperature over the past 10 years. The study revealed that adaptation was largely in response to dry spells and droughts (93.2%) rather than floods. About 67% of respondents have adjusted their farming activities in response to climate change. Empirical results of the weighted average index analysis showed that farmers ranked improved crop varieties and irrigation as the most important adaptation measures; it also revealed that farmers lacked the capacity to implement the highly ranked adaptation practices. The problem confrontation index analysis showed that unpredictable weather, high cost of farm inputs, limited access to weather information, and lack of water resources were the most critical barriers to adaptation. This analysis of adaptation practices and constraints at the farmer level will help facilitate government policy formulation and implementation.

  14. A Spanish version for the new ERA-EDTA coding system for primary renal disease

    Directory of Open Access Journals (Sweden)

    Óscar Zurriaga

    2015-07-01

    Conclusions: Translation and adaptation into Spanish represent an improvement that will help introduce and use the new coding system for primary renal disease (PRD), as it can help reduce the time devoted to coding as well as the period of adaptation of health workers to the new codes.

  15. A probabilistic assessment code system for derivation of clearance levels of radioactive materials. PASCLR user's manual

    International Nuclear Information System (INIS)

    Takahashi, Tomoyuki; Takeda, Seiji; Kimura, Hideo

    2001-01-01

    Some types of radioactive material arising from the development and utilization of nuclear energy need not be subject to regulatory control, because they can give rise to only trivial radiation hazards. The process of removing such materials from regulatory control is called 'clearance', and the corresponding radionuclide concentrations are called 'clearance levels'. In the Nuclear Safety Commission's discussion, a deterministic approach was applied to derive the clearance levels, i.e., the concentrations of radionuclides in a cleared material equivalent to an individual dose criterion. Realistic parameter values were selected where possible; where realistic values could not be defined, reasonably conservative values were selected. Additionally, stochastic approaches were used to validate the results obtained by the deterministic calculations. We have developed the computer code system PASCLR (Probabilistic Assessment code System for derivation of Clearance Levels of Radioactive materials), which uses the Monte Carlo technique to carry out these stochastic calculations. This report describes the structure of the PASCLR code and gives user information for its execution. (author)
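
    A minimal sketch of the kind of Monte Carlo calculation involved, assuming a single hypothetical exposure pathway whose dose per unit concentration is the product of a dose conversion factor, an occupancy fraction, and a shielding factor. The distributions and the placement of the 10 uSv/y criterion are illustrative, not PASCLR's actual models or data.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000
        DOSE_CRITERION = 10e-6     # Sv/y, the commonly used 10 uSv/y criterion

        # Hypothetical pathway: dose/concentration = dcf * occupancy * shielding
        dcf = rng.lognormal(np.log(2e-7), 0.5, N)     # Sv/y per Bq/g
        occupancy = rng.uniform(0.1, 0.5, N)          # fraction of the year
        shielding = rng.uniform(0.2, 1.0, N)          # attenuation factor

        dose_per_unit_conc = dcf * occupancy * shielding

        # Clearance level: the concentration at which the criterion is met;
        # using the 95th percentile of the sampled doses is conservative.
        cl_median = DOSE_CRITERION / np.median(dose_per_unit_conc)
        cl_p95 = DOSE_CRITERION / np.percentile(dose_per_unit_conc, 95)
        print(f"clearance level, median basis: {cl_median:.1f} Bq/g")
        print(f"clearance level, 95th-percentile basis: {cl_p95:.1f} Bq/g")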

  16. RBMK-LOCA-Analyses with the ATHLET-Code

    Energy Technology Data Exchange (ETDEWEB)

    Petry, A. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH Kurfuerstendamm, Berlin (Germany); Domoradov, A.; Finjakin, A. [Research and Development Institute of Power Engineering, Moscow (Russian Federation)

    1995-09-01

    The scientific-technical cooperation between Germany and Russia includes the adaptation of several German codes to the Russian-designed RBMK reactor. One part of this cooperation is the adaptation of the thermal-hydraulic code ATHLET (Analyses of the Thermal-Hydraulics of LEaks and Transients) to RBMK-specific safety problems. This paper contains a short description of the RBMK-1000 reactor circuit, and the main features of the thermal-hydraulic code ATHLET are presented. The main assumptions of the ATHLET-RBMK model are discussed. As an example application, the results of test calculations concerning a guillotine-type rupture of a distribution group header are presented and discussed, and the general analysis conditions are described. A comparison with corresponding RELAP calculations is given. The paper gives an overview of the problems posed by, and experience gained from, the application of Western best-estimate codes to RBMK calculations.

  17. Adaptive colour contrast coding in the salamander retina efficiently matches natural scene statistics.

    Directory of Open Access Journals (Sweden)

    Genadiy Vasserman

    The visual system continually adjusts its sensitivity to the statistical properties of the environment through an adaptation process that starts in the retina. Colour perception and processing is commonly thought to occur mainly in high visual areas, and indeed most evidence for chromatic colour contrast adaptation comes from cortical studies. We show that colour contrast adaptation starts in the retina, where ganglion cells adjust their responses to the spectral properties of the environment. We demonstrate that the ganglion cells match their responses to red-blue stimulus combinations according to the relative contrast of each of the input channels, by rotating their functional response properties in colour space. Using measurements of the chromatic statistics of natural environments, we show that the retina balances inputs from the two (red and blue) stimulated colour channels, as would be expected from theoretically optimal behaviour. Our results suggest that colour is encoded in the retina based on the efficient processing of spectral information that matches spectral combinations in natural scenes at the colour-processing level.

  18. Changes in BOLD and ADC weighted imaging in acute hypoxia during sea-level and altitude adapted states

    DEFF Research Database (Denmark)

    Rostrup, Egill; Larsson, Henrik B.W.; Born, Alfred P.

    2005-01-01

    possible structural changes as measured by diffusion-weighted imaging. Eleven healthy sea-level residents were studied after 5 weeks of adaptation to high-altitude conditions at Chacaltaya, Bolivia (5260 m). The subjects were studied immediately after return to sea level in hypoxic and normoxic conditions, and the examinations repeated 6 months later after re-adaptation to sea-level conditions. The BOLD response, measured at 1.5 T, was severely reduced during acute hypoxia both in the altitude- and sea-level-adapted states (50% reduction during an average S(a)O(2) of 75%). On average, the BOLD response magnitude was 23... ...was slightly elevated in high altitude as compared to sea-level adaptation. It is concluded that hypoxia significantly diminishes the BOLD response, and the mechanisms underlying this finding are discussed. Furthermore, altitude adaptation may influence both the magnitude of the activation-related response...

  19. How to Track Adaptation to Climate Change: A Typology of Approaches for National-Level Application

    Directory of Open Access Journals (Sweden)

    James D. Ford

    2013-09-01

    The need to track climate change adaptation progress is being increasingly recognized, but our ability to do the tracking is constrained by the complex nature of adaptation and the absence of measurable outcomes or indicators by which to judge if and how adaptation is occurring. We developed a typology of approaches by which climate change adaptation can be tracked globally at a national level. On the one hand, outcome-based approaches directly measure adaptation progress and effectiveness with reference to avoided climate change impacts. However, given that full exposure to climate change impacts will not happen for decades, alternative approaches focus on developing indicators or proxies by which adaptation can be monitored. These include systematic measures of adaptation readiness, processes undertaken to advance adaptation, policies and programs implemented to adapt, and measures of the impacts of these policies and programs on changing vulnerability. While these approaches employ various methods and data sources, and identify different components of adaptation progress to track at the national level, they all seek to characterize the current status of adaptation by which progress over time can be monitored. However, there are significant challenges to operationalizing these approaches, including an absence of systematically collected data on adaptation actions and outcomes, underlying difficulties of defining what constitutes "adaptation", and a disconnect between the timescale over which adaptation plays out and the practical need for evaluation to inform policy. Given the development of new adaptation funding streams, it is imperative that tools for monitoring progress are developed and validated for identifying trends and gaps in adaptation response.

  20. Anti-voice adaptation suggests prototype-based coding of voice identity

    Directory of Open Access Journals (Sweden)

    Marianne Latinus

    2011-07-01

    We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices, then were tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice, but not for non-anti-voice, adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype.

  1. An improved method for storing and retrieving tabulated data in a scalar Monte Carlo code

    International Nuclear Information System (INIS)

    Hollenbach, D.F.; Reynolds, K.H.; Dodds, H.L.; Landers, N.F.; Petrie, L.M.

    1990-01-01

    The KENO-Va code is a production-level criticality safety code used to calculate the k-eff of a system. The code is stochastic in nature, using a Monte Carlo algorithm to track individual particles one at a time through the system. The advent of computers with vector processors has generated an interest in improving KENO-Va to take advantage of the potential speed-up associated with these new processors. Unfortunately, the original Monte Carlo algorithm and method of storing and retrieving cross-section data are not adaptable to vector processing. This paper discusses an alternate method for storing and retrieving data that not only is readily vectorizable but also improves the efficiency of the current scalar code.
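
    A sketch of a vectorizable storage layout of the general kind discussed: all nuclides' energy grids and cross sections live in flat 1-D arrays addressed through an offset table, so a whole batch of particles can be processed with vectorized searches instead of per-particle pointer chasing. The grids and values are made-up numbers, and this is an illustration of the layout idea, not the actual KENO-Va data structures.

        import numpy as np

        offsets = np.array([0, 5, 12])   # nuclide i occupies offsets[i]:offsets[i+1]
        energy = np.array([1e-5, 1e-2, 1.0, 1e2, 2e7,               # nuclide 0 grid
                           1e-5, 1e-3, 1e-1, 1.0, 10.0, 1e4, 2e7])  # nuclide 1 grid
        xs     = np.array([90.0, 30.0, 10.0, 4.0, 2.0,
                           50.0, 40.0, 20.0, 8.0, 5.0, 3.0, 1.0])

        def lookup(nuclide, e):
            """Linearly interpolated cross sections for a vector of energies."""
            lo, hi = offsets[nuclide], offsets[nuclide + 1]
            grid, vals = energy[lo:hi], xs[lo:hi]
            j = np.clip(np.searchsorted(grid, e) - 1, 0, len(grid) - 2)
            f = (e - grid[j]) / (grid[j + 1] - grid[j])
            return vals[j] + f * (vals[j + 1] - vals[j])

        particles = np.array([3e-3, 0.5, 25.0])     # energies of a particle batch
        print(lookup(1, particles))                 # one vectorized call for all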

  2. Atlas C++ Coding Standard Specification

    CERN Document Server

    Albrand, S; Barberis, D; Bosman, M; Jones, B; Stavrianakou, M; Arnault, C; Candlin, D; Candlin, R; Franck, E; Hansl-Kozanecka, Traudl; Malon, D; Qian, S; Quarrie, D; Schaffer, R D

    2001-01-01

    This document defines the ATLAS C++ coding standard, that should be adhered to when writing C++ code. It has been adapted from the original "PST Coding Standard" document (http://pst.cern.ch/HandBookWorkBook/Handbook/Programming/programming.html) CERN-UCO/1999/207. The "ATLAS standard" comprises modifications, further justification and examples for some of the rules in the original PST document. All changes were discussed in the ATLAS Offline Software Quality Control Group and feedback from the collaboration was taken into account in the "current" version.

  3. POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs

    International Nuclear Information System (INIS)

    Hardie, R.W.

    1982-02-01

    POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case
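
    The core of any such calculation is the levelized-cost ratio: discounted life-cycle costs divided by discounted life-cycle energy. The sketch below shows a minimal version with constant annual charges; POPCYCLE's actual treatment (taxes, escalation, fuel-cycle detail) is more elaborate, and the plant numbers are illustrative.

        def levelized_cost(capital, annual_om, annual_fuel, annual_mwh,
                           rate=0.05, years=30):
            """Levelized power cost: discounted costs over discounted energy."""
            disc = lambda t: 1.0 / (1.0 + rate) ** t
            costs = capital + sum((annual_om + annual_fuel) * disc(t)
                                  for t in range(1, years + 1))
            energy = sum(annual_mwh * disc(t) for t in range(1, years + 1))
            return costs / energy   # $/MWh

        # e.g. a 1000-MW plant at 85% capacity factor (made-up cost figures)
        mwh_per_year = 1000 * 8760 * 0.85
        print(round(levelized_cost(3.0e9, 9.0e7, 5.0e7, mwh_per_year), 2), "$/MWh")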

  4. Genetic adaptability of durum wheat to salinity level at germination ...

    African Journals Online (AJOL)

    Administrator

    2011-05-23

    May 23, 2011 ... Key words: Durum wheat, genetic adaptability, salinity level. ... tolerance of a crop proves the first way to overcome the limitation of crops ... Analysis of variance using GLM procedures (SAS, 1990) was used ... Additive, dominance and environmental variance components were ..... Breeding for stability of.

  5. Nevada Administrative Code for Special Education Programs.

    Science.gov (United States)

    Nevada State Dept. of Education, Carson City. Special Education Branch.

    This document presents excerpts from Chapter 388 of the Nevada Administrative Code, which concerns definitions, eligibility, and programs for students who are disabled or gifted/talented. The first section gathers together 36 relevant definitions from the Code for such concepts as "adaptive behavior," "autism," "gifted and…

  6. Integrated Tiger Series of electron/photon Monte Carlo transport codes: a user's guide for use on IBM mainframes

    International Nuclear Information System (INIS)

    Kirk, B.L.

    1985-12-01

    The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes: the standard codes, TIGER, CYLTRAN, and ACCEPT; the P-codes, TIGERP, CYLTRANP, and ACCEPTP; and the M-codes, ACCEPTM and CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system, using the VS Fortran Level 1.3.1 compiler.

  7. Micro-Level Adaptation, Macro-Level Selection, and the Dynamics of Market Partitioning.

    Science.gov (United States)

    García-Díaz, César; van Witteloostuijn, Arjen; Péli, Gábor

    2015-01-01

    This paper provides a micro-foundation for dual market structure formation through partitioning processes in marketplaces by developing a computational model of interacting economic agents. We propose an agent-based modeling approach, where firms are adaptive and profit-seeking agents entering into and exiting from the market according to their (lack of) profitability. Our firms are characterized by large and small sunk costs, respectively. They locate their offerings along a unimodal demand distribution over a one-dimensional product variety, with the distribution peak constituting the center and the tails standing for the peripheries. We found that large firms may first advance toward the most abundant demand spot, the market center, and release peripheral positions as predicted by extant dual market explanations. However, we also observed that large firms may then move back toward the market fringes to reduce competitive niche overlap in the center, triggering nonlinear resource occupation behavior. Novel results indicate that resource release dynamics depend on firm-level adaptive capabilities, and that a minimum scale of production for low sunk cost firms is key to the formation of the dual structure.
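
    A minimal agent-based caricature of the partitioning dynamics described above: firms occupy positions on a one-dimensional product space with peak demand at the center, profit falls with local crowding, unprofitable firms exit, and random entrants appear. All parameters and functional forms are illustrative and far simpler than the paper's model.

        import numpy as np

        rng = np.random.default_rng(2)

        def demand(pos):
            return 1.0 - np.abs(pos)            # unimodal demand, peak at pos = 0

        def step(firms, crowding=0.25, exit_profit=0.05, n_entrants=2):
            """One round: compute profits, drop losers, add random entrants."""
            profits = np.array([demand(p) - crowding * np.sum(np.abs(firms - p) < 0.1)
                                for p in firms])
            survivors = firms[profits > exit_profit]
            return np.concatenate([survivors, rng.uniform(-1, 1, n_entrants)])

        firms = rng.uniform(-1, 1, 10)
        for _ in range(300):
            firms = step(firms)
        print("surviving firms:", firms.size,
              "| share near center (|pos| < 0.3):",
              round(float(np.mean(np.abs(firms) < 0.3)), 2))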

  8. Power Control and Coding Formulation for State Estimation with Wireless Sensors

    DEFF Research Database (Denmark)

    Quevedo, Daniel; Østergaard, Jan; Ahlen, Anders

    2014-01-01

    Technological advances made wireless sensors cheap and reliable enough to be brought into industrial use. A major challenge arises from the fact that wireless channels introduce random packet dropouts. Power control and coding are key enabling technologies in wireless communications to ensure... ...efficient communication. In this paper, we examine the role of power control and coding for Kalman filtering over wireless correlated channels. Two estimation architectures are considered; initially, the sensors send their measurements directly to a single gateway (GW). Next, wireless relay nodes provide additional links. The GW decides on the coding scheme and the transmitter power levels of the wireless nodes. The decision process is carried out online and adapts to varying channel conditions to improve the tradeoff between state estimation accuracy and energy expenditure. In combination with predictive...
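
    A scalar sketch of the underlying trade-off, assuming a Bernoulli dropout channel whose success probability grows with transmit power: at each step the controller picks the power level minimizing expected posterior variance plus an energy penalty, and the Kalman filter simply predicts when the packet is lost. The channel model, cost weight, and power levels are all illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        a, q, r = 0.95, 0.1, 0.2        # x' = a*x + w,  y = x + v  (scalar system)

        def arrival_prob(power):
            return 1.0 - np.exp(-2.0 * power)    # more power, fewer dropouts

        def choose_power(p_pred, levels=(0.1, 0.5, 1.0), weight=0.5):
            """Minimize expected posterior variance plus an energy penalty."""
            def expected_cost(u):
                lam = arrival_prob(u)
                p_post = lam * (p_pred * r / (p_pred + r)) + (1 - lam) * p_pred
                return p_post + weight * u
            return min(levels, key=expected_cost)

        x, xhat, p = 0.0, 0.0, 1.0
        for _ in range(50):
            x = a * x + rng.normal(0, np.sqrt(q))
            p_pred = a * a * p + q
            u = choose_power(p_pred)
            if rng.random() < arrival_prob(u):   # packet arrived: full update
                y = x + rng.normal(0, np.sqrt(r))
                gain = p_pred / (p_pred + r)
                xhat = a * xhat + gain * (y - a * xhat)
                p = (1.0 - gain) * p_pred
            else:                                # dropout: prediction only
                xhat, p = a * xhat, p_pred
        print("final error covariance:", round(p, 4))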

  9. The adaptive collision source method for discrete ordinates radiation transport

    International Nuclear Information System (INIS)

    Walters, William J.; Haghighat, Alireza

    2017-01-01

    Highlights: • A new adaptive quadrature method to solve the discrete ordinates transport equation. • The adaptive collision source (ACS) method splits the flux into nth-collided components. • The uncollided flux requires a high quadrature order; the required order decreases with the number of collisions. • ACS automatically applies an appropriate quadrature order to each collided component. • The adaptive quadrature is 1.5–4 times more efficient than uniform quadrature. Abstract: A novel collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained, with potentially a different quadrature order used for each. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This method allows for an optimal use of processing power, by using a high-order quadrature for the first iterations that need it, before shifting to lower-order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and is referred to as the adaptive collision source (ACS) method. The ACS methodology has been implemented in the 3-D, parallel, multigroup discrete ordinates code TITAN. The code was tested on several simple and complex fixed-source problems. The ACS implementation in TITAN has shown a reduction in computation time by a factor of 1.5–4 on the fixed-source test problems, for the same desired level of accuracy, as compared to the standard TITAN code.
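
    The driver of the method is that progressively collided flux components are angularly smoother and therefore need lower quadrature orders. The sketch below illustrates just that ingredient: for a family of increasingly smooth angular shapes (a cartoon of post-collision fluxes, not a transport solve), it finds the lowest Gauss-Legendre order that integrates each shape to a fixed tolerance.

        import numpy as np
        from numpy.polynomial.legendre import leggauss

        def integrate(f, order):
            """Gauss-Legendre quadrature of f(mu) over mu in [-1, 1]."""
            mu, w = leggauss(order)
            return np.sum(w * f(mu))

        def min_order(f, tol=1e-6, reference_order=200):
            """Lowest order matching a high-order reference within tol."""
            ref = integrate(f, reference_order)
            for n in range(2, reference_order):
                if abs(integrate(f, n) - ref) <= tol * abs(ref):
                    return n
            return reference_order

        # Sharpness drops with each collision as scattering isotropizes the flux.
        for n_coll, sharpness in enumerate([50.0, 10.0, 2.0, 0.5]):
            f = lambda mu, k=sharpness: np.exp(-k * (1.0 - mu))
            print(f"collision {n_coll}: quadrature order needed = {min_order(f)}")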

  10. Analyzing extreme sea levels for broad-scale impact and adaptation studies

    Science.gov (United States)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A.

    2017-12-01

    Coastal impact and adaptation assessments require detailed knowledge on extreme sea levels (ESL), because increasing damage due to extreme events is one of the major consequences of sea-level rise (SLR) and climate change. Over the last few decades, substantial research efforts have been directed towards improved understanding of past and future SLR; different scenarios were developed with process-based or semi-empirical models and used for coastal impact studies at various temporal and spatial scales to guide coastal management and adaptation efforts. Uncertainties in future SLR are typically accounted for by analyzing the impacts associated with a range of scenarios and model ensembles. ESL distributions are then displaced vertically according to the SLR scenarios under the inherent assumption that we have perfect knowledge on the statistics of extremes. However, there is still a limited understanding of present-day ESL which is largely ignored in most impact and adaptation analyses. The two key uncertainties stem from: (1) numerical models that are used to generate long time series of storm surge water levels, and (2) statistical models used for determining present-day ESL exceedance probabilities. There is no universally accepted approach to obtain such values for broad-scale flood risk assessments and while substantial research has explored SLR uncertainties, we quantify, for the first time globally, key uncertainties in ESL estimates. We find that contemporary ESL uncertainties exceed those from SLR projections and, assuming that we meet the Paris agreement, the projected SLR itself by the end of the century. Our results highlight the necessity to further improve our understanding of uncertainties in ESL estimates through (1) continued improvement of numerical and statistical models to simulate and analyze coastal water levels and (2) exploit the rich observational database and continue data archeology to obtain longer time series and remove model bias
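
    A sketch of the statistical-model side of the problem: fit a generalized extreme value (GEV) distribution to annual maxima, read off a 100-year return level, and then displace it vertically by SLR scenarios, which is exactly the common practice whose present-day uncertainty is questioned above. The synthetic data and scenario values are illustrative; real assessments use long tide-gauge records.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(7)
        # Stand-in "observed" annual maximum sea levels (m)
        annual_maxima = genextreme.rvs(c=-0.1, loc=2.0, scale=0.15,
                                       size=60, random_state=rng)

        c, loc, scale = genextreme.fit(annual_maxima)
        rl100 = genextreme.isf(1.0 / 100.0, c, loc=loc, scale=scale)
        print(f"present-day 100-yr return level: {rl100:.2f} m")

        # Common practice: shift the whole ESL distribution up by the SLR scenario
        for slr in (0.3, 0.5, 1.0):
            print(f"  +{slr:.1f} m SLR scenario -> {rl100 + slr:.2f} m")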

  11. Scoping Adaptation Needs for Smallholders in the Brazilian Amazon: A Municipal Level Case Study

    Directory of Open Access Journals (Sweden)

    Osuna Vanesa Rodríguez

    2014-05-01

    Over the past decade, several climate extreme events have caused considerable economic damage and hardship in the Brazilian Amazon region, especially for small-scale producers. Based on household surveys and focus group interviews in the Municipality of Alenquer, as well as secondary data analyses and a literature review at the regional level, this study seeks to assess rural small-scale producers' vulnerability to climate and non-climate related shocks and identify entry points for government action to support adaptation at the local level. In our case-study area, small-scale producers with similar wealth, self-sufficiency, and resource-use specialisation levels exhibited stark variation in levels of sensitivity and adaptive capacity to climate and non-climate related shocks. Our findings indicate that this variation is partly driven by cultural, historical, and environmental resource-use specialisation strategies, and partly by differences in local governance capacity and the level of social organisation. Emerging government-led initiatives to promote climate change adaptation in the region would benefit from taking these factors into account when designing local implementation strategies and priorities.

  12. The Nudo, Rollo, Melon codes and nodal correlations

    International Nuclear Information System (INIS)

    Perlado, J.M.; Aragones, J.M.; Minguez, E.; Pena, J.

    1975-01-01

    Nodal calculations are analyzed and their results checked against experimental data from the reference reactor. The Nudo code, which adapts experimental data to nodal calculations, is described. The Rollo and Melon codes are presented as improvements in the cycle-life calculation of albedos, mixing parameters, and nodal correlations. (author)

  13. Explicit control of adaptive automation under different levels of environmental stress.

    Science.gov (United States)

    Sauer, Jürgen; Kao, Chung-Shan; Wastell, David; Nickel, Peter

    2011-08-01

    This article examines the effectiveness of three different forms of explicit control of adaptive automation under low- and high-stress conditions, operationalised by different levels of noise. In total, 60 participants were assigned to one of three types of automation design (free, prompted, and forced choice). They were trained for 4 h on a highly automated simulation of a process control environment, called AutoCAMS, followed by a 4-h testing session under noise exposure and quiet conditions. Measures of performance, psychophysiology, and subjective reactions were taken. The results showed that all three modes of explicit control of adaptive automation were able to attenuate the negative effects of noise, partly because operators opted for higher levels of automation under noise. It also emerged that forced choice showed marginal advantages over the two other automation modes. Statement of Relevance: This work is relevant to the design of adaptive automation, since it emphasises the need to consider the impact of work-related stressors during task completion. In the presence of stressors, different forms of operator support through automation may be required than under more favourable working conditions.

  14. Adaptive nonlocal means filtering based on local noise level for CT denoising

    International Nuclear Information System (INIS)

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-01

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to the local noise level of CT images, and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). A graphics processing unit (GPU) implementation of the noise map calculation and the adaptive NLM filtering was developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower-dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, as demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the
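
    A minimal pixel-loop sketch of the central idea, with the NLM smoothing strength tied to a per-pixel noise map (h proportional to the local sigma). The 1.5 factor, window sizes, and test image are arbitrary illustrative choices; the paper's method additionally derives the CT noise map from a fan-beam forward projection.

        import numpy as np

        def adaptive_nlm(img, noise_map, patch=3, search=7):
            """Nonlocal means whose strength follows a per-pixel noise estimate."""
            pad_p, pad_s = patch // 2, search // 2
            padded = np.pad(img, pad_p + pad_s, mode="reflect")
            out = np.zeros_like(img)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    ci, cj = i + pad_p + pad_s, j + pad_p + pad_s
                    ref = padded[ci-pad_p:ci+pad_p+1, cj-pad_p:cj+pad_p+1]
                    h2 = (1.5 * noise_map[i, j]) ** 2 + 1e-12   # adaptive strength
                    wsum = vsum = 0.0
                    for di in range(-pad_s, pad_s + 1):
                        for dj in range(-pad_s, pad_s + 1):
                            ni, nj = ci + di, cj + dj
                            cand = padded[ni-pad_p:ni+pad_p+1, nj-pad_p:nj+pad_p+1]
                            w = np.exp(-np.mean((ref - cand) ** 2) / h2)
                            wsum += w
                            vsum += w * padded[ni, nj]
                    out[i, j] = vsum / wsum
            return out

        rng = np.random.default_rng(1)
        truth = np.zeros((24, 24)); truth[8:16, 8:16] = 1.0
        sigma = np.full(truth.shape, 0.15); sigma[:, 12:] = 0.4   # noisier right half
        noisy = truth + rng.normal(0.0, sigma)
        print("MAE before/after:",
              round(float(np.abs(noisy - truth).mean()), 3),
              round(float(np.abs(adaptive_nlm(noisy, sigma) - truth).mean()), 3))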

  15. Quantum computing with Majorana fermion codes

    Science.gov (United States)

    Litinski, Daniel; von Oppen, Felix

    2018-05-01

    We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.

  16. A software reconfigurable optical multiband UWB system utilizing a bit-loading combined with adaptive LDPC code rate scheme

    Science.gov (United States)

    He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin

    2017-07-01

    In this paper, an effective bit-loading algorithm combined with an adaptive LDPC code rate (ALCR) scheme is proposed and investigated in a software-reconfigurable multiband UWB-over-fiber system. To compensate the power fading and chromatic dispersion of the high-frequency multiband OFDM UWB signal transmitted over standard single-mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with a negative chirp parameter is utilized. A negative power penalty of -1 dB for the 128-QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10-3 after 50 km of SSMF transmission. The experimental results show that, compared to a fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128-QAM multiband OFDM UWB system after 100 km of SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km of SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
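
    A sketch of the two adaptive layers, assuming simple SNR thresholds per modulation order and a frame-level code rate picked from the average SNR margin. The threshold numbers, the 1.5 dB margin step, and the rate set are illustrative placeholders, not values from the paper.

        import numpy as np

        # (min SNR in dB, bits per subcarrier); illustrative thresholds
        BIT_THRESHOLDS = [(6.0, 2), (12.0, 4), (18.0, 6), (21.0, 7)]
        CODE_RATES = [0.50, 0.67, 0.75, 0.83]

        def load_bits(snr_db):
            """Per-subcarrier bit loading by SNR threshold."""
            bits = np.zeros_like(snr_db, dtype=int)
            for thr, b in BIT_THRESHOLDS:
                bits[snr_db >= thr] = b
            return bits

        def pick_rate(snr_db, bits):
            """One LDPC rate per frame from the mean SNR margin."""
            thr = np.zeros_like(snr_db)
            for t, b in BIT_THRESHOLDS:
                thr[bits == b] = t
            active = bits > 0
            margin = np.mean(snr_db[active] - thr[active])
            return CODE_RATES[int(np.clip(margin / 1.5, 0, len(CODE_RATES) - 1))]

        rng = np.random.default_rng(5)
        snr = 15 + 6 * rng.standard_normal(128)      # per-subcarrier SNRs (dB)
        bits = load_bits(snr)
        rate = pick_rate(snr, bits)
        print(f"payload bits/frame: {bits.sum()}, rate: {rate}, "
              f"info bits/frame: {bits.sum() * rate:.0f}")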

  17. Design strategies for irregularly adapting parallel applications

    International Nuclear Information System (INIS)

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Singh, Jaswinder Pal

    2000-01-01

    Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance of dynamically adapting computations. In this work, we examine two major classes of adaptive applications, under five competing programming methodologies and four leading parallel architectures. Results indicate that it is possible to achieve message-passing performance using shared-memory programming techniques by carefully following the same high-level strategies. Adaptive applications have computational workloads and communication patterns which change unpredictably at runtime, requiring dynamic load balancing to achieve scalable performance on parallel machines. Efficient parallel implementations of such adaptive applications are therefore a challenging task. This work examines the implementation of two typical adaptive applications, Dynamic Remeshing and N-Body, across various programming paradigms and architectural platforms. We compare several critical factors of the parallel code development, including performance, programmability, scalability, algorithmic development, and portability.

  18. An overview of the geochemical code MINTEQ: Applications to performance assessment for low-level wastes

    International Nuclear Information System (INIS)

    Peterson, S.R.; Opitz, B.E.; Graham, M.J.; Eary, L.E.

    1987-03-01

    The MINTEQ geochemical computer code, developed at the Pacific Northwest Laboratory (PNL), integrates many of the capabilities of its two immediate predecessors, MINEQL and WATEQ3. The MINTEQ code will be used in the Special Waste Form Lysimeters-Arid program to perform the calculations necessary to simulate (model) the contact of low-level waste solutions with heterogeneous sediments or the interaction of ground water with solidified low-level wastes. The code can calculate ion speciation/solubility, adsorption, oxidation-reduction, gas phase equilibria, and precipitation/dissolution of solid phases. Under the Special Waste Form Lysimeters-Arid program, the composition of effluents (leachates) from column and batch experiments, using laboratory-scale waste forms, will be used to develop a geochemical model of the interaction of ground water with commercial, solidified low-level wastes. The wastes being evaluated include power-reactor waste streams that have been solidified in cement, vinyl ester-styrene, and bitumen. The thermodynamic database for the code was upgraded preparatory to performing the geochemical modeling: thermodynamic data for solid phases and aqueous species containing Sb, Ce, Cs, or Co were added to the MINTEQ database, a need identified from the characterization of the waste streams. The geochemical model developed from the laboratory data will then be applied to predict the release from a field-lysimeter facility that contains full-scale waste samples. The contaminant concentrations migrating from the waste forms predicted using MINTEQ will be compared to the long-term lysimeter data. This comparison will constitute a partial field validation of the geochemical model.
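
    One elementary building block of such precipitation/dissolution calculations is the saturation index, SI = log10(IAP/Ksp): positive values suggest precipitation, negative values dissolution. The sketch below evaluates it for calcite from assumed ion activities; the activities are made-up, and a real geochemical code computes them from a full speciation solve.

        import math

        def saturation_index(activities, stoich, log_ksp):
            """SI = log10(ion activity product) - log10(Ksp)."""
            log_iap = sum(n * math.log10(activities[sp])
                          for sp, n in stoich.items())
            return log_iap - log_ksp

        calcite_stoich = {"Ca+2": 1, "CO3-2": 1}          # CaCO3 dissolution
        acts = {"Ca+2": 2.5e-4, "CO3-2": 1.2e-5}          # assumed activities
        si = saturation_index(acts, calcite_stoich, log_ksp=-8.48)
        print(f"SI(calcite) = {si:.2f} ->",
              "oversaturated" if si > 0 else "undersaturated")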

  19. Adaptive Relay Activation in the Network Coding Protocols

    DEFF Research Database (Denmark)

    Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Fitzek, Frank

    2015-01-01

    State-of-the-art network coding based routing protocols exploit link quality information to compute the transmission rate in the intermediate nodes. However, link quality discovery protocols are usually inaccurate and introduce overhead in wireless mesh networks. In this paper, we present

  20. Improvements to SOIL: An Eulerian hydrodynamics code

    International Nuclear Information System (INIS)

    Davis, C.G.

    1988-04-01

    Possible improvements to SOIL, an Eulerian hydrodynamics code that can do coupled radiation diffusion and strength of materials, are presented in this report. Our research is based on the inspection of other Eulerian codes and theoretical reports on hydrodynamics. Several conclusions from the present study suggest that some improvements are in order, such as second-order advection, adaptive meshes, and speedup of the code by vectorization and/or multitasking. 29 refs., 2 figs
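
    To make the "second-order advection" improvement concrete, the sketch below contrasts first-order upwind with a minmod-limited second-order scheme on 1-D linear advection of a square pulse; the second-order variant keeps the pulse much sharper. This is a generic illustration of the upgrade, not SOIL's actual scheme.

        import numpy as np

        def advect(u, c, steps, second_order=True):
            """1-D linear advection (speed > 0, periodic), CFL number c."""
            u = u.copy()
            for _ in range(steps):
                um, up = np.roll(u, 1), np.roll(u, -1)
                if second_order:
                    # minmod-limited slopes, then a Lax-Wendroff-type face value
                    slope = np.where(np.abs(u - um) < np.abs(up - u),
                                     u - um, up - u)
                    slope = np.where((u - um) * (up - u) > 0, slope, 0.0)
                    face = u + 0.5 * (1.0 - c) * slope
                else:
                    face = u                       # plain first-order upwind
                u = u - c * (face - np.roll(face, 1))
            return u

        x = np.linspace(0, 1, 200, endpoint=False)
        u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)     # square pulse
        for label, so in (("1st order", False), ("2nd order", True)):
            peak = advect(u0, c=0.5, steps=200, second_order=so).max()
            print(label, "peak after transport:", round(float(peak), 3))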

  1. Mistranslation: from adaptations to applications.

    Science.gov (United States)

    Hoffman, Kyle S; O'Donoghue, Patrick; Brandl, Christopher J

    2017-11-01

    The conservation of the genetic code indicates that there was a single origin, but like all genetic material, the cell's interpretation of the code is subject to evolutionary pressure. Single nucleotide variations in tRNA sequences can modulate codon assignments by altering codon-anticodon pairing or tRNA charging. Either can increase translation errors and even change the code. The frozen accident hypothesis argued that changes to the code would destabilize the proteome and reduce fitness. In studies of model organisms, mistranslation often acts as an adaptive response. These studies reveal evolutionarily conserved mechanisms that maintain proteostasis even during high rates of mistranslation. This review discusses the evolutionary basis of altered genetic codes, how mistranslation is identified, and how deviations from the genetic code are exploited. We revisit early discoveries of genetic code deviations and provide examples of adaptive mistranslation events in nature. Lastly, we highlight innovations in synthetic biology to expand the genetic code. The genetic code is still evolving. Mistranslation increases proteomic diversity that enables cells to survive stress conditions or suppress a deleterious allele. Genetic code variants have been identified by genome and metagenome sequence analyses, suppressor genetics, and biochemical characterization. Understanding the mechanisms of translation and genetic code deviations enables the design of new codes to produce novel proteins. Engineering the translation machinery and expanding the genetic code to incorporate non-canonical amino acids are valuable tools in synthetic biology that are impacting biomedical research. This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments", Guest Editors: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Adaptive mesh refinement in titanium

    Energy Technology Data Exchange (ETDEWEB)

    Colella, Phillip; Wen, Tong

    2005-01-21

    In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a software package applying the Adaptive Mesh Refinement methodology to numerical partial differential equations at the production level. In Chombo, the library approach to parallel programming is used (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Also provided are the counts of lines of code from both sides.
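
    For readers unfamiliar with the AMR workflow that both packages implement, a tiny sketch of one of its core steps, tagging cells for refinement where the solution has steep gradients, is given below in Python for brevity; neither Chombo's C++ API nor Titanium syntax is used, and the criterion and threshold are illustrative.

        import numpy as np

        def tag_for_refinement(u, threshold):
            """Flag cells whose undivided gradient exceeds a threshold."""
            gx = np.abs(np.diff(u, axis=0, prepend=u[:1, :]))
            gy = np.abs(np.diff(u, axis=1, prepend=u[:, :1]))
            return np.maximum(gx, gy) > threshold

        x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
        u = np.tanh((x**2 + y**2 - 0.25) / 0.05)      # sharp circular front
        tags = tag_for_refinement(u, 0.2)
        print("cells tagged for refinement:", int(tags.sum()), "of", tags.size)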

  3. Light and dark adaptation of visually perceived eye level controlled by visual pitch.

    Science.gov (United States)

    Matin, L; Li, W

    1995-01-01

    The pitch of a visual field systematically influences the elevation at which a monocularly viewing subject sets a target so as to appear at visually perceived eye level (VPEL). The deviation of the setting from true eye level averages approximately 0.6 times the angle of pitch while viewing a fully illuminated, complexly structured visual field, and is only slightly less with one or two pitched-from-vertical lines in a dark field (Matin & Li, 1994a). The deviation of VPEL from baseline following 20 min of dark adaptation reaches its full value less than 1 min after the onset of illumination of the pitched visual field, and decays exponentially in darkness following 5 min of exposure to visual pitch, either 30 degrees topbackward or 20 degrees topforward. The magnitude of the VPEL deviation measured with the dark-adapted right eye following left-eye exposure to pitch was 85% of the deviation that followed pitch exposure of the right eye itself. Time constants for VPEL decay to the dark baseline were the same for same-eye and cross-adaptation conditions, and averaged about 4 min. The time constants for decay during dark adaptation were somewhat smaller, and the change during dark adaptation extended over a 16% smaller range following the viewing of the dim two-line pitched-from-vertical stimulus than following the viewing of the complex field. The temporal course of light and dark adaptation of VPEL is virtually identical to the course of light and dark adaptation of the scotopic luminance threshold following exposure to the same luminance. We suggest that, following rod stimulation along particular retinal orientations by portions of the pitched visual field, the storage of the adaptation process resides in the retinogeniculate system and is manifested in the focal system as a change in luminance threshold and in the ambient system as a change in VPEL. The linear model previously developed to account for VPEL, which was based on the interaction of influences from the

  4. Adapting high-level language programs for parallel processing using data flow

    Science.gov (United States)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety, in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls treated as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.
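
    The central idea of deducing a data flow graph from a program expressed as subprogram calls can be sketched briefly. The subprogram names and declared inputs/outputs below are hypothetical, not EASY-FLOW syntax; the sketch derives the dependency graph and groups calls into stages that could run in parallel.

      # Build a data flow graph from subprogram calls and schedule them in
      # parallel stages (topological levels). Purely illustrative.
      calls = {
          # name: (inputs, outputs) -- hypothetical subprograms
          "read":   ((), ("raw",)),
          "filt_a": (("raw",), ("a",)),
          "filt_b": (("raw",), ("b",)),
          "merge":  (("a", "b"), ("result",)),
      }

      producers = {out: name for name, (_, outs) in calls.items() for out in outs}
      deps = {name: {producers[i] for i in ins} for name, (ins, _) in calls.items()}

      stages, done = [], set()
      while len(done) < len(calls):
          ready = [n for n in calls if n not in done and deps[n] <= done]
          stages.append(ready)   # all calls in one stage may run in parallel
          done.update(ready)

      print(stages)              # [['read'], ['filt_a', 'filt_b'], ['merge']]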

  5. Efficient Spike-Coding with Multiplicative Adaptation in a Spike Response Model

    NARCIS (Netherlands)

    S.M. Bohte (Sander)

    2012-01-01

    Neural adaptation underlies the ability of neurons to maximize encoded information over a wide dynamic range of input stimuli. While adaptation is an intrinsic feature of neuronal models like the Hodgkin-Huxley model, the challenge is to integrate adaptation in models of neural
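
    The multiplicative adaptation idea can be illustrated with a toy threshold neuron. This is a sketch only; the mechanism and all parameters are illustrative and should not be taken as Bohte's spike response model.

      import math

      def adaptive_spikes(signal, theta0=1.0, mult=1.5, tau=20.0):
          # Toy neuron: spike when the input exceeds an adaptive threshold.
          # After each spike the threshold is scaled by `mult` (multiplicative
          # adaptation) and relaxes back toward theta0 with time constant tau.
          theta, spikes = theta0, []
          for t, x in enumerate(signal):
              theta = theta0 + (theta - theta0) * math.exp(-1.0 / tau)
              if x >= theta:
                  spikes.append(t)
                  theta *= mult
          return spikes

      # A step increase in input: the spike rate adapts instead of saturating.
      sig = [0.5] * 50 + [3.0] * 50
      print(adaptive_spikes(sig))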

  6. Integrated assessment of farm level adaptation to climate change in agriculture

    NARCIS (Netherlands)

    Mandryk, M.

    2016-01-01

    The findings of the thesis allowed the assessment of plausible futures of agriculture in Flevoland around 2050, with insights into effective adaptation to climate change at different levels. Besides empirical findings, this thesis contributed methodologically to the portfolio of climate change impact and

  7. Level 1 Processing of MODIS Direct Broadcast Data From Terra

    Science.gov (United States)

    Lynnes, Christopher; Smith, Peter; Shotland, Larry; El-Ghazawi, Tarek; Zhu, Ming

    2000-01-01

    In February 2000, an effort was begun to adapt the Moderate Resolution Imaging Spectroradiometer (MODIS) Level 1 production software to process direct broadcast data. Three Level 1 algorithms have been adapted and packaged for release: Level 1A converts raw (Level 0) data into Hierarchical Data Format (HDF), unpacking packets into scans; Geolocation computes geographic information for the data points in the Level 1A product; and Level 1B computes geolocated, calibrated radiances from the Level 1A and Geolocation products. One useful aspect of adapting the production software is the ability to incorporate enhancements contributed by the MODIS Science Team. We have therefore tried to limit changes to the software. However, in order to process the data immediately on receipt, we have taken advantage of a branch in the geolocation software that reads orbit and attitude information from the packets themselves, rather than from the external ancillary files used in standard production. We have also verified that the algorithms can be run with smaller time increments (2.5 minutes) than the five-minute increments used in production. To make the code easier to build and run, we have simplified directories and build scripts. Also, dependencies on a commercial numerics library have been replaced by public domain software. A version of the adapted code has been released for Silicon Graphics machines running Irix. Perhaps owing to its origin in production, the software is rather CPU-intensive. Consequently, a port to Linux is underway, followed by a version to run on PC clusters, with an eventual goal of running in near-real-time (i.e., processing a ten-minute pass in ten minutes).
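
    The described chain (Level 0 packets, then Level 1A, Geolocation, and Level 1B) is a linear pipeline. The sketch below is purely schematic: the function bodies are placeholders standing in for the MODIS Science Team's production algorithms, and the data structures are invented for illustration.

      # Schematic MODIS direct-broadcast Level 1 chain; the real science
      # algorithms are far more involved than these placeholders.
      def level1a(level0_packets):
          # Unpack raw packets into scans (stored as HDF in production).
          return {"scans": level0_packets}

      def geolocate(l1a):
          # Compute per-pixel geographic information. For direct broadcast,
          # orbit/attitude data come from the packets themselves rather than
          # from external ancillary files.
          return {"lat_lon": "...", "source": "embedded orbit/attitude"}

      def level1b(l1a, geo):
          # Produce geolocated, calibrated radiances.
          return {"radiances": "...", "geo": geo}

      def process_granule(level0_packets):
          # The abstract notes the chain also runs with 2.5-minute
          # increments instead of the 5-minute production granules.
          l1a = level1a(level0_packets)
          geo = geolocate(l1a)
          return level1b(l1a, geo)

      product = process_granule(["packet0", "packet1"])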

  8. Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-

    International Nuclear Information System (INIS)

    Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo

    1995-07-01

    This year is the third year of the Government-sponsored mid- and long-term nuclear power technology development project. The scope of this subproject, titled 'The improvement of level-1 PSA computer codes', is divided into three main activities: (1) methodology development in underdeveloped fields such as risk assessment technology for plant shutdown and low-power situations, (2) computer code package development for level-1 PSA, and (3) application of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants are reviewed and plant operating states (POS) are defined. A sample core damage frequency is estimated for the over-draining event during RCS low water inventory, i.e., mid-loop operation. Human reliability analysis and thermal-hydraulic support analysis are identified as necessary to reduce uncertainty. Two design improvement alternatives are evaluated using PSA techniques for the mid-loop operation situation: one is the use of the containment spray system as a backup to the shutdown cooling system, and the other is the installation of two independent level indication systems. Procedure change is identified as the preferable option over hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC Windows environment. To improve efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs, 15 tabs, 59 refs. (Author)

  9. Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-

    Energy Technology Data Exchange (ETDEWEB)

    Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    This year is the third year of the Government-sponsored mid- and long-term nuclear power technology development project. The scope of this subproject, titled 'The improvement of level-1 PSA computer codes', is divided into three main activities: (1) methodology development in underdeveloped fields such as risk assessment technology for plant shutdown and low-power situations, (2) computer code package development for level-1 PSA, and (3) application of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants are reviewed and plant operating states (POS) are defined. A sample core damage frequency is estimated for the over-draining event during RCS low water inventory, i.e., mid-loop operation. Human reliability analysis and thermal-hydraulic support analysis are identified as necessary to reduce uncertainty. Two design improvement alternatives are evaluated using PSA techniques for the mid-loop operation situation: one is the use of the containment spray system as a backup to the shutdown cooling system, and the other is the installation of two independent level indication systems. Procedure change is identified as the preferable option over hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC Windows environment. To improve efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs, 15 tabs, 59 refs. (Author).

  10. NRC model simulations in support of the hydrologic code intercomparison study (HYDROCOIN): Level 1-code verification

    International Nuclear Information System (INIS)

    1988-03-01

    HYDROCOIN is an international study for examining ground-water flow modeling strategies and their influence on safety assessments of geologic repositories for nuclear waste. This report summarizes only the combined NRC project teams' simulation efforts on the computer code bench-marking problems. The codes used to simulate these seven problems were SWIFT II, FEMWATER, UNSAT2M, USGS-3D, and TOUGH. In general, linear problems involving scalars such as hydraulic head were accurately simulated by both finite-difference and finite-element solution algorithms. Both types of codes produced accurate results even for complex geometries such as intersecting fractures. Difficulties were encountered in solving problems that involved nonlinear effects such as density-driven flow and unsaturated flow. In order to fully evaluate the accuracy of these codes, post-processing of results using particle tracking algorithms and calculated fluxes was examined. This proved very valuable by uncovering disagreements among code results even though the hydraulic-head solutions had been in agreement. 9 refs., 111 figs., 6 tabs

  11. Enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding for four-level holographic data storage systems

    Science.gov (United States)

    Kong, Gyuyeol; Choi, Sooyong

    2017-09-01

    An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gain from the four-ary modulation code, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, found through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation results show that the proposed four-ary modulation code achieves more than 1 dB gain over the conventional four-ary modulation code.
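
    Soft-decision Viterbi decoding, the decoding principle used above, can be shown on a small generic example. The sketch below decodes the classic rate-1/2, constraint-length-3 convolutional code (octal generators 7 and 5) with a squared-distance branch metric; it illustrates the technique only and is not the paper's 2/3 four-ary modulation code.

      import itertools

      def encode(bits, state=0):
          # Rate-1/2 convolutional encoder, generators 111 and 101 (binary).
          out = []
          for b in bits:
              s = ((state << 1) | b) & 0b111
              out += [bin(s & 0b111).count("1") & 1,
                      bin(s & 0b101).count("1") & 1]
              state = s & 0b011
          return out

      def viterbi_soft(received):
          # received: soft values, one per code bit (BPSK: 0 -> +1, 1 -> -1).
          metric, paths = {0: 0.0}, {0: []}
          for i in range(0, len(received), 2):
              r = received[i:i + 2]
              new_metric, new_paths = {}, {}
              for state, b in itertools.product(metric, (0, 1)):
                  s = ((state << 1) | b) & 0b111
                  expect = [1 - 2 * (bin(s & 0b111).count("1") & 1),
                            1 - 2 * (bin(s & 0b101).count("1") & 1)]
                  m = metric[state] + sum((rv - ev) ** 2
                                          for rv, ev in zip(r, expect))
                  ns = s & 0b011
                  if ns not in new_metric or m < new_metric[ns]:
                      new_metric[ns], new_paths[ns] = m, paths[state] + [b]
              metric, paths = new_metric, new_paths
          return paths[min(metric, key=metric.get)]

      msg = [1, 0, 1, 1, 0, 0]                        # includes flush bits
      noisy = [1 - 2 * b + 0.3 for b in encode(msg)]  # BPSK plus an offset
      print(viterbi_soft(noisy) == msg)               # True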

  12. Histogram-based adaptive gray level scaling for texture feature classification of colorectal polyps

    Science.gov (United States)

    Pomeroy, Marc; Lu, Hongbing; Pickhardt, Perry J.; Liang, Zhengrong

    2018-02-01

    Texture features have played an ever-increasing role in computer-aided detection (CADe) and diagnosis (CADx) methods since their inception. Texture features are often used for false positive reduction in CADe packages, especially for detecting colorectal polyps and distinguishing them from falsely tagged residual stool and healthy colon wall folds. While texture features have shown great success there, their performance for CADx has lagged behind, primarily because the features of different polyp types are more similar. In this paper, we present an adaptive gray level scaling and compare it to the conventional equal spacing of gray level bins. We use a dataset taken from computed tomography colonography patients, with 392 polyp regions of interest (ROIs) identified and their diagnoses confirmed through pathology. Using the histogram information from the entire ROI dataset, we generate the gray level bins such that each bin contains roughly the same number of voxels. Each image ROI is then scaled down to two different numbers of gray levels, using both an equal spacing of Hounsfield units for each bin and our adaptive method. We compute a set of texture features from the scaled images, including 30 gray level co-occurrence matrix (GLCM) features and 11 gray level run length matrix (GLRLM) features. Using a random forest classifier to distinguish between hyperplastic polyps and all others (adenomas and adenocarcinomas), we find that the adaptive gray level scaling can improve performance, measured by the area under the receiver operating characteristic curve, by up to 4.6%.
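
    Equal-frequency binning of the pooled histogram, the core of the adaptive scaling described above, is compact enough to sketch. The data, bin count, and distribution below are illustrative stand-ins, not the paper's CT colonography dataset.

      import numpy as np

      # Adaptive gray-level binning: choose bin edges from the pooled ROI
      # histogram so each bin holds roughly the same number of voxels,
      # instead of equally spaced Hounsfield-unit bins.
      def adaptive_bin_edges(all_voxels, n_levels):
          qs = np.linspace(0.0, 1.0, n_levels + 1)
          return np.quantile(all_voxels, qs)

      def quantize(roi, edges):
          # Map each voxel to one of n_levels gray levels via the inner edges.
          return np.clip(np.digitize(roi, edges[1:-1]), 0, len(edges) - 2)

      rng = np.random.default_rng(0)
      dataset = rng.normal(40, 25, size=100_000)   # pooled HU values (toy)
      edges = adaptive_bin_edges(dataset, n_levels=16)

      roi = rng.normal(40, 25, size=(32, 32))
      g = quantize(roi, edges)
      print(np.bincount(g.ravel(), minlength=16))  # roughly uniform counts

    GLCM and GLRLM texture features would then be computed from the quantized array g; the adaptive edges keep every gray level populated, which is the point of the method.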

  13. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    Directory of Open Access Journals (Sweden)

    Chia-Chang Hu

    2005-04-01

    Full Text Available A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed, based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, 𝒪(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis is a function of 𝒪((JNS)³). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size of the J-element antenna array, the amount of the L-sample support, and the rank of the M-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.

  14. Synthesizing Certified Code

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrating software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  15. Memory of AMR coded speech distorted by packet loss

    OpenAIRE

    Nykänen, Arne; Lindegren, David; Wruck, Louisa; Ljung, Robert; Odelius, Johan; Möller, Sebastian

    2014-01-01

    Previous studies have shown that free recall of spoken word lists is impaired if the speech is presented in background noise, even if the signal-to-noise ratio is kept at a level allowing full word identification. The objective of this study was to examine recall rates for word lists presented in noise and word lists coded by an AMR (Adaptive Multi Rate) telephone codec distorted by packet loss. Twenty subjects performed a word recall test. Word lists consisting of ten words were played to th...

  16. On Sustaining Dynamic Adaptation of Context-Aware Services

    Directory of Open Access Journals (Sweden)

    Boudjemaa Boudaa

    2015-03-01

    Full Text Available Modern users are increasingly mobile, accessing online services through cutting-edge mobile computing devices. In the last decade, the field of context-aware services has produced several notable works. However, most of the proposed approaches do not provide clear adaptation strategies for unforeseen contexts. Dealing with such contexts at runtime is another crucial need that these proposals have ignored. This paper proposes a generic dynamic adaptation process, as a phase in a model-driven development life-cycle for context-aware services, that uses the MAPE-K control loop to meet runtime adaptation needs. The process is validated by implementing an illustrative application on the FraSCAti platform. The main benefit of the proposed process is to sustain the self-reconfiguration of such services at the model and code levels by enabling successive dynamic adaptations depending on the changing context.
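
    The MAPE-K pattern referenced above (Monitor, Analyze, Plan, Execute over shared Knowledge) is a standard autonomic-computing loop. The sketch below shows its shape in miniature; the sensor, rule, and reconfiguration action are hypothetical and unrelated to the paper's FraSCAti application.

      import random

      # Minimal MAPE-K control loop with a shared knowledge base.
      knowledge = {"config": "rich-media", "history": []}

      def monitor():                                # M: sense the context
          return {"bandwidth_kbps": random.choice([50, 500, 5000])}

      def analyze(ctx):                             # A: is adaptation needed?
          return ctx["bandwidth_kbps"] < 100

      def plan(ctx):                                # P: choose a new config
          return "text-only" if ctx["bandwidth_kbps"] < 100 else "rich-media"

      def execute(new_config):                      # E: reconfigure service
          knowledge["config"] = new_config

      for _ in range(3):                            # three control iterations
          context = monitor()
          knowledge["history"].append(context)      # K: accumulate knowledge
          if analyze(context):
              execute(plan(context))
          print(context, "->", knowledge["config"])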

  17. Reversible wavelet filter banks with side informationless spatially adaptive low-pass filters

    Science.gov (United States)

    Abhayaratne, Charith

    2011-07-01

    Wavelet transforms that have an adaptive low-pass filter are useful in applications that require signal singularities, sharp transitions, and image edges to be left intact in the low-pass signal. In scalable image coding, spatial resolution scalability is achieved by reconstructing the low-pass subband, which corresponds to the desired resolution level, and discarding the higher-frequency wavelet subbands. In such applications, it is vital to have low-pass subbands that are not affected by the smoothing artifacts associated with low-pass filtering. We present the mathematical framework for achieving 1-D wavelet transforms that have a spatially adaptive low-pass filter (SALP) using the prediction-first lifting scheme. The adaptivity decisions are computed from the wavelet coefficients, and no bookkeeping is required for perfect reconstruction. Then, 2-D wavelet transforms that have a spatially adaptive low-pass filter are designed by extending the 1-D SALP framework. Because 2-D polyphase decompositions are used in this case, the 2-D adaptivity decisions are made nonseparable, as opposed to the separable 2-D realization using 1-D transforms. We present examples using the 2-D 5/3 wavelet transform and their lossless image coding and scalable decoding performance in terms of quality and resolution scalability. The proposed 2-D-SALP scheme results in better performance compared to existing adaptive update lifting schemes.
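
    The prediction-first lifting structure that SALP builds on can be shown with the plain (non-adaptive) integer 5/3 (LeGall) wavelet. The sketch below omits SALP's data-dependent low-pass decisions and shows only the reversible predict/update ladder; boundary handling here is simple clamping, a stand-in for proper symmetric extension.

      # Prediction-first lifting for the integer 5/3 wavelet (illustrative).
      def fwd53(x):
          s, d = x[0::2], x[1::2]
          d = [d[i] - (s[i] + s[min(i + 1, len(s) - 1)]) // 2
               for i in range(len(d))]                  # predict (high-pass)
          s = [s[i] + (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
               for i in range(len(s))]                  # update (low-pass)
          return s, d

      def inv53(s, d):
          s = [s[i] - (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
               for i in range(len(s))]                  # undo update
          d = [d[i] + (s[i] + s[min(i + 1, len(s) - 1)]) // 2
               for i in range(len(d))]                  # undo predict
          x = [0] * (len(s) + len(d))
          x[0::2], x[1::2] = s, d
          return x

      sig = [3, 7, 1, 8, 2, 9, 4, 6]
      lo, hi = fwd53(sig)
      assert inv53(lo, hi) == sig    # perfect (lossless) reconstruction

    Because the inverse exactly retraces each lifting step, any data-dependent choice made in the forward pass can be re-derived in the inverse pass from the same coefficients, which is why SALP needs no bookkeeping side information.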

  18. Nine-year-old children use norm-based coding to visually represent facial expression.

    Science.gov (United States)

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average face. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  19. Advantages of a single-cycle production assay to study cell culture-adaptive mutations of hepatitis C virus

    DEFF Research Database (Denmark)

    Russell, Rodney S; Meunier, Jean-Christophe; Takikawa, Shingo

    2008-01-01

    The JFH1 strain of hepatitis C virus (HCV) is unique among HCV isolates, in that the wild-type virus can traverse the entire replication cycle in cultured cells. However, without adaptive mutations, only low levels of infectious virus are produced. In the present study, the effects of five mutations that were selected during serial passage in Huh-7.5 cells were studied. Recombinant genomes containing all five mutations produced 3-4 logs more infectious virions than did wild type. Neither a coding mutation in NS5A nor a silent mutation in E2 was adaptive, whereas coding mutations in E2, p7...

  20. Low-level waste shallow burial assessment code

    International Nuclear Information System (INIS)

    Fields, D.E.; Little, C.A.; Emerson, C.J.

    1981-01-01

    PRESTO (Prediction of Radiation Exposures from Shallow Trench Operations) is a computer code developed under United States Environmental Protection Agency funding to evaluate possible health effects from radionuclide releases from shallow, radioactive-waste disposal trenches and from areas contaminated with operational spillage. The model is intended to predict radionuclide transport and the ensuing exposure and health impact to a stable, local population for a 1000-year period following closure of the burial grounds. Several classes of submodels are used in PRESTO to represent scheduled events, unit system responses, and risk evaluation processes. The code is modular to permit future expansion and refinement. Near-surface transport mechanisms considered in the PRESTO code are cap failure, cap erosion, farming or reclamation practices, human intrusion, chemical exchange within an active surface soil layer, contamination from trench overflow, and dilution by surface streams. Subsurface processes include infiltration and drainage into the trench, the ensuing solubilization of radionuclides, and chemical exchange between trench water and buried solids. Mechanisms leading to contaminated outflow include trench overflow and downward vertical percolation. If the latter outflow reaches an aquifer, radiological exposure from irrigation or domestic consumption is considered. Airborne exposure terms are evaluated using the Gaussian plume atmospheric transport formulation as implemented by Fields and Miller.

  1. A New Video Coding Algorithm Using 3D-Subband Coding and Lattice Vector Quantization

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.H. [Taejon Junior College, Taejon (Korea, Republic of); Lee, K.Y. [Sung Kyun Kwan University, Suwon (Korea, Republic of)

    1997-12-01

    In this paper, we propose an efficient motion-adaptive three-dimensional (3D) video coding algorithm using 3D subband coding (3D-SBC) and lattice vector quantization (LVQ) for low bit rates. Instead of splitting input video sequences into a fixed number of subbands along the temporal axis, we decompose them into temporal subbands of variable size according to the motion in the frames. Each of the seven spatio-temporally split subbands is partitioned by a quadtree technique and coded with lattice vector quantization (LVQ). The simulation results show a 0.1-4.3 dB gain over H.261 in peak signal-to-noise ratio (PSNR) at a low bit rate (64 kbps). (author). 13 refs., 13 figs., 4 tabs.

  2. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in a department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consist of 11 files, one for the organ codes and the others for the pathology codes. The organ code was obtained by typing the organ name or the code number itself, selecting among the upper- and lower-level codes that were simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file was chosen automatically. The proper pathology code was obtained in a fashion similar to the organ code selection. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR codes was achieved by the same program, and it could be incorporated into other data-processing programs. This program has the merits of simple operation, accurate and detailed coding, and easy adaptation to other programs. Therefore, this program can be used for automation of routine work in the department of radiology.

  3. Overview of the geochemical code MINTEQ: applications to performance assessment for low-level wastes

    International Nuclear Information System (INIS)

    Graham, M.J.; Peterson, S.R.

    1985-09-01

    The MINTEQ geochemical computer code, developed at Pacific Northwest Laboratory, integrates many of the capabilities of its two immediate predecessors, WATEQ3 and MINEQL. MINTEQ can be used to perform the calculations necessary to simulate (model) the contact of low-level waste solutions with heterogeneous sediments or the interaction of ground water with solidified low-level wastes. The code is capable of performing calculations of ion speciation/solubility, adsorption, oxidation-reduction, gas phase equilibria, and precipitation/dissolution of solid phases. Under the Special Waste Form Lysimeters-Arid program, the composition of effluents (leachates) from column and batch experiments, using laboratory-scale waste forms, will be used to develop a geochemical model of the interaction of ground water with commercial solidified low-level wastes. The wastes being evaluated include power reactor waste streams that have been solidified in cement, vinyl ester-styrene, and bitumen. The thermodynamic database for the code is being upgraded before the geochemical modeling is performed. Thermodynamic data for cobalt, antimony, cerium, and cesium solid phases and aqueous species are being added to the database. The need to add these data was identified from the characterization of the waste streams. The geochemical model developed from the laboratory data will then be applied to predict the release from a field-lysimeter facility that contains full-scale waste samples. The contaminant concentrations migrating from the wastes predicted using MINTEQ will be compared to the long-term lysimeter data. This comparison will constitute a partial field validation of the geochemical model. 28 refs

  4. Using Direct Policy Search to Identify Robust Strategies in Adapting to Uncertain Sea Level Rise and Storm Surge

    Science.gov (United States)

    Garner, G. G.; Keller, K.

    2017-12-01

    Sea-level rise poses considerable risks to coastal communities, ecosystems, and infrastructure. Decision makers are faced with deeply uncertain sea-level projections when designing a strategy for coastal adaptation. The traditional methods have provided tremendous insight into this decision problem, but are often silent on tradeoffs as well as the effects of tail-area events and of potential future learning. Here we reformulate a simple sea-level rise adaptation model to address these concerns. We show that Direct Policy Search yields improved solution quality, with respect to Pareto-dominance in the objectives, over the traditional approach under uncertain sea-level rise projections and storm surge. Additionally, the new formulation produces high quality solutions with less computational demands than the traditional approach. Our results illustrate the utility of multi-objective adaptive formulations for the example of coastal adaptation, the value of information provided by observations, and point to wider-ranging application in climate change adaptation decision problems.

  5. Latest improvements on TRACPWR six-equations thermohydraulic code

    International Nuclear Information System (INIS)

    Rivero, N.; Batuecas, T.; Martinez, R.; Munoz, J.; Lenhardt, G.; Serrano, P.

    1999-01-01

    The paper presents the latest improvements to TRACPWR, aimed at adapting the code to current trends in computer platforms, architectures, and training requirements, as well as extending the scope of the code itself and its applicability to technologies other than Westinghouse PWRs. First, the major features of TRACPWR as a best-estimate and real-time simulation code are summarized; then the areas where TRACPWR is being improved are presented. These areas comprise: (1) architecture: integrating the TRACPWR and RELAP5 codes, (2) code scope enhancement: modelling mid-loop operation, (3) code speed-up: applying parallelization techniques, (4) code platform downswing: porting to the Windows NT platform, (5) on-line performance: allowing simulation initialisation from a plant process computer, and (6) code scope extension: using the code for modelling VVER and PHWR technology. (author)

  6. Code bench-marking for long-term tracking and adaptive algorithms

    OpenAIRE

    Schmidt, Frank; Alexahin, Yuri; Amundson, James; Bartosik, Hannes; Franchetti, Giuliano; Holmes, Jeffrey; Huschauer, Alexander; Kapin, Valery; Oeftiger, Adrian; Stern, Eric; Titze, Malte

    2016-01-01

    At CERN we have ramped up a program to investigate space charge effects in the LHC pre-injectors with high-brightness beams and long storage times, in view of the LIU upgrade project for these accelerators. These studies require massive simulations over large numbers of turns. To this end we have been looking at all available codes and have started collaborations on code development with several laboratories: pyORBIT from SNS, SYNERGIA from Fermilab, MICROMAP from GSI and our in-house MAD-X cod...

  7. Detecting consistent patterns of directional adaptation using differential selection codon models.

    Science.gov (United States)

    Parto, Sahar; Lartillot, Nicolas

    2017-06-23

    Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.

  8. Combinatorial neural codes from a mathematical coding theory perspective.

    Science.gov (United States)

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.

  9. High-Level Synthesis of DSP Applications Using Adaptive Negative Cycle Detection

    Directory of Open Access Journals (Sweden)

    Nitin Chandrachoodan

    2002-09-01

    Full Text Available The problem of detecting negative weight cycles in a graph is examined in the context of the dynamic graph structures that arise in the process of high-level synthesis (HLS). The concept of adaptive negative cycle detection is introduced, in which a graph changes over time and negative cycle detection needs to be done periodically, but not necessarily after every individual change. We present an algorithm for this problem, based on a novel extension of the well-known Bellman-Ford algorithm that allows us to adapt existing cycle information to the modified graph, and show by experiments that our algorithm significantly outperforms previous incremental approaches for dynamic graphs. In terms of applications, the adaptive technique leads to a very fast implementation of Lawler's algorithm for the computation of the maximum cycle mean (MCM) of a graph, especially for a certain form of sparse graph. Such sparseness often occurs in practical circuits and systems, as demonstrated, for example, by the ISCAS 89/93 benchmarks. The application of the adaptive technique to design-space exploration (synthesis) is also demonstrated by developing automated search techniques for scheduling iterative data-flow graphs.
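
    The baseline the paper extends is the standard Bellman-Ford negative-cycle check. The sketch below shows that periodic check, with an optional warm start from previous distance estimates to hint at the adaptive idea; the warm start is only suggestive here, since a full adaptive scheme (the paper's contribution) must handle stale estimates with more care.

      # Bellman-Ford negative-cycle check on a constraint graph (illustrative).
      def has_negative_cycle(nodes, edges, dist=None):
          # edges: list of (u, v, w). Returns (negative?, distances).
          if dist is None:
              dist = {n: 0.0 for n in nodes}  # virtual-source trick: all zero
          for _ in range(len(nodes) - 1):
              changed = False
              for u, v, w in edges:
                  if dist[u] + w < dist[v]:
                      dist[v] = dist[u] + w
                      changed = True
              if not changed:
                  break                        # early exit: already stable
          negative = any(dist[u] + w < dist[v] for u, v, w in edges)
          return negative, dist

      nodes = ["a", "b", "c"]
      edges = [("a", "b", 1.0), ("b", "c", -2.0), ("c", "a", 0.5)]
      neg, dist = has_negative_cycle(nodes, edges)
      print(neg)                               # True: cycle weight is -0.5
      # After an edit removes the negativity, re-check reusing `dist`:
      edges[2] = ("c", "a", 2.0)
      print(has_negative_cycle(nodes, edges, dist)[0])   # False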

  10. Simulation codes and the impact of validation/uncertainty requirements

    International Nuclear Information System (INIS)

    Sills, H.E.

    1995-01-01

    Several of the OECD/CSNI members have adopted a proposed methodology for code validation and uncertainty assessment. Although the validation process adopted by members has a high degree of commonality, the uncertainty assessment processes selected are more variable, ranging from subjective to formal. This paper describes the validation and uncertainty assessment process, the sources of uncertainty, methods of reducing uncertainty, and methods of assessing uncertainty. Examples are presented from the Ontario Hydro application of the validation methodology and uncertainty assessment to the system thermal hydraulics discipline and the TUF (1) system thermal hydraulics code. (author)

  11. Economic levels of thermal resistance for house envelopes: Considerations for a national energy code

    International Nuclear Information System (INIS)

    Swinton, M.C.; Sander, D.M.

    1992-01-01

    A code for energy efficiency in new buildings is being developed by the Standing Committee on Energy Conservation in Buildings. The precursor to the new code used national average energy rates and construction costs to determine economically optimum levels of insulation, and it is believed that this resulted in the prescription of sub-optimal insulation levels in any region of Canada where energy or construction costs differ significantly from the average. A new approach for determining optimum levels of thermal insulation is proposed. The analytic techniques use month-by-month energy balances of heat loss and gain; use the gain load ratio (GLR) correlation for predicting the fraction of usable free heat; increase confidence in the savings predictions for above-grade envelopes; can take into account solar effects on windows; and are compatible with below-grade heat loss analysis techniques in use. A sensitivity analysis was performed to determine whether reasonable variations in house characteristics would cause significant differences in predicted savings. The life cycle costing technique developed will allow the selection of thermal resistances that are commonly available from industry. Environmental energy cost multipliers can be used with the proposed methodology, which could have a minor role in encouraging the next higher level of energy efficiency. 11 refs., 6 figs., 2 tabs

  12. Study on application of adaptive fuzzy control and neural network in the automatic leveling system

    Science.gov (United States)

    Xu, Xiping; Zhao, Zizhao; Lan, Weiyong; Sha, Lei; Qian, Cheng

    2015-04-01

    This paper discusses the application of adaptive fuzzy control and the neural network BP algorithm to a large-platform automatic leveling control system. The purpose is to develop a measurement system with fast platform leveling, so that a leveling measurement system installed on the platform can reach level quickly in precision measurement work and improve the efficiency of precision measurement. The paper focuses on the analysis of an automatic leveling system based on a fuzzy controller, combining the fuzzy controller with a BP neural network and using the BP algorithm to refine the empirical rules, thereby constructing an adaptive fuzzy control system. Meanwhile, the learning rate of the BP algorithm is adjusted at run time to accelerate convergence. The simulation results show that the proposed control method can effectively improve the leveling precision of the automatic leveling system and shorten the leveling time.

  13. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software, with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
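
    The linking contract described (inputs in, input file written, external code run, outputs read back) can be sketched as a plain wrapper. GoldSim's real interface is a C-style DLL driven by an instructions file, so the Python below is only an analogy; the executable name and file formats are hypothetical.

      import os
      import subprocess
      import tempfile

      def run_external_code(inputs, exe="external_model"):
          # 1. write the input file, 2. run the external application,
          # 3. read the outputs back from the file it creates.
          with tempfile.TemporaryDirectory() as work:
              infile = os.path.join(work, "model.in")
              outfile = os.path.join(work, "model.out")
              with open(infile, "w") as f:
                  f.write("\n".join(str(v) for v in inputs))
              subprocess.run([exe, infile, outfile], check=True)
              with open(outfile) as f:
                  return [float(line) for line in f]

      # outputs = run_external_code([1.0, 2.5, 0.3])   # exe is hypothetical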

  14. Multi-level adaptive simulation of transient two-phase flow in heterogeneous porous media

    KAUST Repository

    Chueh, C.C.

    2010-10-01

    An implicit pressure and explicit saturation (IMPES) finite element method (FEM) incorporating a multi-level shock-type adaptive refinement technique is presented and applied to investigate transient two-phase flow in porous media. Local adaptive mesh refinement is implemented seamlessly with state-of-the-art artificial diffusion stabilization allowing simulations that achieve both high resolution and high accuracy. Two benchmark problems, modelling a single crack and a random porous medium, are used to demonstrate the robustness of the method and illustrate the capabilities of the adaptive refinement technique in resolving the saturation field and the complex interaction (transport phenomena) between two fluids in heterogeneous media. © 2010 Elsevier Ltd.

  15. Integrated assessment of adaptation to Climate change in Flevoland at the farm and regional level

    NARCIS (Netherlands)

    Wolf, J.; Mandryk, M.; Kanellopoulos, A.; Oort, van P.A.J.; Schaap, B.F.; Reidsma, P.; Ittersum, van M.K.

    2011-01-01

    A key objective of the AgriAdapt project is to assess climate change impacts on agriculture, including adaptation at the regional and farm-type levels, in combination with market and technological changes. More specifically, the developed methodologies enable (a) the assessment of impacts, risks and

  16. [Adaptation of self-image level and defense mechanisms in elderly patients with complicated stoma].

    Science.gov (United States)

    Ortiz-Rivas, Miriam Karina; Moreno-Pérez, Norma Elvira; Vega-Macías, Héctor Daniel; Jiménez-González, María de Jesús; Navarro-Elías, María de Guadalupe

    2014-01-01

    Ostomy patients face a number of problems that impact negatively on their personal welfare. The aim of this research is to determine the nature and intensity of the relationship between the level of the self-concept adaptive mode and the consistent use of coping strategies in older adults with a stoma. A quantitative, correlational and cross-sectional study. The VIVEROS 03 and CAPS surveys were applied in 3 hospitals in the city of Durango, México. The study included 90 older adults with an intestinal elimination stoma with complications. Kendall's Tau-b coefficient was the non-parametric test used to measure this association. Most of the older adults analyzed (61.3 < % < 79.9) are not completely adapted to the condition of living with an intestinal stoma. There is also a moderate positive correlation (0.569) between the level of adaptation of the older adults with a stoma and the conscious use of coping strategies. The presence of an intestinal stoma represents a physical and psychological health problem that is reflected in the level of adaptation of the self-image. Elderly people with a stoma use only a small part of the available defense mechanisms as part of the coping process. This limits their ability to face the adversities related to their condition, potentially causing major health complications. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.

  17. Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

    Directory of Open Access Journals (Sweden)

    Saeed Mian Qaisar

    2009-01-01

    Full Text Available Recent sophistication in the areas of mobile systems and sensor networks demands more and more processing resources. In order to maintain system autonomy, energy saving has become one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving embedded systems design and battery technology, but very few studies exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive-rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. Indeed, the principle is to intelligently exploit the local characteristics of the signal, which are usually not considered, to filter only the relevant signal parts by employing filters of the appropriate order. This idea leads to a drastic gain in computational efficiency, and hence in processing power, when compared to classical techniques.
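
    Level-crossing sampling itself is simple: a sample is taken only when the signal crosses one of a set of uniformly spaced amplitude levels, so the local sampling rate follows the signal's activity. The sketch below illustrates the principle; the quantum and the test signal are illustrative choices, not the paper's parameters.

      import math

      def lcss(signal, times, quantum=0.25):
          # Emit a sample only when the signal crosses an amplitude level.
          samples = [(times[0], signal[0])]
          last_level = round(signal[0] / quantum)
          for t, x in zip(times[1:], signal[1:]):
              level = round(x / quantum)
              if level != last_level:          # crossed at least one level
                  samples.append((t, level * quantum))
                  last_level = level
          return samples

      N = 1000
      times = [i / N for i in range(N)]
      # Quiet segment followed by a burst: samples concentrate in the burst.
      sig = [0.1 * math.sin(2 * math.pi * t) if t < 0.5
             else math.sin(2 * math.pi * 20 * t) for t in times]
      out = lcss(sig, times)
      print(len(out), sum(1 for t, _ in out if t >= 0.5))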

  18. Development of code PRETOR for stellarator simulation

    International Nuclear Information System (INIS)

    Dies, J.; Fontanet, J.; Fontdecaba, J.M.; Castejon, F.; Alejandre, C.

    1998-01-01

    The Department de Fisica i Enginyeria Nuclear (DFEN) of the UPC has experience in the development of the transport code PRETOR. This code has been validated with shots from DIII-D, JET and TFTR, and it has also been used in the simulation of operational scenarios of ITER fast burn termination. Recently, the association EURATOM-CIEMAT has started the operation of the TJ-II stellarator. Due to the need to validate the results given by other transport codes applied to stellarators, and because all of them make some approximations, such as averaging magnitudes over each magnetic surface, it was thought suitable to adapt the PRETOR code to devices without axial symmetry, like stellarators; this is well suited to the specific needs of the study of TJ-II. Several modifications are required in PRETOR; the main ones concern the models of magnetic equilibrium, geometry, and transport of energy and particles. In order to solve the complex magnetic equilibrium geometry, the powerful numerical code VMEC has been used. This code gives the magnetic surface shape as a Fourier series in terms of the harmonics (m,n). Most of the geometric magnitudes are also obtained from the VMEC results file. The energy and particle transport models will be replaced by other phenomenological models that are better adapted to stellarator simulation. Using the proposed models, the aim is to reproduce experimental data available from present stellarators, paying special attention to the TJ-II of the association EURATOM-CIEMAT. (Author)

  19. Coding training for medical students: How good is diagnoses coding with ICD-10 by novices?

    Directory of Open Access Journals (Sweden)

    Stausberg, Jürgen

    2005-04-01

    Full Text Available Teaching knowledge and competence in documentation and coding is an essential part of medical education. Therefore, coding training was placed within the course on epidemiology, medical biometry, and medical informatics. From this, we can draw conclusions about the quality of coding by novices. One hundred and eighteen students coded diagnoses from 15 nephrological cases as homework. In addition to interrater reliability, validity was calculated by comparison with a reference coding. At the level of terminal codes, 59.3% of the students' results were correct. Completeness was calculated as 58.0%. At the chapter level, the results increased to 91.5% and 87.7%, respectively. For the calculation of reliability, a new, simple measure was developed that leads to values of 0.46 at the level of terminal codes and 0.87 at the chapter level for interrater reliability. The figures for concordance with the reference coding are quite similar. In contrast, routine data show considerably lower results, with 0.34 and 0.63 respectively. Interrater reliability and validity of coding by novices is as good as coding by experts. The missing advantage of experts could be explained by the workload of documentation and a negative attitude toward coding on the one hand. On the other hand, coding in a DRG system is handicapped by a large number of detailed coding rules, which do not produce uniform results but rather lead to wrong and random codes. In any case, students left the course well prepared for coding.

  20. Development of SAGE, A computer code for safety assessment analyses for Korean Low-Level Radioactive Waste Disposal

    International Nuclear Information System (INIS)

    Zhou, W.; Kozak, Matthew W.; Park, Joowan; Kim, Changlak; Kang, Chulhyung

    2002-01-01

    This paper describes a computer code, called SAGE (Safety Assessment Groundwater Evaluation), to be used for evaluation of the concept for low-level waste disposal in the Republic of Korea (ROK). The conceptual model in the code is focused on releases from a gradually degrading engineered barrier system to an underlying unsaturated zone, thence to a saturated groundwater zone. Doses can be calculated for several biosphere systems, including drinking contaminated groundwater and subsequent contamination of foods, rivers, lakes, or the ocean by that groundwater. The flexibility of the code will permit both generic analyses in support of design and site development activities, and straightforward modification to permit site-specific and design-specific safety assessments of a real facility as progress is made toward implementation of a disposal site. In addition, the code has been written to easily interface with more detailed codes for specific parts of the safety assessment. In this way, the code's capabilities can be significantly expanded as needed. The code has the capability to treat input parameters either deterministically or probabilistically. Parameter input is achieved through a user-friendly Graphical User Interface.

  1. A seismic data compression system using subband coding

    Science.gov (United States)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  2. Automatic code generation in practice

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-specific language to specify those requirements and to allow for generating a safety-enforcing layer of code, which is deployed to the robot. The paper at hand reports experiences in practically applying code generation to mobile robots. For two cases, we discuss how we addressed challenges, e.g., regarding weaving code generation into proprietary development environments and testing of manually written code. We find that a DSL based on the same conceptual model can be used across different kinds of hardware modules, but a significant adaptation effort is required in practical scenarios involving different kinds

  3. Cracking the code: the accuracy of coding shoulder procedures and the repercussions.

    Science.gov (United States)

    Clement, N D; Murray, I R; Nie, Y X; McBirnie, J M

    2013-05-01

    Coding of patients' diagnoses and surgical procedures is subject to error levels of up to 40%, with consequences for the distribution of resources and financial recompense. Our aim was to explore and address the reasons behind coding errors of shoulder diagnoses and surgical procedures, and to evaluate a potential solution. A retrospective review of 100 patients who had undergone surgery was carried out. Coding errors were identified and the reasons explored. A coding proforma was designed to address these errors and was prospectively evaluated for 100 patients. The financial implications were also considered. Retrospective analysis revealed that the correct primary diagnosis was assigned in 54 patients (54%), and only 7 (7%) patients had an entirely correct procedure code assigned. Coders identified indistinct clinical notes and poor clarity of procedure codes as reasons for errors. The proforma was significantly more likely to assign the correct diagnosis (odds ratio 18.2) and the correct procedure code (odds ratio 310.0) when passed directly to the coding department. High error levels for coding are due to misinterpretation of notes and ambiguity of procedure codes. This can be addressed by allowing surgeons to assign the diagnosis and procedure using a simplified list that is passed directly to coding.

  4. Content-Adaptive Packetization and Streaming of Wavelet Video over IP Networks

    Directory of Open Access Journals (Sweden)

    Chien-Peng Ho

    2007-03-01

    Full Text Available This paper presents a framework for a content-adaptive packetization scheme for streaming 3D wavelet-based video content over lossy IP networks. The tradeoff between rate and distortion is controlled by jointly adapting the scalable source coding rate and the level of forward error correction (FEC) protection. A content-dependent packetization mechanism with data interleaving and Reed-Solomon protection for wavelet-based video codecs is proposed to provide unequal error protection. This paper also tries to answer an important question for scalable video streaming systems: given extra bandwidth, should one increase the level of channel protection for the most important packets, or transmit more scalable source data? Experimental results show that the proposed framework achieves a good balance between the quality of the received video and the level of error protection under bandwidth-varying lossy IP networks.
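
    The source-versus-protection question can be made concrete with a toy expected-quality model: a layer of k source packets protected by p Reed-Solomon parity packets is decodable when at least k of the k+p packets arrive. Everything below (budget, loss rate, quality weights) is an illustrative assumption, not the paper's model.

      from math import comb

      def p_decodable(k, p, loss):
          # Probability that at least k of k+p packets arrive (i.i.d. losses).
          n = k + p
          return sum(comb(n, i) * (1 - loss) ** i * loss ** (n - i)
                     for i in range(k, n + 1))

      def expected_quality(parity, budget=12, base_k=4, loss=0.1,
                           base_gain=10.0, enh_gain=1.0):
          enh = budget - base_k - parity       # leftover: enhancement packets
          p_base = p_decodable(base_k, parity, loss)
          # Enhancement data help only when the base layer is decodable.
          return p_base * (base_gain + enh_gain * enh * (1 - loss))

      for p in range(0, 9):
          print(p, round(expected_quality(p), 3))
      print("best parity:", max(range(0, 9), key=expected_quality))

    Sweeping the loss rate shows the expected behavior: at low loss the budget is better spent on source data, while at high loss more parity wins, which is the tradeoff the adaptive scheme navigates per content.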

  5. Bit-wise arithmetic coding for data compression

    Science.gov (United States)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
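
    The interval-narrowing principle behind arithmetic coding can be demonstrated on bits modeled as independent with P(1) = p1, mirroring the article's treatment of codeword bits as independent. The floating-point toy below only works for short sequences; production coders use fixed-precision integer arithmetic, and this is not the article's implementation.

      def ac_encode(bits, p1):
          lo, hi = 0.0, 1.0
          for b in bits:
              mid = lo + (hi - lo) * (1.0 - p1)  # [lo,mid) codes 0, [mid,hi) codes 1
              lo, hi = (mid, hi) if b else (lo, mid)
          return (lo + hi) / 2                   # any number inside [lo, hi)

      def ac_decode(x, n, p1):
          lo, hi, bits = 0.0, 1.0, []
          for _ in range(n):
              mid = lo + (hi - lo) * (1.0 - p1)
              if x < mid:
                  bits.append(0); hi = mid
              else:
                  bits.append(1); lo = mid
          return bits

      data = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
      x = ac_encode(data, p1=0.2)
      assert ac_decode(x, len(data), p1=0.2) == data
      # A skewed source leaves a wide final interval, i.e., few output bits.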

  6. Research on the improvement of nuclear safety -Development of computing code system for level 3 PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jong Tae; Kim, Dong Ha; Park, Won Seok; Hwang, Mi Jeong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated. These results will give physical insight for the development of a new dispersion model. A wind tunnel experiment with a bell-shaped hill model was performed in order to develop a new dispersion model, and an improved dispersion model was developed based on the concentration distribution data obtained from the wind tunnel experiment. This model will be added as an option to the atmospheric dispersion code. A stand-alone atmospheric dispersion code was developed using the MS Visual Basic programming language, running in the Windows environment of a PC. A user can easily select a necessary data file and type input data by clicking menus, and can select calculation options such as building wake, plume rise, etc., if necessary. A user can also easily understand the meaning of the concentration distribution on the map around the plant site as well as the output files. Also, methodologies for the estimation of radiation exposure and for the calculation of risks were established. These methodologies will be used for the development of modules for radiation exposure and risks, respectively. These modules will be developed independently and finally combined with the atmospheric dispersion code in order to develop a level 3 PSA code. 30 tabs., 56 figs., refs. (Author).

  7. Research on the improvement of nuclear safety -Development of computing code system for level 3 PSA

    International Nuclear Information System (INIS)

    Jeong, Jong Tae; Kim, Dong Ha; Park, Won Seok; Hwang, Mi Jeong

    1995-07-01

    Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated. These results will give physical insight for the development of a new dispersion model. A wind tunnel experiment with a bell-shaped hill model was performed in order to develop a new dispersion model, and an improved dispersion model was developed based on the concentration distribution data obtained from the wind tunnel experiment. This model will be added as an option to the atmospheric dispersion code. A stand-alone atmospheric dispersion code was developed using the MS Visual Basic programming language, running in the Windows environment of a PC. A user can easily select a necessary data file and type input data by clicking menus, and can select calculation options such as building wake, plume rise, etc., if necessary. A user can also easily understand the meaning of the concentration distribution on the map around the plant site as well as the output files. Also, methodologies for the estimation of radiation exposure and for the calculation of risks were established. These methodologies will be used for the development of modules for radiation exposure and risks, respectively. These modules will be developed independently and finally combined with the atmospheric dispersion code in order to develop a level 3 PSA code. 30 tabs., 56 figs., refs. (Author)

  8. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover unique decipherability at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition; this, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. The latter problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
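
    As background for the decipherability conditions discussed above, the classical Sardinas-Patterson test decides unique decipherability for a finite code. The sketch below implements that standard test only; it does not compute the paper's coding partitions, which generalize UD.

```python
def residuals(A, B):
    """Dangling suffixes: what remains of a B-word after stripping a
    proper A-word prefix."""
    return {b[len(a):] for a in A for b in B if b.startswith(a) and len(b) > len(a)}

def is_uniquely_decipherable(code):
    """Sardinas-Patterson test for a finite set of nonempty words."""
    code = set(code)
    seen = set()
    current = residuals(code, code)              # S_1
    while current:
        if any(s in code for s in current):      # a dangling suffix is a codeword
            return False
        if frozenset(current) in seen:           # suffix sets cycle -> never ambiguous
            return True
        seen.add(frozenset(current))
        current = residuals(code, current) | residuals(current, code)
    return True

print(is_uniquely_decipherable({"0", "01", "11"}))  # True
print(is_uniquely_decipherable({"0", "01", "10"}))  # False: "010" has two parses
```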

  9. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    Science.gov (United States)

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward-adaptive autoregressive (AR) model has been shown to achieve a good compromise between performance, complexity, and robustness; this approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirements of medical images, the error between the original image set and the synthesized one is further coded using a vector quantizer. The method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 bits/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet-based embedded zerotree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as over the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

  10. A probabilistic assessment code system for derivation of clearance levels of radioactive materials. PASCLR user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Tomoyuki [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst; Takeda, Seiji; Kimura, Hideo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-01-01

    Some types of radioactive material generated by the development and utilization of nuclear energy need not be subject to regulatory control, because they can give rise only to trivial radiation hazards. The process of removing such materials from regulatory control is called 'clearance', and the corresponding radionuclide concentrations are called 'clearance levels'. In the Nuclear Safety Commission's discussion, a deterministic approach was applied to derive the clearance levels, i.e., the concentrations of radionuclides in a cleared material equivalent to an individual dose criterion. Realistic parameter values were selected where possible; where realistic values could not be defined, reasonably conservative values were selected. Additionally, stochastic approaches were used to validate the results obtained from the deterministic calculations. We have developed the computer code system PASCLR (Probabilistic Assessment code System for derivation of Clearance Levels of Radioactive materials), which uses the Monte Carlo technique to carry out these stochastic calculations. This report describes the structure of the PASCLR code and provides user information for its execution. (author)
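
    A minimal sketch of the stochastic approach described above, with invented pathway distributions and dose criterion: uncertain parameters are sampled Monte Carlo-style, and the resulting distribution of derived clearance levels can be compared against the deterministic point value. None of the numbers below come from PASCLR.

```python
import math
import random

random.seed(1)
DOSE_CRITERION = 0.01  # mSv/yr individual dose criterion (illustrative)

def dose_per_unit_concentration():
    """One stochastic realization of the annual dose per unit concentration
    (mSv/yr per Bq/g), summed over exposure pathways; the lognormal
    parameters are purely illustrative."""
    external = random.lognormvariate(math.log(1e-4), 0.5)    # external exposure
    inhalation = random.lognormvariate(math.log(3e-5), 0.8)  # dust inhalation
    ingestion = random.lognormvariate(math.log(5e-5), 1.0)   # food-chain ingestion
    return external + inhalation + ingestion

# Each sample yields one derived clearance level (Bq/g).
samples = sorted(DOSE_CRITERION / dose_per_unit_concentration()
                 for _ in range(100_000))
for q in (0.05, 0.50, 0.95):
    print(f"{int(q * 100):2d}th percentile: {samples[int(q * len(samples))]:.1f} Bq/g")
```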

  11. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which improves burst-erasure protection by applying the convolution property to the tTN code and reduces computational complexity by eliminating the multi-level structure. Simulation results show that the cTN code provides better packet-loss protection with lower computational complexity than the tTN code.

  12. Resilience of Infrastructure Systems to Sea-Level Rise in Coastal Areas: Impacts, Adaptation Measures, and Implementation Challenges

    Directory of Open Access Journals (Sweden)

    Beatriz Azevedo de Almeida

    2016-11-01

    As a result of sea level rise, expansive low-lying areas in many densely populated coastal regions face elevated risk of storm surges and flooding due to torrential precipitation. These phenomena could have catastrophic impacts on coastal communities and result in the destruction of critical infrastructure, disruption of economic activities, and saltwater contamination of the water supply. The objective of the study presented in this paper was to identify the various impacts of sea level rise on civil infrastructure in coastal areas and to examine the adaptation measures suggested in the existing literature. To this end, a systematic review of the literature was conducted to assemble a repository of studies addressing sea level rise impacts and adaptation measures in the context of infrastructure systems. The study focused on three infrastructure sectors: water and wastewater, energy, and road transportation. The collected information was then analyzed to identify categories of sea level rise impacts and corresponding adaptation measures. The findings of the study are threefold: (1) the major categories of sea level rise impacts on different infrastructure systems; (2) measures for protection, accommodation, and retreat in response to sea level rise impacts; and (3) challenges related to implementing adaptation measures.

  13. The Redox Code.

    Science.gov (United States)

    Jones, Dean P; Sies, Helmut

    2015-09-20

    The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O₂ and H₂O₂ contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine.

  14. Simplified modeling and code usage in the PASC-3 code system by the introduction of a programming environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.L.; Slobben, J.

    1991-06-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219 group cross section library derived from JEF-1 of which some benchmark results are presented. By the addition of the UNIPASC work environment the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  15. Adaptive Network Coded Clouds: High Speed Downloads and Cost-Effective Version Control

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Heide, Janus; Roetter, Daniel Enrique Lucani

    2018-01-01

    Although cloud systems provide a reliable and flexible storage solution, the use of a single cloud service constitutes a single point of failure, which can compromise data availability, download speed, and security. To address these challenges, we advocate for the use of multiple cloud storage providers simultaneously using network coding as the key enabling technology. Our goal is to study two challenges of network coded storage systems. First, the efficient update of the number of coded fragments per cloud in a system aggregating multiple clouds in order to boost the download speed of files. We developed a novel scheme using recoding with limited packets to trade off storage space, reliability, and data retrieval speed. Implementation and measurements with commercial cloud providers show that up to 9x less network use is needed compared to other network coding schemes, while maintaining similar...
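
    The decodability condition behind such multi-cloud network coded storage can be sketched as follows, assuming XOR-based (GF(2)) random linear combinations of k fragments; the fragment counts are invented, and the paper's recoding scheme with limited packets is not reproduced. A file is recoverable from any set of clouds whose coefficient vectors reach full rank.

```python
import random

def gf2_rank(vectors):
    """Gaussian elimination over GF(2); vectors are ints, bit i = fragment i."""
    basis = {}  # leading-bit position -> reduced vector
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

random.seed(2)
K = 8           # original fragments per file (illustrative)
CLOUDS = 4      # storage providers
PER_CLOUD = 3   # coded fragments stored on each cloud

# Each coded fragment is a random XOR (GF(2) combination) of the K original
# fragments; only its coefficient vector matters for decodability.
clouds = [[random.randrange(1, 1 << K) for _ in range(PER_CLOUD)]
          for _ in range(CLOUDS)]

# The file is recoverable from any cloud subset whose fragments reach rank K.
for use in range(1, CLOUDS + 1):
    coeffs = [c for cloud in clouds[:use] for c in cloud]
    r = gf2_rank(coeffs)
    print(f"{use} cloud(s): rank {r}/{K} -> {'decodable' if r == K else 'not decodable'}")
```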

  16. Design of Rate-Compatible Parallel Concatenated Punctured Polar Codes for IR-HARQ Transmission Schemes

    Directory of Open Access Journals (Sweden)

    Jian Jiao

    2017-11-01

    In this paper, we propose rate-compatible (RC) parallel concatenated punctured polar (PCPP) codes for incremental redundancy hybrid automatic repeat request (IR-HARQ) transmission schemes, which can transmit multiple data blocks over a time-varying channel. The PCPP coding scheme provides RC polar coding blocks that adapt to channel variations. First, we investigate an improved random puncturing (IRP) pattern for the PCPP coding scheme, motivated by the code-rate and block-length limitations of conventional polar codes. The proposed IRP algorithm selects puncturing bits only from the frozen-bit set and keeps the information bits unchanged during puncturing, which improves decoding performance by 0.2-1 dB over the existing random puncturing (RP) algorithm. Then, we develop an RC IR-HARQ transmission scheme based on PCPP codes. By analyzing the overhead of the previously decoded PCPP coding block in our IR-HARQ scheme, the optimal initial code rate can be determined for each new PCPP coding block over time-varying channels. Simulation results show that the average number of transmissions is about 1.8 per PCPP coding block in our RC IR-HARQ scheme with a 2-level PCPP encoding construction, roughly half the average number of transmissions of existing RC polar coding schemes.
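
    A minimal sketch of the stated IRP constraint, i.e., puncturing only frozen positions so the information set is untouched. The block length, information set, and plain random selection within the frozen set are illustrative; the paper's improved pattern is more structured than uniform random choice.

```python
import random

def irp_puncture_pattern(n, info_set, num_punctured, rng):
    """Choose puncturing positions only from the frozen set, leaving the
    information set untouched. Plain random choice stands in for the
    paper's improved (structured) selection rule."""
    frozen = [i for i in range(n) if i not in info_set]
    if num_punctured > len(frozen):
        raise ValueError("cannot puncture more bits than are frozen")
    return sorted(rng.sample(frozen, num_punctured))

rng = random.Random(0)
N, K = 16, 8
info_set = {7, 9, 10, 11, 12, 13, 14, 15}   # illustrative reliability-based choice
punct = irp_puncture_pattern(N, info_set, num_punctured=4, rng=rng)
print("punctured positions:", punct)
print("effective code rate:", K / (N - len(punct)))   # 8/12
```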

  17. Dengue virus genomic variation associated with mosquito adaptation defines the pattern of viral non-coding RNAs and fitness in human cells.

    Directory of Open Access Journals (Sweden)

    Claudia V Filomatori

    2017-03-01

    The Flavivirus genus includes a large number of medically relevant pathogens that cycle between humans and arthropods. This host alternation imposes a selective pressure on the viral population. Here, we found that dengue virus, the most important viral human pathogen transmitted by insects, evolved a mechanism to differentially regulate the production of viral non-coding RNAs in mosquitoes and humans, with a significant impact on viral fitness in each host. Flavivirus infections accumulate non-coding RNAs derived from the viral 3'UTRs (known as sfRNAs), which are relevant in viral pathogenesis and immune evasion. We found that dengue virus host adaptation leads to the accumulation of different species of sfRNAs in vertebrate and invertebrate cells. This process does not depend on differences in the host machinery but on the selection of specific mutations in the viral 3'UTR. By dissecting the viral population and studying the phenotypes of cloned variants, the molecular determinants of the switch in the sfRNA pattern during host change were mapped to a single RNA structure. Point mutations selected in mosquito cells were sufficient to change the pattern of sfRNAs, induce higher type I interferon responses, and reduce viral fitness in human cells, explaining the rapid clearance of certain viral variants after host change. In addition, using epidemic and pre-epidemic Zika viruses, similar patterns of sfRNAs were observed in infected mosquito and human cells, but they differed from those observed during dengue virus infections, indicating that distinct selective pressures act on the 3'UTRs of these closely related viruses. In summary, we present a novel mechanism by which dengue virus evolved an RNA structure that is under strong selective pressure in the two hosts as a regulator of non-coding RNA accumulation and viral fitness. This work provides new ideas about the impact of host adaptation on the variability and evolution of

  18. Possible impacts of sea level rise on disease transmission and potential adaptation strategies, a review.

    Science.gov (United States)

    Dvorak, Ana C; Solo-Gabriele, Helena M; Galletti, Andrea; Benzecry, Bernardo; Malone, Hannah; Boguszewski, Vicki; Bird, Jason

    2018-04-18

    Sea levels are projected to rise in response to climate change, causing the intrusion of seawater into land. In flat coastal regions, this would increase the extent of shallow water-covered areas with limited circulation. This scenario raises concern about the consequences for human health, specifically the possible impacts on disease transmission. In this review paper we identified three categories of water-associated diseases whose transmission can be affected by sea level rise: mosquito-borne diseases, naturalized organisms (Vibrio spp. and toxic algae), and fecal-oral diseases. For each disease category, we propose comprehensive adaptation strategies that would help minimize possible health risks. Finally, the City of Key West, Florida is analyzed as a case study, owing to its inherent vulnerability to sea level rise. Current and projected adaptation techniques are discussed, as well as the integration of additional recommendations focused on disease transmission control. Given that sea level rise will likely continue into the future, the promotion and implementation of positive adaptation strategies is necessary to ensure community resilience. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-01-01

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as the core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, ''Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory,'' describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaptation, or core observables recorded at core conditions that differ from those at which the adaptation was completed. This paper also demonstrates the successful use of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaptation for millions of input core parameters. Finally, it illustrates a useful application of adaptive simulation: reducing the inconsistencies between two different core simulator code systems, where the multitude of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. The robustness of such an application is also demonstrated.

  20. The code of ethics for nurses.

    Science.gov (United States)

    Zahedi, F; Sanjari, M; Aala, M; Peymani, M; Aramesh, K; Parsapour, A; Maddah, Ss Bagher; Cheraghi, Ma; Mirzabeigi, Gh; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Nurses are increasingly confronted with complex concerns in their practice. Codes of ethics are fundamental guidance for nursing, as for many other professions. Although there are authentic international codes of ethics for nurses, a national code provides additional assistance to clinical nurses in their complex roles in patient care, education, research and the management of parts of the country's health care system. A national code can provide nurses with culturally adapted guidance and help them make ethical decisions that are closer to the Iranian-Islamic background. Given the general acknowledgement of this need, the National Code of Ethics for Nurses was compiled as a joint project (2009-2011). The Code was approved by the Health Policy Council of the Ministry of Health and Medical Education and communicated to all universities, healthcare centers, hospitals and research centers early in 2011. The focus of this article is on the course of action through which the Code was compiled, amended and approved; the main concepts of the Code are also presented. No doubt, the development of such codes should be considered an ongoing process, with an overall responsibility to keep them current, updated with scientific progress and emerging challenges, and pertinent to nursing practice.

  1. Uniform Circular Antenna Array Applications in Coded DS-CDMA Mobile Communication Systems

    National Research Council Canada - National Science Library

    Seow, Tian

    2003-01-01

    ...) has greatly increased. This thesis examines the use of an equally spaced circular adaptive antenna array at the mobile station for a typical coded direct sequence code division multiple access (DS-CDMA...

  2. Differential Service in a Bidirectional Radio-over-Fiber System over a Spectral-Amplitude-Coding OCDMA Network

    Directory of Open Access Journals (Sweden)

    Chao-Chin Yang

    2016-10-01

    A new radio-over-fiber (RoF) network scheme based on spectral-amplitude-coding (SAC) optical code division multiple access (OCDMA) is proposed herein. Differential service is provided by a power control scheme that classifies users into several classes and assigns each class a specific power level. Additionally, the wavelength reuse technique is adapted to support bidirectional transmission and to reduce base station (BS) cost. Both simulation and numerical results show that significantly differentiated quality-of-service (QoS), in terms of bit-error rate (BER), is achieved in both downlink and uplink transmissions.

  3. Code, standard and specifications

    International Nuclear Information System (INIS)

    Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail

    2008-01-01

    Radiography, like other techniques, requires standards. These standards are widely used, and the methods for applying them are well established; radiographic testing is practical only when it is carried out according to the documented regulations. Such regulations and guidelines are documented in codes, standards and specifications. In Malaysia, level-one and basic radiographers may carry out radiography work based on instructions given by a level-two or level-three radiographer. Those instructions are produced based on the guidelines mentioned in the documents, and the level-two radiographer must follow the specifications given in the standard when writing them. This makes clear that radiography is a type of work in which everything must follow the rules. As for codes, radiography in Malaysia follows the code of the American Society of Mechanical Engineers (ASME); the only local code at this time is the rule published by the Atomic Energy Licensing Board (AELB), known as the Practical Code for Radiation Protection in Industrial Radiography. With this code in place, all radiography work must follow the applicable rules and standards.

  4. THEORETICAL AND PRACTICAL APPROACHES REGARDING THE ADOPTION OF CORPORATE GOVERNANCE CODES

    Directory of Open Access Journals (Sweden)

    Sorin Nicolae Borlea

    2013-09-01

    In the European Union, the concept of corporate governance began to emerge more clearly after 1997, when most countries voluntarily adopted corporate governance codes. The impulse for adopting these codes came from the financial scandals related to the failure of British companies listed on the stock exchange. Numerous scandals involving big companies such as Enron, WorldCom, Parmalat, Xerox, Merrill Lynch, Andersen and so on led to a loss of investor confidence. These crises alarmed governments, supervisory authorities, companies, investors and even the general public, and by exposing the fragility of corporate governance systems they highlighted the need to rethink its structures. The process of adapting corporate governance provisions to ensure transparency, responsibility and fair treatment of shareholders resulted in the development of the Corporate Governance Principles by the Organization for Economic Cooperation and Development (OECD). To assess these principles, the common elements of the codes, among the most effective practice models of governance, began to be identified. Once the benefits of corporate governance practices had been understood and assimilated by developed countries, developing countries (including Romania) began to adopt "the best practices" in corporate governance, especially because this need is acutely felt amid the changes required by the transition to a market economy. Our article describes the origins of corporate governance and the concept and evolution of corporate governance codes at the international, European and Romanian levels.

  5. Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission

    Directory of Open Access Journals (Sweden)

    Tarek Chehade

    2015-01-01

    In multiple-input multiple-output (MIMO) transmission systems, the channel state information (CSI) at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and reliability of the transmission system. This paper investigates how to properly combine precoded closed-loop MIMO systems with nonbinary low-density parity-check (NB-LDPC) codes. The q elements of the Galois field GF(q) are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to fit perfectly with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various LDPC code designs. We show that NB-LDPC codes are particularly well suited to joint use with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin) criterion. These results are theoretically supported by extrinsic information transfer (EXIT) analysis and are confirmed by numerical simulations.

  6. Pictorial AR Tag with Hidden Multi-Level Bar-Code and Its Potential Applications

    Directory of Open Access Journals (Sweden)

    Huy Le

    2017-09-01

    For decades, researchers have been trying to create intuitive virtual environments by blending reality and virtual reality, thus enabling general users to interact with the digital domain as easily as with the real world. The result is "augmented reality" (AR). AR seamlessly superimposes virtual objects onto a real environment in three dimensions (3D) and in real time. One of the most important components that helps close the gap between virtuality and reality is the marker used in the AR system. While pictorial markers and bar-code markers are the two most commonly used marker types on the market, both have disadvantages in visual and processing performance. In this paper, we present a novel method that combines the bar-code with the original features of a colour picture (e.g., photos, trading cards, an advertisement's figure). Our method decorates the original pictorial image with a single stereogram image that optically conceals a multi-level (3D) bar-code; it therefore has a larger data-storage capability than the general 1D bar-code. This new type of marker has the potential to address the issues that the current marker types are facing: it not only keeps the original information of the picture but also contains encoded numeric information. In our limited evaluation, this pictorial bar-code shows relatively robust performance under various conditions and scalings; thus, it provides a promising AR approach for use in many applications such as trading card games, education, and advertisements.

  7. Adaptive DSP Algorithms for UMTS: Blind Adaptive MMSE and PIC Multiuser Detection

    NARCIS (Netherlands)

    Potman, J.

    2003-01-01

    A study of the application of blind adaptive Minimum Mean Square Error (MMSE) and Parallel Interference Cancellation (PIC) multiuser detection techniques to Wideband Code Division Multiple Access (WCDMA), the physical layer of Universal Mobile Telecommunication System (UMTS), has been performed as

  8. Easy web interfaces to IDL code for NSTX Data Analysis

    International Nuclear Information System (INIS)

    Davis, W.M.

    2012-01-01

    Highlights: (1) Web interfaces to IDL code can be developed quickly. (2) Dozens of Web Tools are used effectively on NSTX for data analysis. (3) Web interfaces are easier to use than X-window applications. Abstract: Reusing code is a well-known software engineering practice that substantially increases the efficiency of code production and reduces errors and debugging time. A variety of "Web Tools" for the analysis and display of raw and analyzed physics data are in use on NSTX [1], and new ones can be produced quickly from existing IDL [2] code. A Web Tool with only a few inputs, which calls an IDL routine written in the proper style, can be created in less than an hour; a more typical Web Tool with dozens of inputs, requiring some adaptation of existing IDL code, can be working in a day or so. Efficiency is also increased for users of Web Tools by the familiar interface of the web browser, and by not needing X-windows, or accounts and passwords, when used within our firewall. Web Tools were adapted for use by PPPL physicists accessing EAST data stored in MDSplus with only a few man-weeks of effort; adapting to additional sites should now be even easier. An overview of Web Tools in use on NSTX, and a list of the most useful features, is presented.

  9. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2012-06-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  10. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2014-06-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  11. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    Austad, S. L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Guillen, L. E. [Idaho National Lab. (INL), Idaho Falls, ID (United States); McKnight, C. W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ferguson, D. S. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-04-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  12. Data-adaptive harmonic analysis and prediction of sea level change in North Atlantic region

    Science.gov (United States)

    Kondrashov, D. A.; Chekroun, M.

    2017-12-01

    This study aims to characterize North Atlantic sea level variability across temporal and spatial scales. We apply the recently developed data-adaptive Harmonic Decomposition (DAH) and Multilayer Stuart-Landau Model (MSLM) stochastic modeling techniques [Chekroun and Kondrashov, 2017] to the monthly 1993-2017 dataset of combined TOPEX/Poseidon, Jason-1 and Jason-2/OSTM altimetry fields over the North Atlantic region. The key numerical feature of the DAH is the eigendecomposition of a matrix constructed from time-lagged spatial cross-correlations. In particular, the eigenmodes form an orthogonal set of oscillating data-adaptive harmonic modes (DAHMs) that come in pairs and in exact phase quadrature for a given temporal frequency. Furthermore, the pairs of data-adaptive harmonic coefficients (DAHCs), obtained by projecting the dataset onto the associated DAHMs, can be very efficiently modeled by a universal parametric family of simple nonlinear stochastic models: coupled Stuart-Landau oscillators stacked per frequency and synchronized across frequencies by the stochastic forcing. Despite the short altimetry record, the developed DAH-MSLM model provides skillful prediction of key dynamical and statistical features of sea level variability. References: M. D. Chekroun and D. Kondrashov, Data-adaptive harmonic spectra and multilayer Stuart-Landau models. HAL preprint, 2017, https://hal.archives-ouvertes.fr/hal-01537797
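
    The key numerical step, eigendecomposition of a matrix built from time-lagged spatial cross-correlations, can be sketched in a multichannel-SSA style as below. The synthetic series, window length, and covariance construction are illustrative and differ in detail from the actual DAH construction.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, M = 600, 3, 30   # time steps, spatial channels, embedding window

# Synthetic multichannel series: one shared oscillation plus noise.
t = np.arange(T)
x = (np.sin(2 * np.pi * t / 25)[:, None] * rng.standard_normal(D)
     + 0.5 * rng.standard_normal((T, D)))
x -= x.mean(axis=0)

# Matrix of time-lagged spatial cross-correlations: stack M lagged copies
# of every channel and form the covariance of the embedded series.
emb = np.hstack([np.roll(x, -lag, axis=0)[: T - M] for lag in range(M)])
C = emb.T @ emb / emb.shape[0]

# Symmetric eigendecomposition: a clean oscillation shows up as a pair of
# near-equal leading eigenvalues whose eigenvectors are in phase quadrature.
eigval = np.linalg.eigvalsh(C)[::-1]
print("leading eigenvalues:", np.round(eigval[:6], 3))
```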

  13. Cross-Layer Techniques for Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Yufeng Shan

    2005-02-01

    Real-time streaming of media over wireless networks is a challenging proposition due to the characteristics of video data and wireless channels. In this paper, we propose a set of cross-layer techniques for adaptive real-time video streaming over wireless networks, with adaptation to both the channel and the data. The proposed packetization scheme constructs the application-layer packet in such a way that it decomposes exactly into an integer number of equal-sized radio link protocol (RLP) packets. FEC codes are applied within an application packet at the RLP-packet level, rather than across different application packets, thereby reducing delay at the receiver. A priority-based ARQ, together with a scheduling algorithm, is applied at the application layer to retransmit only the corrupted RLP packets within an application-layer packet. Our approach combines the flexibility and programmability of application-layer adaptation with the low delay and bandwidth efficiency of link-layer techniques. Socket-level simulations are presented to verify the effectiveness of the approach.
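
    A minimal sketch of the packetization rule described above, assuming invented sizes and a single XOR parity packet as a stand-in for the paper's FEC code: the application payload is padded so it decomposes into an exact integer number of RLP-sized packets, and parity is added at RLP-packet granularity within the application packet.

```python
def packetize(payload: bytes, rlp_size: int) -> list[bytes]:
    """Pad the application-layer payload to an exact integer number of
    equal-sized RLP packets, then append one XOR parity packet (a toy
    stand-in for FEC applied at RLP-packet granularity)."""
    payload += bytes((-len(payload)) % rlp_size)          # zero-pad
    data = [payload[i:i + rlp_size] for i in range(0, len(payload), rlp_size)]
    parity = bytes(len(data[0]))
    for packet in data:
        parity = bytes(a ^ b for a, b in zip(parity, packet))
    return data + [parity]   # XOR parity recovers any single lost RLP packet

packets = packetize(b"example application layer packet", rlp_size=8)
print(len(packets), "RLP packets of", len(packets[0]), "bytes each")
```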

  14. On the feedback error compensation for adaptive modulation and coding scheme

    KAUST Repository

    Choi, Seyeong

    2011-11-25

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error, in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate. The benefit of feedback error compensation with adaptive combining is also quantified. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. Copyright (c) 2011 John Wiley & Sons, Ltd.

  15. Plasma GLP-2 levels and intestinal markers in the juvenile pig during intestinal adaptation

    DEFF Research Database (Denmark)

    Paris, Monique C; Fuller, Peter J; Carstensen, Bendix

    2004-01-01

    Adaptation of the residual small bowel following resection is dependent on luminal and humoral factors. We aimed to establish if circulating levels of glucagon-like peptide 2 (GLP-2) change under different dietary regimens following resection and to determine if there is a relationship between plasma GLP-2 levels and markers of intestinal adaptation. Four-week-old piglets underwent a 75% proximal small bowel resection (n = 31) or transection (n = 14). Postoperatively they received either pig chow (n = 14), nonpolymeric (elemental) infant formula (n = 7), or polymeric infant formula alone (n = 8) or supplemented either with fiber (n = 6) or with bovine colostrum protein concentrate (CPC; n = 10) for 8 weeks until sacrifice. Plasma GLP-2 levels were measured at weeks 0, 2, 4, and 8 postoperatively. In addition, end-stage parameters were studied at week 8, including weight gain, ileal villus height, crypt...

  16. RAID-6Plus: A Comprised Methodology for Extending RAID-6 Codes

    Directory of Open Access Journals (Sweden)

    Ming-Zhu Deng

    2017-01-01

    Existing RAID-6 code extensions assume that failures are independent and instantaneous, overlooking the underlying mechanism of multi-failure occurrences; the effect of the reconstruction window is also ignored. Additionally, these coding extensions have not been adapted to the occurrence patterns of failures in real-world applications. As a result, the third parity drive is set up to handle the triple-failure scenario, while the more common lower-order failure situations are left unattended. Therefore, a new methodology for extending RAID-6 codes, named RAID-6Plus and offering a better compromise, is studied in this paper. RAID-6Plus (Deng et al., 2015) employs short combinations, which can greatly reuse overlapped elements during reconstruction, to remake the third parity drive. A sample extension code called RDP+ is given, based on RDP. Moreover, we extend the study with another extension example, called X-code+, which has a better update penalty and load balance. The analysis shows that RAID-6Plus is a balanced trade-off among reliability, performance, and practicality. For instance, RDP+ achieves speedups as high as 33.4% over RTP with conventional rebuild, 11.9% over RTP with optimal rebuild, 47.7% over STAR with conventional rebuild, and 26.2% for a single-failure rebuild.
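
    The "short combination" idea can be illustrated with plain XOR parities, assuming an invented 8-block stripe and grouping (this is not the RDP+ or X-code+ layout): a single failed block is rebuilt from its short group, reading roughly half the blocks of a full-stripe parity rebuild.

```python
from functools import reduce

def xor(blocks):
    """Bytewise XOR of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [bytes([i]) * 4 for i in range(8)]   # eight toy 4-byte data blocks
p = xor(data)                               # full-stripe row parity ("P")

# Third parity from short combinations: XOR over small groups, so a single
# failed block is rebuilt from its group instead of the whole stripe.
groups = [(0, 1, 2, 3), (4, 5, 6, 7)]
r3 = [xor([data[i] for i in g]) for g in groups]

lost = 5                                    # simulate one failed block
g = next(g for g in groups if lost in g)
short = xor([data[i] for i in g if i != lost] + [r3[groups.index(g)]])
full = xor([data[i] for i in range(len(data)) if i != lost] + [p])
assert short == full == data[lost]
print(f"rebuilt block {lost}: read {len(g)} blocks (short) vs {len(data)} (full stripe)")
```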

  17. Amino acid fermentation at the origin of the genetic code.

    Science.gov (United States)

    de Vladar, Harold P

    2012-02-10

    There is evidence that the genetic code was established prior to the existence of proteins, when metabolism was powered by ribozymes. Also, early proto-organisms had to rely on simple anaerobic bioenergetic processes. In this work I propose that amino acid fermentation powered metabolism in the RNA world, and that this was facilitated by proto-adapters, the precursors of the tRNAs. Amino acids were used as carbon sources rather than as catalytic or structural elements. In modern bacteria, amino acid fermentation is known as the Stickland reaction. This pathway involves two amino acids: the first undergoes oxidative deamination, and the second acts as an electron acceptor through reductive deamination. This redox reaction results in two keto acids that are employed to synthesise ATP via substrate-level phosphorylation. The Stickland reaction is the basic bioenergetic pathway of some bacteria of the genus Clostridium. Two other facts support Stickland fermentation in the RNA world. First, several Stickland amino acid pairs are synthesised in abiotic amino acid synthesis. This suggests that amino acids that could be used as an energy substrate were freely available. Second, anticodons that have complementary sequences often correspond to amino acids that form Stickland pairs. The main hypothesis of this paper is that pairs of complementary proto-adapters were assigned to Stickland amino acids pairs. There are signatures of this hypothesis in the genetic code. Furthermore, it is argued that the proto-adapters formed double strands that brought amino acid pairs into proximity to facilitate their mutual redox reaction, structurally constraining the anticodon pairs that are assigned to these amino acid pairs. Significance tests which randomise the code are performed to study the extent of the variability of the energetic (ATP) yield. Random assignments can lead to a substantial yield of ATP and maintain enough variability, thus selection can act and refine the assignments

  18. Adapting to Rising Sea Level: A Florida Perspective

    Science.gov (United States)

    Parkinson, Randall W.

    2009-07-01

    Global climate change and concomitant rising sea level will have a profound impact on Florida's coastal and marine systems. Sea-level rise will increase erosion of beaches, cause saltwater intrusion into water supplies, inundate coastal marshes and other important habitats, and make coastal property more vulnerable to erosion and flooding. Yet most coastal areas are currently managed under the premise that sea-level rise is not significant and that shorelines are static or can be fixed in place by engineering structures. The new reality of sea-level rise and extreme weather due to climate change requires a new style of planning and management to protect resources and reduce risk to humans. Scientists must (1) assess existing coastal vulnerability to address short-term management issues and (2) model future landscape change and develop sustainable plans to address long-term planning and management issues. Furthermore, this information must be effectively transferred to planners, managers, and elected officials to ensure their decisions are based upon the best available information. While there is still some uncertainty regarding the details of rising sea level and climate change, development decisions are being made today which commit public and private investment in real estate and associated infrastructure. With a design life of 30 to 75 years or more, many of these investments are on a collision course with rising sea level, and the resulting impacts will be significant. In the near term, the use of engineering structures may be required, but these are not sustainable and must ultimately yield to "managed withdrawal" programs if higher sea-level elevations or rates of rise are forthcoming. As an initial step towards successful adaptation, coastal management and planning documents (i.e., comprehensive plans) must be revised to include reference to climate change and rising sea level.

  19. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics... and noise modeling and also learn from the previously decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate for the weaknesses of block-based SI generation and also utilizes clustering of DCT blocks to capture cross-band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively reestimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only reestimates the motion vectors...

  20. On locality of Generalized Reed-Muller codes over the broadcast erasure channel

    KAUST Repository

    Alloum, Amira; Lin, Sian Jheng; Al-Naffouri, Tareq Y.

    2016-01-01

    , and more specifically at the application layer, where Rateless, LDPC, Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application layer packet

  1. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package (a) can be built and tested as an entity and (b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
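
    Levelization as defined above can be checked mechanically. The sketch below uses Kahn's algorithm to reject dependency cycles and assign each package a level; the package set is hypothetical (the names are invented, not EAP's).

```python
from collections import defaultdict, deque

def levelize(deps):
    """Assign levels to packages if their 'uses' graph is acyclic;
    raise if a dependency cycle prevents levelization."""
    level = {}
    indeg = {p: 0 for p in deps}
    rdeps = defaultdict(list)           # package -> packages that use it
    for pkg, uses in deps.items():
        for u in uses:
            indeg[pkg] += 1
            rdeps[u].append(pkg)
    queue = deque(p for p, d in indeg.items() if d == 0)
    while queue:
        p = queue.popleft()
        level[p] = 1 + max((level[u] for u in deps[p]), default=0)
        for q in rdeps[p]:
            indeg[q] -= 1
            if indeg[q] == 0:
                queue.append(q)
    if len(level) != len(deps):
        raise ValueError("cycle among: " + ", ".join(sorted(set(deps) - set(level))))
    return level

# Hypothetical package set: each entry lists the packages it uses.
print(levelize({"util": [], "mesh": ["util"], "physics": ["mesh", "util"],
                "driver": ["physics", "mesh"]}))
```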

  2. Plasma β-endorphin and stress hormone levels during adaptation and stress

    Energy Technology Data Exchange (ETDEWEB)

    Lishmanov, Yu.B.; Trifonova, Zh.V.; Tsibin, A.N.; Maslova, L.V.; Dement'eva, L.A.

    1987-09-01

    This paper describes a comparative study of β-endorphin and stress hormone levels in the blood plasma of rats during stress and adaptation. Immunoreactive β-endorphin in the blood plasma was assayed by means of a kit after preliminary isolation of the β-endorphin fraction by affinity chromatography on Sepharose; ACTH was assayed with a kit, and cortisol, insulin, thyroxine and tri-iodothyronine by means of kits from Izotop. Determination of plasma levels of β-endorphin and other opioids could evidently be an important method of assessing the state of resistance of the organism to stress.

  3. Adaptation level as a basic health status characteristic: possibilities for its assessment and forecasting of maladaptation disorders

    Directory of Open Access Journals (Sweden)

    Vysochyna I.L.

    2015-09-01

    Full Text Available On the basis of comprehensive survey with integrative assessment of health state (medical history data, physical examination, anthropometry, battery of psychological tests (Eysenck, Shmishek’s Personality Inventory (teen version, tapping - test by E.P. Ilyin, children's questionnaire of neuroses; test for rapid assessment of health, activity and mood, anxiety diagnosis by Spielberg - Khanin; Luscher test, color relations test level of adaptation was defined in 236 children from orphanages aged from 4 to 18 years. The manifestations of maladjustment were registered both on psychological level (neuroticism, high anxiety, decreased performance, activity and psychological endurance, sleep disturbance, presence of accentuation and neurotic disorders and somatic level (recurrent acute respiratory infections, poor physical development, exacerbation of chronic foci of infection and burdened biological history; this summarizes conclusions on a low level of health status of children in orphanages. The author has developed mathematical models of adaptation assessment and prediction of desadaptation, which allowed to identify children at risk for the development of adaptation disorders and children with maladjustment; according to the level and severity of maladaptive disorders correction programs are designed.

  4. Food Image Recognition via Superpixel Based Low-Level and Mid-Level Distance Coding for Smart Home Applications

    Directory of Open Access Journals (Sweden)

    Jiannan Zheng

    2017-05-01

    Food image recognition is a key enabler for many smart home applications, such as the smart kitchen and smart personal nutrition logging. In order to improve living experience and life quality, smart home systems collect valuable insights into users' preferences, nutrition intake and health conditions via accurate and robust food image recognition. In addition, efficiency is also a major concern, since many smart home applications are deployed on mobile devices where high-end GPUs are not available. In this paper, we investigate compact and efficient food image recognition methods, namely low-level and mid-level approaches. Considering the real application scenario where only limited and noisy data are available, we first propose a superpixel-based Linear Distance Coding (LDC) framework in which distinctive low-level food image features are extracted to improve performance. On a challenging small food image dataset where only 12 training images are available per category, our framework shows superior performance in both accuracy and robustness. In addition, to better model deformable food part distributions, we extend LDC's feature-to-class distance idea and propose a mid-level superpixel food parts-to-class distance mining framework. The proposed framework shows superior performance on benchmark food image datasets compared to other low-level and mid-level approaches in the literature.
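
    A simplified sketch of the feature-to-class distance idea underlying LDC, with random stand-ins for superpixel features and per-class prototype codebooks; the pooling and normalization details of the actual framework are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
C, P, D = 5, 20, 16     # classes, prototypes per class, feature dimension
prototypes = rng.standard_normal((C, P, D))   # stand-in class codebooks

def ldc_encode(features):
    """Map each local feature to its distance to the nearest prototype of
    every class, then min-pool over the image's features (a simplified
    feature-to-class distance coding, not the paper's exact formulation)."""
    # diff[n, c, p, :] = feature n minus prototype (c, p)
    diff = features[:, None, None, :] - prototypes[None, :, :, :]
    dists = np.linalg.norm(diff, axis=-1).min(axis=-1)   # nearest per class
    return dists.min(axis=0)                             # pool over features

img_features = rng.standard_normal((50, D))   # e.g., 50 superpixel descriptors
code = ldc_encode(img_features)
print("image code (one distance per class):", np.round(code, 2))
```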

  5. SPRINT: A Tool to Generate Concurrent Transaction-Level Models from Sequential Code

    Directory of Open Access Journals (Sweden)

    Richard Stahl

    2007-01-01

    A high-level concurrent model such as a SystemC transaction-level model can provide early feedback during the exploration of implementation alternatives for state-of-the-art signal processing applications like video codecs on a multiprocessor platform. However, the creation of such a model starting from sequential code is a time-consuming and error-prone task. It is typically done only once, if at all, for a given design. This lack of exploration of the design space often leads to a suboptimal implementation. To support our systematic C-based design flow, we have developed a tool to generate a concurrent SystemC transaction-level model for user-selected task boundaries. Using this tool, different parallelization alternatives have been evaluated during the design of an MPEG-4 simple profile encoder and an embedded zero-tree coder. Generation plus evaluation of an alternative was possible in less than six minutes. This is fast enough to allow extensive exploration of the design space.

  6. Design of ACM system based on non-greedy punctured LDPC codes

    Science.gov (United States)

    Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng

    2017-08-01

    In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method which shows good performance in the high-code-rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is lowered. Simulations show that the proposed ACM system obtains increasingly significant coding gains together with higher throughput.
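
    The ACM principle itself reduces to a threshold lookup, sketched below with invented mode thresholds (the paper's rate-compatible LDPC modes and measured switching points would replace them): the transmitter picks the highest-efficiency mode whose SNR requirement the current channel estimate satisfies.

```python
# (mode name, spectral efficiency in b/s/Hz, required SNR in dB);
# the threshold values are illustrative, not the paper's switching points.
MODES = [
    ("BPSK,  rate 1/2 LDPC", 0.5, 1.0),
    ("QPSK,  rate 1/2 LDPC", 1.0, 4.0),
    ("QPSK,  rate 3/4 LDPC", 1.5, 6.5),
    ("16QAM, rate 3/4 LDPC", 3.0, 12.0),
    ("64QAM, rate 5/6 LDPC", 5.0, 18.0),
]

def select_mode(snr_db):
    """Highest-efficiency mode whose SNR threshold is met (else most robust)."""
    feasible = [m for m in MODES if snr_db >= m[2]]
    return max(feasible, key=lambda m: m[1]) if feasible else MODES[0]

for snr in (2, 7, 20):
    name, eff, _ = select_mode(snr)
    print(f"{snr:5.1f} dB -> {name} ({eff} b/s/Hz)")
```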

  7. Adaptation.

    Science.gov (United States)

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used: individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources, but also for the actions and stimuli which are part of the mechanism that has evolved to obtain those resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  8. Exploring students’ adaptive reasoning skills and van Hiele levels of geometric thinking: a case study in geometry

    Science.gov (United States)

    Rizki, H. T. N.; Frentika, D.; Wijaya, A.

    2018-03-01

    This study aims to explore junior high school students' adaptive reasoning and van Hiele levels of geometric thinking. The study was a quasi-experiment with a non-equivalent control group design. The participants were 34 seventh graders and 35 eighth graders in the experiment classes and 34 seventh graders and 34 eighth graders in the control classes. The students in the experiment classes learned geometry with the Knisley mathematical learning model. The data were analyzed quantitatively using inferential statistics. The results show an improvement in adaptive reasoning skills in both grade seven and grade eight, as well as an improvement in the van Hiele level of geometric thinking. These results indicate the positive impact of the Knisley learning model on students' adaptive reasoning skills and van Hiele level of geometric thinking.

  9. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Executive summary

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1985-05-01

    This document was written to provide guidance to managers and site operators on how ground-water transport codes should be selected for assessing burial site performance. There is a need for a formal approach to selecting appropriate codes from the multitude of potentially useful ground-water transport codes that are currently available. Code selection is a problem that requires more than merely considering mathematical equation-solving methods. These guidelines are general and flexible, and are also meant for developing systems simulation models to be used to assess the environmental safety of low-level waste burial facilities. Code selection is only a single aspect of the overall objective of developing a systems simulation model for a burial site. The guidance given here is mainly directed toward applications-oriented users, but managers and site operators need to be familiar with this information to direct the development of scientifically credible and defensible transport assessment models. Specific advice for managers and site operators on how to direct a modeling exercise is based on the following five steps: identify specific questions and study objectives; establish costs and schedules for achieving answers; enlist the aid of a professional model applications group; decide on the approach with the applications group and guide code selection; and facilitate the availability of site-specific data. These five steps are discussed in detail following an explanation of the nine systems model development steps, which are presented first to clarify what code selection entails.

  10. Achieving 95% probability level using best estimate codes and the code scaling, applicability and uncertainty (CSAU) [Code Scaling, Applicability and Uncertainty] methodology

    International Nuclear Information System (INIS)

    Wilson, G.E.; Boyack, B.E.; Duffey, R.B.; Griffith, P.; Katsma, K.R.; Lellouche, G.S.; Rohatgi, U.S.; Wulff, W.; Zuber, N.

    1988-01-01

    Issuance of a revised rule for loss-of-coolant accident/emergency core cooling system (LOCA/ECCS) analysis of light water reactors will allow the use of best-estimate (BE) computer codes in safety analysis, together with uncertainty analysis. This paper describes a systematic methodology, CSAU (Code Scaling, Applicability and Uncertainty), which provides uncertainty bounds in a cost-effective, auditable, rational and practical manner. 8 figs., 2 tabs

  11. Climate change adaptation under uncertainty in the developing world: A case study of sea level rise in Kiribati

    Science.gov (United States)

    Donner, S. D.; Webber, S.

    2011-12-01

    Climate change is expected to have the greatest impact in parts of the developing world. At the 2010 meeting of the U.N. Framework Convention on Climate Change in Cancun, industrialized countries agreed in principle to provide US$100 billion per year by 2020 to assist the developing world in responding to climate change. This "Green Climate Fund" is a critical step towards addressing the challenge of climate change. However, the policy and discourse on supporting adaptation in the developing world remain highly idealized. For example, the efficacy of "no regrets" adaptation efforts or of "mainstreaming" adaptation into decision-making is rarely evaluated in the real world. In this presentation, I will discuss the gap between adaptation theory and practice using a multi-year case study of the cultural, social and scientific obstacles to adapting to sea level rise in the Pacific atoll nation of Kiribati. Our field research reveals how scientific and institutional uncertainty can limit international efforts to fund adaptation and lead to spiraling costs. Scientific uncertainty about hyper-local impacts of sea level rise, though irreducible, can at times stall decision-making about adaptation measures, contrary to the notion that "good" decision-making practices can incorporate scientific uncertainty. Efforts to improve institutional capacity must be made carefully, or they risk inadvertently slowing the implementation of adaptation measures and increasing the likelihood of maladaptation.

  12. Purifying selection acts on coding and non-coding sequences of paralogous genes in Arabidopsis thaliana.

    Science.gov (United States)

    Hoffmann, Robert D; Palmgren, Michael

    2016-06-13

    Whole-genome duplications in the ancestors of many diverse species provided the genetic material for evolutionary novelty. Several models explain the retention of paralogous genes. However, how these models are reflected in the evolution of coding and non-coding sequences of paralogous genes is unknown. Here, we analyzed the coding and non-coding sequences of paralogous genes in Arabidopsis thaliana and compared these sequences with those of orthologous genes in Arabidopsis lyrata. Paralogs with lower expression than their duplicate had more nonsynonymous substitutions, were more likely to fractionate, and exhibited less similar expression patterns with their orthologs in the other species. Also, lower-expressed genes had greater tissue specificity. Orthologous conserved non-coding sequences in the promoters, introns, and 3' untranslated regions were less abundant at lower-expressed genes compared to their higher-expressed paralogs. A gene ontology (GO) term enrichment analysis showed that paralogs with similar expression levels were enriched in GO terms related to ribosomes, whereas paralogs with different expression levels were enriched in terms associated with stress responses. Loss of conserved non-coding sequences in one gene of a paralogous gene pair correlates with reduced expression levels that are more tissue specific. Together with increased mutation rates in the coding sequences, this suggests that similar forces of purifying selection act on coding and non-coding sequences. We propose that coding and non-coding sequences evolve concurrently following gene duplication.

  13. HETFIS: High-Energy Nucleon-Meson Transport Code with Fission

    International Nuclear Information System (INIS)

    Barish, J.; Gabriel, T.A.; Alsmiller, F.S.; Alsmiller, R.G. Jr.

    1981-07-01

    A model that includes fission for predicting particle production spectra from medium-energy nucleon and pion collisions with nuclei (Z greater than or equal to 91) has been incorporated into the nucleon-meson transport code HETC. This report is primarily concerned with the programming aspects of HETFIS (High-Energy Nucleon-Meson Transport Code with Fission). A description of the program data and instructions for operating the code are given. HETFIS is written in FORTRAN IV for IBM computers and is readily adaptable to other systems.

  14. Transcultural adaptation of the Breast Cancer Awareness Measure.

    Science.gov (United States)

    Al-Khasawneh, E M; Leocadio, M; Seshan, V; Siddiqui, S T; Khan, A N; Al-Manaseer, M M

    2016-09-01

    The aim was to overcome the lack of a validated and robust Arabic instrument to measure breast cancer awareness. Currently, there is no validated Arabic instrument for measuring breast cancer awareness levels. We adapted, translated and validated the Breast Cancer Awareness Measure developed by Cancer Research UK. The instrument was translated into Arabic and back-translated for validation. Validation and reliability tests were conducted with a purposive sample of 972 Arab women older than 20 years living in Oman. The adapted content was validated by a panel of medical, linguistic and cultural experts, followed by cognitive interviews (n = 10), behavioural coding (n = 30) and criterion validation (n = 646). The instrument was tested for acceptability and its subscales for internal consistency. Inter-rater reliability was estimated between two similar groups (n = 144 and n = 142) to test homogeneity. The adapted and translated instrument had high acceptability (98.7% completed). The validation process shaped the adaptation and resulted in strong criterion validity (R = 0.58, P < 0.001). The adapted Breast Cancer Awareness Measure is a robust Arabic instrument for the measurement of breast cancer awareness and early detection practices among Arab women. The purposively selected sample may not be representative of the population. Improvement of awareness and early detection of breast cancer can contribute towards reducing mortality from the disease. The adapted instrument has policy implications, since measurement of awareness levels is essential for breast health promotion policies in Arab countries.

  15. Adaptive Forward Error Correction for Energy Efficient Optical Transport Networks

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2013-01-01

    In this paper we propose a novel scheme for on-the-fly code rate adjustment for forward error correction (FEC) codes on optical links. The proposed scheme makes it possible to adjust the code rate independently for each optical frame. This allows for seamless rate adaptation based on the link state...
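
    As a toy illustration of this kind of per-frame rate adaptation (a minimal sketch, not the authors' scheme; the mode table and BER thresholds below are invented), a transmitter can map the measured pre-FEC bit error rate of the link to the least redundant code that still protects the frame:

        # Hypothetical per-frame FEC rate selection; thresholds are illustrative only.
        # Each mode: (label, code rate, max tolerable pre-FEC BER), least protective first.
        FEC_MODES = [
            ("rate 8/9", 8 / 9, 1e-5),
            ("rate 5/6", 5 / 6, 1e-4),
            ("rate 3/4", 3 / 4, 1e-3),
            ("rate 2/3", 2 / 3, 5e-3),
        ]

        def select_rate(pre_fec_ber: float):
            """Return the highest-throughput mode whose BER threshold is not exceeded."""
            for label, rate, max_ber in FEC_MODES:
                if pre_fec_ber <= max_ber:
                    return label, rate
            return FEC_MODES[-1][0], FEC_MODES[-1][1]  # fall back to strongest code

        # One decision per optical frame, so the rate can follow the link state closely.
        for ber in (2e-6, 3e-4, 8e-3):
            print(ber, select_rate(ber))

    Because the decision is made independently for each frame, throughput tracks the link state with frame-level granularity, which is the seamless adaptation the abstract refers to.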

  16. Link adaptation algorithm for distributed coded transmissions in cooperative OFDMA systems

    DEFF Research Database (Denmark)

    Varga, Mihaly; Badiu, Mihai Alin; Bota, Vasile

    2015-01-01

    This paper proposes a link adaptation algorithm for cooperative transmissions in the down-link connection of an OFDMA-based wireless system. The algorithm aims at maximizing the spectral efficiency of a relay-aided communication link, while satisfying the block error rate constraints at both... adaptation algorithm has linear complexity with the number of available resource blocks, while still providing very good performance, as shown by simulation results...

  17. A modified carrier-to-code leveling method for retrieving ionospheric observables and detecting short-term temporal variability of receiver differential code biases

    Science.gov (United States)

    Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Xiao; Li, Min

    2018-03-01

    Sensing the ionosphere with the global positioning system involves two sequential tasks, namely the ionospheric observable retrieval and the ionospheric parameter estimation. A prominent source of error has long been identified as short-term variability in receiver differential code bias (rDCB). We modify the carrier-to-code leveling (CCL), a method commonly used to accomplish the first task, through assuming rDCB to be unlinked in time. Aside from the ionospheric observables, which are affected by, among others, the rDCB at one reference epoch, the Modified CCL (MCCL) can also provide the rDCB offsets with respect to the reference epoch as by-products. Two consequences arise. First, MCCL is capable of excluding the effects of time-varying rDCB from the ionospheric observables, which, in turn, improves the quality of ionospheric parameters of interest. Second, MCCL has significant potential as a means to detect between-epoch fluctuations experienced by rDCB of a single receiver.
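
    For context, a minimal sketch of the conventional CCL step that MCCL modifies (toy numbers, not the paper's data): within one continuous arc, the precise but biased geometry-free carrier phase is leveled onto the noisy but unbiased geometry-free code observable by their arc-averaged difference. MCCL differs in that it does not fold a time-varying rDCB into this single constant level, but estimates per-epoch rDCB offsets as by-products:

        import numpy as np

        # Toy carrier-to-code leveling (CCL) over one continuous arc.
        # p_gf: geometry-free code observable, noisy but unambiguous.
        # l_gf: geometry-free carrier phase, precise but offset by a constant arc bias.
        rng = np.random.default_rng(0)
        iono_true = 5.0 + 0.01 * np.arange(100)            # slant ionospheric delay (m)
        p_gf = iono_true + 0.3 * rng.standard_normal(100)  # code noise ~0.3 m
        l_gf = iono_true + 7.2                             # arc bias (ambiguity + DCBs)

        level = np.mean(l_gf - p_gf)     # arc-averaged offset between phase and code
        iono_ccl = l_gf - level          # leveled ionospheric observable

        print(np.abs(iono_ccl - iono_true).max())  # small: code noise is averaged down
        # MCCL would additionally estimate per-epoch rDCB offsets instead of folding
        # a time-varying rDCB into the single constant 'level'.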

  18. Adaptation to the Impacts of Sea Level Rise in the Nile Delta Coastal ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Outputs: Journal articles: Facing the Tide - REVOLVE Magazine: Water Around the Mediterranean (PDF available). Reports: Adaptation to the impacts of sea level rise in the Nile Delta coastal zone, Egypt: final project report (PDF available). ...

  19. Limits on the adaptability of coastal marshes to rising sea level

    Science.gov (United States)

    Kirwan, Matthew L.; Guntenspergen, Glenn R.; D'Alpaos, Andrea; Morris, James T.; Mudd, Simon M.; Temmerman, Stijn

    2010-01-01

    Assumptions of a static landscape inspire predictions that about half of the world's coastal wetlands will submerge during this century in response to sea-level acceleration. In contrast, we use simulations from five numerical models to quantify the conditions under which ecogeomorphic feedbacks allow coastal wetlands to adapt to projected changes in sea level. In contrast to previous sea-level assessments, we find that non-linear feedbacks among inundation, plant growth, organic matter accretion, and sediment deposition allow marshes to survive conservative projections of sea-level rise where suspended sediment concentrations are greater than ~20 mg/L. Under scenarios of more rapid sea-level rise (e.g., those that include ice sheet melting), marshes will likely submerge near the end of the 21st century. Our results emphasize that in areas of rapid geomorphic change, predicting the response of ecosystems to climate change requires consideration of the ability of biological processes to modify their physical environment.

  20. Integrating conservation costs into sea level rise adaptive conservation prioritization

    Directory of Open Access Journals (Sweden)

    Mingjian Zhu

    2015-07-01

    Biodiversity conservation requires strategic investment, as resources for conservation are often limited. As sea level rises, it is important and necessary to consider both sea level rise and costs in conservation decision making. In this study, we consider costs of conservation in an integrated modeling process that incorporates a geomorphological model (SLAMM), species habitat models, and conservation prioritization (Zonation) to identify conservation priorities in the face of landscape dynamics due to sea level rise in the Matanzas River basin of northeast Florida. Conservation priorities change significantly when land costs are integrated into the planning process. The comparison demonstrates that some areas with high conservation value might be identified as lower priorities when economic costs are included, and some areas with low conservation value might be identified as high priorities. This research could help coastal resource managers make informed decisions about where and how to allocate conservation resources more wisely to facilitate biodiversity adaptation to sea level rise.

  1. State of Mechanisms of Adaptation to Teaching Loads for High-school Students with Different Levels of Professional Preparedness

    Directory of Open Access Journals (Sweden)

    G.N. Danilenko

    2013-04-01

    An evaluation of the functional adaptability of 69 high-school students with different levels of professional preparedness was carried out. The dynamics of heart rate variability and hemodynamic indices during the academic year were studied. Differences in adaptive capacity were shown, depending on the personal characteristics of the students and the adolescents' level of preparedness for professional choice.

  2. Global cost analysis on adaptation to sea level rise based on RCP/SSP scenarios

    Science.gov (United States)

    Kumano, N.; Tamura, M.; Yotsukuri, M.; Kuwahara, Y.; Yokoki, H.

    2017-12-01

    Low-lying areas are the most vulnerable to sea level rise (SLR) due to climate change in the future. In order to adapt to SLR, it is necessary to decide whether to retreat from vulnerable areas or to install dykes to protect them from inundation. Cost analysis of adaptation using coastal dykes is therefore one of the most essential issues in the context of climate change and its countermeasures. However, few studies have globally evaluated the future costs of adaptation in coastal areas. This study analyzes the cost of adaptation in coastal areas globally. First, global distributions of projected inundation impacts induced by SLR, including astronomical high tide, were assessed. Economic damage was estimated on the basis of the econometric relationship between past hydrological disasters, affected population, and per capita GDP using CRED's EM-DAT database. Second, the cost of adaptation was determined using a cost database and future scenarios. The authors have built a cost database of installed coastal dykes worldwide and applied it to estimating the future cost of adaptation. The unit costs of dyke construction increase with socio-economic development (e.g., per capita GDP) under the shared socio-economic pathway (SSP) scenarios. The length of vulnerable coastline is calculated by identifying inundation areas using ETOPO1. Future cost is obtained by multiplying the length of vulnerable coastline by the unit cost of dyke construction. Third, the effectiveness of dyke construction was estimated by comparing cases with and without adaptation. As a result, it was found that the incremental adaptation cost is lower than the economic damage in the cases of SSP1 and SSP3 under the RCP scenarios, while the cost of adaptation depends on the durability of the coastal dykes.

  3. Climate Change Adaptation Tools at the Community Level: An Integrated Literature Review

    Directory of Open Access Journals (Sweden)

    Elvis Modikela Nkoana

    2018-03-01

    The negative impacts of climate change are experienced at the global, regional and local levels. However, rural communities in sub-Saharan Africa face socio-political, cultural and economic challenges in addition to climate change. Decision support tools have been developed and applied to assist rural communities in coping with and adapting to climate change. However, poorly planned participatory processes and the lack of context-specific approaches in these tools are obstacles to strengthening the resilience of these rural communities. This paper uses an integrated literature review to identify best practices for involving rural communities in climate change adaptation efforts through the application of context-specific and culturally sensitive climate change adaptation tools. These best practices include the use of a livelihoods approach to engage communities; the explicit acknowledgement of local cultural do's and don'ts; the recognition of local champions appointed from within the local community; the identification and prioritisation of vulnerable stakeholders; and the implementation of two-way climate change risk communication instead of a one-sided information-sharing approach.

  4. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    Science.gov (United States)

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-04-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering--CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data gathering epoch begins; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we formulate an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes--MLMS) and implement a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments both on ocean temperature datasets and on a practical network deployment also prove the effectiveness of the proposed feedback CDG scheme.
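
    As a toy illustration of the adaptive measurement-formation loop (a sketch of generic compressed sensing with feedback, not the paper's MLMS construction; sizes and the termination threshold are invented), the sink keeps requesting coded measurements until a sparse recovery of the readings stabilises:

        import numpy as np

        rng = np.random.default_rng(1)
        n, k = 64, 4                        # sensor nodes; sparsity of the readings
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)

        def omp(A, y, n_iter):
            """Orthogonal matching pursuit: greedy sparse recovery of x from y = A @ x."""
            resid, support = y.copy(), []
            for _ in range(n_iter):
                support.append(int(np.argmax(np.abs(A.T @ resid))))
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                resid = y - A[:, support] @ coef
            x_hat = np.zeros(A.shape[1])
            x_hat[support] = coef
            return x_hat, np.linalg.norm(resid)

        # The sink requests a few more (network-coded) random combinations of the
        # readings per round, until the recovery residual meets a termination rule.
        A, y = np.empty((0, n)), np.empty(0)
        while A.shape[0] < n:
            A = np.vstack([A, rng.normal(0.0, 1.0, (4, n)) / np.sqrt(n)])
            y = A @ x                                  # noiseless toy measurements
            x_hat, r = omp(A, y, k)
            if r < 1e-8:                               # termination rule (toy)
                break
        print(A.shape[0], np.allclose(x_hat, x, atol=1e-6))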

  5. Impacts of Model Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sivaraman, Deepak [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Douglas B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bartlett, Rosemarie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code's requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes and are considered in this assessment.

  6. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated with the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can easily be adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  7. The Aster code

    International Nuclear Information System (INIS)

    Delbecq, J.M.

    1999-01-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (material behaviour, large deformations, specific loads, unloading and loss-of-load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy release rate, energy release rate in thermo-elasto-plasticity, 3D local energy release rate, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and random dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  8. WWER core pattern enhancement using adaptive improved harmony search

    International Nuclear Information System (INIS)

    Nazari, T.; Aghaie, M.; Zolfaghari, A.; Minuchehr, A.; Norouzi, A.

    2013-01-01

    Highlights: ► The classical and improved harmony search algorithms are introduced. ► The advantage of IHS is demonstrated on Shekel's Foxholes. ► CHS and IHS are compared with other heuristic algorithms. ► The adaptive improved harmony search is applied to two cases. ► Two WWER cores are optimized for the BOC FA pattern. - Abstract: The efficient operation and fuel management of PWRs are of utmost importance. Core performance analysis constitutes an essential phase in core fuel management optimization. Finding an optimum core arrangement for the loading of fuel assemblies (FAs) in a nuclear core is a complex problem. In this paper, the application of classical harmony search (HS) and adaptive improved harmony search (IHS) to loading pattern (LP) design for pressurized water reactors is described. In this analysis, the main objective is to find the core pattern that attains the maximum multiplication factor, k_eff, while respecting the maximum allowable power peaking factor (PPF). Therefore an HS-based LP optimization code was prepared, and the CITATION neutronics code was applied to obtain the effective multiplication factor, neutron fluxes and power density in the candidate cores. Built on the adaptive improved harmony search and the neutronics code, the resulting LP optimization code is applicable to PWR cores with large numbers of FAs. As a first step, the efficiencies of HS and IHS are compared with some other heuristic algorithms on Shekel's Foxholes problem, and the capability of the adaptive improved harmony search is demonstrated; the results show efficient application of IHS. As a second step, two WWER cases are studied, and IHS produced improved core patterns with regard to the mentioned objective functions.
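
    As a generic illustration of the harmony-search loop underlying such tools (a toy sketch, not the authors' reactor code; the objective function below is an invented stand-in for the neutronics evaluation of a candidate core):

        import numpy as np

        rng = np.random.default_rng(2)

        def objective(x):                      # stand-in for the core evaluation step
            return -np.sum((x - 0.3) ** 2)     # (e.g. k_eff penalised by PPF); maximise

        dim, hms, iters = 5, 10, 2000
        hmcr, par, bw = 0.9, 0.3, 0.05         # memory choice, pitch adjust, bandwidth
        memory = rng.uniform(-1.0, 1.0, (hms, dim))      # harmony memory
        scores = np.array([objective(h) for h in memory])

        for _ in range(iters):
            new = np.empty(dim)
            for j in range(dim):
                if rng.random() < hmcr:                  # draw value from memory...
                    new[j] = memory[rng.integers(hms), j]
                    if rng.random() < par:               # ...optionally pitch-adjusted
                        new[j] += bw * rng.uniform(-1.0, 1.0)
                else:                                    # ...or sample at random
                    new[j] = rng.uniform(-1.0, 1.0)
            s = objective(new)
            worst = int(np.argmin(scores))
            if s > scores[worst]:                        # replace the worst harmony
                memory[worst], scores[worst] = new, s

        print(memory[int(np.argmax(scores))])            # converges toward [0.3]*5
        # The improved/adaptive variants (IHS) additionally shrink bw and raise par
        # over the iterations instead of keeping them fixed.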

  9. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted so that inter frames have more bit resources, to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.
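
    As a toy illustration of model-based CTU-level bit allocation (not the MLGT method itself): with a hyperbolic R-D model D_i(b_i) = c_i / b_i per CTU, minimising the total distortion under a frame budget B has the closed-form solution b_i proportional to sqrt(c_i). The complexities below are invented; in the paper's framework, learned classifiers supply the per-CTU model parameters instead:

        import numpy as np

        def allocate_bits(c, budget):
            """Minimise sum(c_i / b_i) subject to sum(b_i) = budget.
            Lagrangian solution: b_i proportional to sqrt(c_i)."""
            w = np.sqrt(c)
            return budget * w / w.sum()

        c = np.array([4.0, 1.0, 9.0, 16.0])    # per-CTU complexity estimates (invented)
        b = allocate_bits(c, budget=3000.0)
        print(b)                               # more bits go to more complex CTUs
        print(np.sum(c / b))                   # total distortion under the toy model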

  10. UEP Concepts in Modulation and Coding

    Directory of Open Access Journals (Sweden)

    Werner Henkel

    2010-01-01

    First unequal error protection (UEP) proposals date back to the 1960s (Masnick and Wolf, 1967), but with the introduction of scalable video, UEP has developed into a key concept for the transport of multimedia data. The paper presents an overview of some new approaches realizing UEP properties in physical transport, especially multicarrier modulation, or with LDPC and Turbo codes. For multicarrier modulation, UEP bit-loading together with hierarchical modulation is described, allowing for an arbitrary number of classes, arbitrary SNR margins between the classes, and an arbitrary number of bits per class. In Turbo coding, pruning, as a counterpart of puncturing, is presented for flexible bit-rate adaptation, including tables with optimized pruning patterns. Bit- and/or check-irregular LDPC codes may be designed to provide UEP to their code bits. However, irregular degree distributions alone do not ensure UEP, and other necessary properties of the parity-check matrix for providing UEP are also pointed out. Pruning is also the means for constructing variable-rate LDPC codes for UEP, especially for controlling the check-node profile.
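
    For readers unfamiliar with puncturing, the counterpart of the pruning discussed above, a minimal sketch (the mask is a common textbook pattern, not one of the paper's optimized patterns): a rate-1/2 mother code is raised to rate 2/3 by periodically deleting coded bits, and the decoder treats the deleted positions as erasures:

        # Puncture a rate-1/2 code to rate 2/3: for every 4 coded bits (2 info bits),
        # transmit only 3. Mask entries: 1 = transmit, 0 = delete.
        MASK = [1, 1, 1, 0]

        def puncture(coded_bits):
            return [b for i, b in enumerate(coded_bits) if MASK[i % len(MASK)]]

        def depuncture(received, n_coded):
            """Re-insert erasures (None) at punctured positions before decoding."""
            out, it = [], iter(received)
            for i in range(n_coded):
                out.append(next(it) if MASK[i % len(MASK)] else None)
            return out

        coded = [0, 1, 1, 0, 1, 1, 0, 0]     # 8 coded bits from 4 info bits (rate 1/2)
        tx = puncture(coded)                 # 6 bits on the channel -> rate 4/6 = 2/3
        print(tx, depuncture(tx, len(coded)))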

  11. Delay Estimation in Long-Code Asynchronous DS/CDMA Systems Using Multiple Antennas

    Directory of Open Access Journals (Sweden)

    Sirbu Marius

    2004-01-01

    The problem of propagation delay estimation in asynchronous long-code DS-CDMA multiuser systems is addressed. Almost all of the methods proposed so far in the literature for propagation delay estimation are derived for short codes, and knowledge of the codes is exploited by the estimators. In long-code CDMA, the spreading code is aperiodic, and the methods developed for short codes may not be usable or may increase the complexity significantly. For example, in subspace-based estimators, the aperiodic nature of the code may require subspace tracking. In this paper we propose a novel method for the simultaneous estimation of the propagation delays of several active users. A specific multiple-input multiple-output (MIMO) system model is constructed in a multiuser scenario. In this model the channel matrix contains information about both the users' propagation delays and the channel impulse responses. Consequently, estimates of the delays are obtained as a by-product of the channel estimation task. The channel matrix has a special structure that is exploited in estimating the delays. The proposed delay estimation method lends itself to an adaptive implementation; thus, it may be applied to joint channel and delay estimation in uplink DS-CDMA, analogously to the method presented by the authors in 2003. The performance of the proposed method is studied in simulation using a realistic time-varying channel model and different SNR levels, in the face of near-far effects, and using a low spreading factor (high data rates).

  12. Gyroaveraging operations using adaptive matrix operators

    Science.gov (United States)

    Dominski, Julien; Ku, Seung-Hoe; Chang, Choong-Seock

    2018-05-01

    A new adaptive scheme to be used in particle-in-cell codes for carrying out gyroaveraging operations with matrices is presented. This new scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line-following manner while preserving the adiabatic magnetic moment μ. These choices improve the accuracy of the gyroaveraging operations performed with matrices even when strong spatial variation of temperature and magnetic field is present. The accuracy of the scheme has been studied in different geometries, from simple 2D slab geometry to a realistic 3D toroidal equilibrium. A successful implementation in the gyrokinetic code XGC is presented in the delta-f limit.
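
    A minimal sketch of the underlying operation (not the XGC matrix implementation; the field and grid spacing are invented): gyroaveraging samples the field on a ring of Larmor radius rho, and an adaptive scheme spends more quadrature points where the local rho spans more grid cells:

        import numpy as np

        def gyroaverage(field, x, y, rho, n_points):
            """Average field(x, y) over a ring of radius rho (the gyro-orbit)."""
            angles = 2.0 * np.pi * np.arange(n_points) / n_points
            return np.mean([field(x + rho * np.cos(a), y + rho * np.sin(a))
                            for a in angles])

        def adaptive_n(rho, dx, minimum=4):
            """Spend more quadrature points when the orbit spans many grid cells."""
            return max(minimum, int(np.ceil(2.0 * np.pi * rho / dx)))

        field = lambda x, y: np.cos(x) * np.cos(y)     # toy field
        for rho in (0.05, 0.5, 2.0):
            n = adaptive_n(rho, dx=0.1)
            print(rho, n, gyroaverage(field, 0.0, 0.0, rho, n))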

  13. Learning of spatio-temporal codes in a coupled oscillator system.

    Science.gov (United States)

    Orosz, Gábor; Ashwin, Peter; Townley, Stuart

    2009-07-01

    In this paper, we consider a learning strategy that allows one to transmit information between two coupled phase oscillator systems (called teaching and learning systems) via frequency adaptation. The dynamics of these systems can be modeled with reference to a number of partially synchronized cluster states and transitions between them. Forcing the teaching system by steady but spatially nonhomogeneous inputs produces cyclic sequences of transitions between the cluster states, that is, information about inputs is encoded via a "winnerless competition" process into spatio-temporal codes. The large variety of codes can be learned by the learning system that adapts its frequencies to those of the teaching system. We visualize the dynamics using "weighted order parameters (WOPs)" that are analogous to "local field potentials" in neural systems. Since spatio-temporal coding is a mechanism that appears in olfactory systems, the developed learning rules may help to extract information from these neural ensembles.
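
    A minimal sketch of learning by frequency adaptation between two phase-oscillator systems (a toy in the spirit of the paper; coupling and adaptation constants are invented): the learning oscillators phase-lock to the teaching oscillators and slowly adjust their natural frequencies through the residual phase error:

        import numpy as np

        rng = np.random.default_rng(3)
        n, dt, eps, k = 5, 0.01, 0.5, 2.0   # oscillators, step, learning rate, coupling
        w_teach = rng.uniform(0.8, 1.2, n)  # teaching system's natural frequencies
        w_learn = np.ones(n)                # learner starts with uniform frequencies
        th_t = np.zeros(n)
        th_l = rng.uniform(0.0, 2.0 * np.pi, n)

        for _ in range(200_000):
            th_t += dt * w_teach                               # teacher runs freely
            th_l += dt * (w_learn + k * np.sin(th_t - th_l))   # learner phase-locks...
            w_learn += dt * eps * np.sin(th_t - th_l)          # ...and adapts frequencies

        print(np.abs(w_learn - w_teach).max())   # ~0: the frequencies were learned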

  14. Specification of a test problem for HYDROCOIN [Hydrologic Code Intercomparison] Level 3 Case 2: Sensitivity analysis for deep disposal in partially saturated, fractured tuff

    International Nuclear Information System (INIS)

    Prindle, R.W.

    1987-08-01

    The international Hydrologic Code Intercomparison Project (HYDROCOIN) was formed to evaluate hydrogeologic models and computer codes and their use in performance assessment for high-level radioactive waste repositories. Three principal activities in the HYDROCOIN Project are Level 1, verification and benchmarking of hydrologic codes; Level 2, validation of hydrologic models; and Level 3, sensitivity and uncertainty analyses of the models and codes. This report presents a test case defined for the HYDROCOIN Level 3 activity to explore the feasibility of applying various sensitivity-analysis methodologies to a highly nonlinear model of isothermal, partially saturated flow through fractured tuff, and to develop modeling approaches to implement the methodologies for sensitivity analysis. These analyses involve an idealized representation of a repository sited above the water table in a layered sequence of welded and nonwelded, fractured, volcanic tuffs. The analyses suggested here include one-dimensional, steady flow; one-dimensional, nonsteady flow; and two-dimensional, steady flow. Performance measures to be used to evaluate model sensitivities are also defined; the measures are related to regulatory criteria for containment of high-level radioactive waste. 14 refs., 5 figs., 4 tabs

  15. Multi-level policies and adaptive social networks – a conceptual modeling study for maintaining a polycentric governance system

    Directory of Open Access Journals (Sweden)

    Jean-Denis Mathias

    2017-03-01

    Information and collaboration patterns embedded in social networks play key roles in multilevel and polycentric modes of governance. However, modeling the dynamics of such social networks in multilevel settings has seldom been addressed in the literature. Here we use an adaptive social network model to elaborate the interplay between a central and a local government in maintaining a polycentric governance system. More specifically, our analysis explores the ways in which specific policy choices made by a central agent affect the features of an emerging social network composed of local organizations and local users. Using two types of stylized policies, adaptive co-management and adaptive one-level management, we focus on the benefits of multi-level adaptive cooperation for network management. Our analysis uses viability theory to explore and quantify the ability of these policies to achieve specific network properties. Unlike optimal control, which gives a unique blueprint, viability theory gives the family of policies that enables maintaining the polycentric governance. We found that the viability of the policies can change dramatically depending on the goals and features of the social network. For some social networks, we also found a very large difference between the viability of the adaptive one-level management and adaptive co-management policies. However, the results also show that adaptive co-management does not always provide benefits. Hence, we argue that applying viability theory to governance networks can help policy design by analyzing the trade-off between the costs of adaptive co-management and the benefits associated with its ability to maintain desirable social network properties in a polycentric governance framework.

  16. Research and Design in Unified Coding Architecture for Smart Grids

    Directory of Open Access Journals (Sweden)

    Gang Han

    2013-09-01

    A standardized and shared information platform is the foundation of the Smart Grid. In order to improve the information integration of power grid dispatching centers and achieve efficient data exchange, sharing and interoperability, a unified coding architecture is proposed. The architecture includes a coding management layer, a coding generation layer, an information models layer and an application system layer. The hierarchical design allows the whole coding architecture to adapt to different application environments, different interfaces and loosely coupled requirements, realizing the integrated model management function of the power grid. A life cycle and a survival evaluation method for the unified coding architecture are proposed, which can ensure the stability and availability of the coding architecture. Finally, future development directions for Smart Grid coding technology are outlined.

  17. A Climate Change Adaptation Planning Process for Low-Lying, Communities Vulnerable to Sea Level Rise

    Directory of Open Access Journals (Sweden)

    Kristi Tatebe

    2012-09-01

    While the province of British Columbia (BC), Canada, provides guidelines for flood risk management, it is local governments' responsibility to delineate their own flood vulnerability, assess their risk, and integrate these with planning policies to implement adaptive action. However, barriers such as the lack of locally specific data and public perceptions about adaptation options mean that local governments must address the need for adaptation planning within a context of scientific uncertainty, while building public support for difficult choices on flood-related climate policy and action. This research demonstrates a process to model, visualize and evaluate potential flood impacts and adaptation options for the community of Delta, in Metro Vancouver, across economic, social and environmental perspectives. Visualizations in 2D and 3D, based on hydrological modeling of breach events for existing dike infrastructure, future sea level rise and storm surges, are generated collaboratively, together with future adaptation scenarios assessed against quantitative and qualitative indicators. This 'visioning package' is being used with staff and a citizens' Working Group to assess the performance, policy implications and social acceptability of the adaptation strategies. Recommendations based on the experience of the initiative are provided that can facilitate sustainable future adaptation actions and decision-making in Delta and other jurisdictions.

  18. Radio-adaptation: cellular and molecular features of a response to low levels of ionizing radiation

    International Nuclear Information System (INIS)

    Rigaud, O.

    1998-01-01

    It is well established that sublethal doses of DNA-damaging agents induce protective mechanisms against a subsequent high-dose treatment; for instance, the phenomenon of radio-adaptation in the case of ionizing radiation. Since the early observation described in 1984, numerous studies have confirmed the radio-adaptive response, in terms of a reduction of chromosomal breaks, in varied biological models in vitro and in vivo. Evidence for an adaptive response against the induction of gene mutations and the lethal effect has also been clearly demonstrated. This paper reviews the experimental results describing various aspects of these adaptive responses as expressed on these different biological end-points. The molecular mechanism underlying radio-adaptation still remains unclear. The development of this phenomenon requires de novo synthesis of transcripts and proteins during the time interval between the two doses. Some data are consistent with the hypothesis that these gene products are involved in the activation of DNA repair pathways and antioxidant systems. However, a major question remains unanswered: it is not clear whether or not radio-adaptation could affect the estimation of cancer risk related to low-level exposure to ionizing radiation, a major concern in radioprotection. Until such data are available, it is unwise to evoke the beneficial effects of radio-adaptation. (authors)

  19. Considerations about expected a posteriori estimation in adaptive testing: adaptive a priori, adaptive correction for bias, and adaptive integration interval.

    Science.gov (United States)

    Raiche, Gilles; Blais, Jean-Guy

    2009-01-01

    In a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimate one or a combination of the following strategies: the adaptive correction for bias proposed by Bock and Mislevy (1982), an adaptive a priori estimate, and an adaptive integration interval.
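
    For context, a minimal sketch of the standard EAP estimator that these strategies modify (generic item response theory machinery, not the authors' adaptive variants; the item parameters are invented): the proficiency estimate is the posterior mean computed over a grid, and the integration interval is simply the grid's range, the quantity the third strategy adapts:

        import numpy as np

        def icc(theta, a, b):
            """2PL item characteristic curve: P(correct | theta)."""
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        def eap(responses, items, lo=-4.0, hi=4.0, n=161):
            """EAP estimate of theta; (lo, hi) is the integration interval."""
            theta = np.linspace(lo, hi, n)
            post = np.exp(-0.5 * theta ** 2)           # N(0,1) prior (unnormalised)
            for u, (a, b) in zip(responses, items):
                p = icc(theta, a, b)
                post *= p if u == 1 else (1.0 - p)     # likelihood of each response
            post /= np.trapz(post, theta)
            return np.trapz(theta * post, theta)       # posterior mean

        items = [(1.2, 0.0), (0.8, -0.5), (1.5, 0.7), (1.0, 1.2)]  # (a, b) per item
        print(eap([1, 1, 0, 1], items))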

  20. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  1. KEWPIE: a dynamical cascade code for decaying excited compound nuclei

    OpenAIRE

    Bouriquet, Bertrand; Abe, Yasuhisa; Boilley, David

    2003-01-01

    A new dynamical cascade code for decaying hot nuclei is proposed, specially adapted to the synthesis of super-heavy nuclei. In such a case, the interesting channel is the tiny fraction that decays through particle emission; thus the code avoids classical Monte-Carlo methods and proposes a new numerical scheme. The time dependence is explicitly taken into account in order to cope with the fact that the fission decay rate might not be constant. The code allows one to evaluate both statistical...

  2. Large-eddy simulation of stratified atmospheric flows with the CFD code Code-Saturne

    International Nuclear Information System (INIS)

    Dall'Ozzo, Cedric

    2013-01-01

    Large-eddy simulation (LES) of the physical processes in the atmospheric boundary layer (ABL) remains a complex subject. LES models have difficulties capturing the evolution of turbulence under different stratification conditions. Consequently, LES of the whole diurnal cycle of the ABL, including convective situations in the daytime and stable situations at night, is seldom documented. The simulation of the stable atmospheric boundary layer, which is characterized by small eddies and by weak and sporadic turbulence, is especially difficult. Therefore the ability of LES to reproduce real meteorological conditions, particularly in stable situations, is studied with Code-Saturne, the CFD code developed by EDF R and D. The first study consists in validating LES on a quasi-steady-state convective case with homogeneous terrain. The influence of the sub-grid-scale models (Smagorinsky model, Germano-Lilly model, Wong-Lilly model and Wall-Adapting Local Eddy-viscosity model), and the sensitivity to the parametrization method, are discussed with respect to the mean fields, fluxes and variances. In a second study, the diurnal cycle of the ABL during the Wangara experiment is simulated. The deviation from the measurements is weak during the day, so this work focuses on the difficulties met during the night in simulating the stable atmospheric boundary layer. The impact of the different sub-grid-scale models and the sensitivity to the Smagorinsky constant are analysed. By coupling radiative forcing with LES, the consequences of infrared and solar radiation for the nocturnal low-level jet and for the thermal gradient close to the surface are exposed. Moreover, the effect of increasing the domain resolution is analysed in relation to the turbulence intensity and the strong atmospheric stability of the Wangara experiment. Finally, a study of the numerical oscillations inherent to Code-Saturne is carried out in order to reduce their effects. (author)
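
    For reference, the Smagorinsky closure named above models the sub-grid-scale eddy viscosity from the resolved strain rate (standard textbook form, with C_s the Smagorinsky constant and Delta the filter width):

        \nu_{sgs} = (C_s \Delta)^2 \, |\bar{S}|, \qquad
        |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
        \bar{S}_{ij} = \tfrac{1}{2}\,(\partial_j \bar{u}_i + \partial_i \bar{u}_j)

    The dynamic (Germano-Lilly) variant estimates C_s from the resolved scales instead of fixing it, which is why the sensitivity to the Smagorinsky constant warrants the separate analysis mentioned above.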

  3. A multi-attribute approach to choosing adaptation strategies: Application to sea-level rise

    International Nuclear Information System (INIS)

    Smith, A.E.; Chu, H.Q.

    1994-01-01

    Selecting good adaptation strategies in anticipation of climate change is gaining increasing attention as it becomes increasingly clear that much of the likely change is already committed and could not be avoided even with aggressive and immediate emissions reductions. Adaptation decision making will place special requirements on regional and local planners in the US and other countries, especially developing countries. Approaches, tools, and guidance will be useful in assisting an effective response to the challenge. This paper describes the value of using a multi-attribute approach for evaluating adaptation strategies and its implementation as a decision-support software tool to help planners understand and execute this approach. The multi-attribute approach described here explicitly addresses the fact that many aspects of the decision cannot be easily quantified, that future conditions are highly uncertain, and that there are issues of equity, flexibility, and coordination that may be as important to the decision as costs and benefits. The approach suggested also avoids trying to collapse information on all of the attributes into a single metric. Such metrics can obliterate insights about the nature of the trade-offs that must be made in choosing among very dissimilar types of responses to the anticipated threat of climate change. Implementation of such an approach requires management of much information and an ability to easily manipulate its presentation while seeking acceptable trade-offs. The Adaptation Strategy Evaluator (ASE) was developed under funding from the US Environmental Protection Agency to provide user-friendly, PC-based guidance through the major steps of a multi-attribute evaluation. The initial application of ASE, and the focus of this paper, is adaptation to sea level rise. However, the approach can be easily adapted to any multi-attribute choice problem, including the range of other adaptation planning needs.

  4. Amino acid fermentation at the origin of the genetic code

    Science.gov (United States)

    2012-01-01

    There is evidence that the genetic code was established prior to the existence of proteins, when metabolism was powered by ribozymes. Also, early proto-organisms had to rely on simple anaerobic bioenergetic processes. In this work I propose that amino acid fermentation powered metabolism in the RNA world, and that this was facilitated by proto-adapters, the precursors of the tRNAs. Amino acids were used as carbon sources rather than as catalytic or structural elements. In modern bacteria, amino acid fermentation is known as the Stickland reaction. This pathway involves two amino acids: the first undergoes oxidative deamination, and the second acts as an electron acceptor through reductive deamination. This redox reaction results in two keto acids that are employed to synthesise ATP via substrate-level phosphorylation. The Stickland reaction is the basic bioenergetic pathway of some bacteria of the genus Clostridium. Two other facts support Stickland fermentation in the RNA world. First, several Stickland amino acid pairs are synthesised in abiotic amino acid synthesis. This suggests that amino acids that could be used as an energy substrate were freely available. Second, anticodons that have complementary sequences often correspond to amino acids that form Stickland pairs. The main hypothesis of this paper is that pairs of complementary proto-adapters were assigned to Stickland amino acids pairs. There are signatures of this hypothesis in the genetic code. Furthermore, it is argued that the proto-adapters formed double strands that brought amino acid pairs into proximity to facilitate their mutual redox reaction, structurally constraining the anticodon pairs that are assigned to these amino acid pairs. Significance tests which randomise the code are performed to study the extent of the variability of the energetic (ATP) yield. Random assignments can lead to a substantial yield of ATP and maintain enough variability, thus selection can act and refine the assignments

  5. Amino acid fermentation at the origin of the genetic code

    Directory of Open Access Journals (Sweden)

    de Vladar Harold P

    2012-02-01

    There is evidence that the genetic code was established prior to the existence of proteins, when metabolism was powered by ribozymes. Also, early proto-organisms had to rely on simple anaerobic bioenergetic processes. In this work I propose that amino acid fermentation powered metabolism in the RNA world, and that this was facilitated by proto-adapters, the precursors of the tRNAs. Amino acids were used as carbon sources rather than as catalytic or structural elements. In modern bacteria, amino acid fermentation is known as the Stickland reaction. This pathway involves two amino acids: the first undergoes oxidative deamination, and the second acts as an electron acceptor through reductive deamination. This redox reaction results in two keto acids that are employed to synthesise ATP via substrate-level phosphorylation. The Stickland reaction is the basic bioenergetic pathway of some bacteria of the genus Clostridium. Two other facts support Stickland fermentation in the RNA world. First, several Stickland amino acid pairs are synthesised in abiotic amino acid synthesis. This suggests that amino acids that could be used as an energy substrate were freely available. Second, anticodons that have complementary sequences often correspond to amino acids that form Stickland pairs. The main hypothesis of this paper is that pairs of complementary proto-adapters were assigned to Stickland amino acid pairs. There are signatures of this hypothesis in the genetic code. Furthermore, it is argued that the proto-adapters formed double strands that brought amino acid pairs into proximity to facilitate their mutual redox reaction, structurally constraining the anticodon pairs that are assigned to these amino acid pairs. Significance tests which randomise the code are performed to study the extent of the variability of the energetic (ATP) yield. Random assignments can lead to a substantial yield of ATP and maintain enough variability, thus selection can

  6. Behavioural strategy: Adaptability context

    Directory of Open Access Journals (Sweden)

    Piórkowska Katarzyna

    2016-05-01

    The paper is embedded in the following fields: strategic management in terms of the behavioural strategy concept, the adaptability construct, and the micro-foundations realm, as well as organizational theory and psychology. Moreover, the paper concerns, to some extent, a multi-level approach in strategic management involving the individual, team, and organizational levels. The aim of the paper is to contribute, on the one hand, to extending the findings in the field of behavioural strategy, as behavioural strategy encompasses a mind-boggling diversity of topics and methods and its conceptual unity has been hard to achieve (Powell, Lovallo, Fox 2011, p. 1371), and on the other hand, to ordering the mixed approaches to adaptability, especially to gain insights into micro-level adapting processes (individual adaptability and adaptive performance) in terms of the multi-level approach. The method used is literature study, and the inference is mostly deductive. The structure of the manuscript is four-fold. The first part involves considerations in the field of adaptability and adaptive performance at the individual level. Issues of adaptability and adaptive performance at the team level are presented in the second part. The third part encompasses assertions about organizational adaptability. Finally, the conclusion, the limitations of the considerations highlighted, and future research directions are emphasized. The overarching key finding is that the behavioural strategy concept may constitute a boundary spanner in exploring and explaining the adaptability phenomenon at different levels of analysis.

  7. Contribution of cerebellar sensorimotor adaptation to hippocampal spatial memory.

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Passot

    Complementing its primary role in motor control, cerebellar learning also has a bottom-up influence on cognitive functions, where high-level representations build up from elementary sensorimotor memories. In this paper we examine the cerebellar contribution to both the procedural and the declarative components of spatial cognition. To do so, we model a functional interplay between the cerebellum and the hippocampal formation during goal-oriented navigation. We reinterpret and complete existing genetic and behavioural observations by means of quantitative accounts that cross-link synaptic plasticity mechanisms, single-cell and population coding properties, and behavioural responses. In contrast to earlier hypotheses positing only a purely procedural impact of cerebellar adaptation deficits, our results suggest a cerebellar involvement in high-level aspects of behaviour. In particular, we propose that cerebellar learning mechanisms may influence hippocampal place fields by contributing to the path integration process. Our simulations predict differences in place-cell discharge properties between normal mice and L7-PKCI mutant mice lacking long-term depression at cerebellar parallel fibre-Purkinje cell synapses. On the behavioural level, these results suggest that, by influencing the accuracy of hippocampal spatial codes, cerebellar deficits may impact the exploration-exploitation balance during spatial navigation.

  8. A general purpose code for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Wilcke, W.W.; Rochester Univ., NY

    1984-01-01

    A general-purpose computer code, MONTHY, has been written to perform Monte Carlo simulations of physical systems. To achieve a high degree of flexibility, the code is organized like a general-purpose computer, operating on a vector describing the time-dependent state of the system under simulation. The instruction set of the 'computer' is defined by the user and is therefore adaptable to the particular problem studied. The organization of MONTHY allows iterative and conditional execution of operations. (orig.)

  9. Preparation of the TRANSURANUS code for TEMELIN NPP

    International Nuclear Information System (INIS)

    Klouzal, J.

    2011-01-01

    Since 2010 the Temelin NPP has been using TVSA-T fuel supplied by JSC TVEL. The transition process included the implementation of several new core reload design codes. The TRANSURANUS code was selected for the evaluation of fuel rod thermomechanical performance. The adaptation and validation of the code were performed by the Nuclear Research Institute Rez. TRANSURANUS contains a wide selection of alternative models for most of the phenomena important for fuel behaviour. It was therefore necessary to select, based on comparison with experimental data, those most suitable for modeling TVSA-T fuel rods. In some cases, new models were implemented. Software tools and a methodology for the evaluation of the proposed core reload design using TRANSURANUS were also developed at NRI. The software tools include an interface to the core physics code ANDREA and a set of scripts for automated execution and processing of the computational runs. Independent confirmation of some of the vendor-specified core reload design criteria was performed using TRANSURANUS. (authors)

  10. Benchmark problems for radiological assessment codes. Final report

    International Nuclear Information System (INIS)

    Mills, M.; Vogt, D.; Mann, B.

    1983-09-01

    This report describes benchmark problems to test computer codes used in the radiological assessment of high-level waste repositories. The problems presented in this report will test two types of codes. The first type of code calculates the time-dependent heat generation and radionuclide inventory associated with a high-level waste package. Five problems have been specified for this code type. The second code type addressed in this report involves the calculation of radionuclide transport and dose-to-man. For these codes, a comprehensive problem and two subproblems have been designed to test the relevant capabilities of the codes for assessing a high-level waste repository setting.
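
    The first code type computes quantities of the kind sketched below: a time-dependent inventory and decay heat for a parent-daughter chain via the Bateman solutions. The nuclide data are placeholders, not values from any of the report's five specified problems:

```python
import numpy as np

# Sketch: time-dependent inventory and decay heat for a parent -> daughter
# chain, the kind of quantity a waste-package source-term code computes.
# All nuclide data below are placeholders.

lam1, lam2 = np.log(2) / 30.0, np.log(2) / 5.0    # decay constants [1/yr]
Q1, Q2 = 0.5, 1.2                                 # heat per decay [arbitrary]
N1_0 = 1.0e20                                     # initial parent atoms

t = np.linspace(0.0, 100.0, 201)                  # years
N1 = N1_0 * np.exp(-lam1 * t)                     # Bateman solutions
N2 = N1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))

heat = lam1 * N1 * Q1 + lam2 * N2 * Q2            # decay heat = activity * Q
print("peak heat at t =", t[np.argmax(heat)], "yr")
```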

  11. Subband Adaptive Array for DS-CDMA Mobile Radio

    Directory of Open Access Journals (Sweden)

    Tran Xuan Nam

    2004-01-01

    Full Text Available We propose a novel scheme of subband adaptive array (SBAA) for direct-sequence code division multiple access (DS-CDMA). The scheme exploits the spreading code and pilot signal as the reference signal to estimate the propagation channel. Moreover, instead of combining the array outputs at each output tap using a synthesis filter and then despreading them, we despread the array outputs at each output tap directly by the desired user's code, thereby dispensing with the synthesis filter. Although its configuration differs considerably from that of 2D RAKE receivers, the proposed scheme exhibits performance roughly equivalent to that of 2D RAKEs while having a lower computation load, owing to the use of adaptive signal processing in subbands. Simulations are carried out to explore the performance of the scheme and compare it with that of the standard 2D RAKE.
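
    The per-tap despreading step can be sketched in a few lines; the spreading factor, noise level and tap count below are arbitrary stand-ins, and the subband decomposition and adaptive weighting of the actual SBAA are omitted:

```python
import numpy as np

# Sketch: despreading the array output at each tap directly with the desired
# user's spreading code (instead of synthesis filtering first). Spreading
# factor, tap count and noise level are illustrative.

rng = np.random.default_rng(2)
spread_factor, taps = 8, 4
code = rng.choice([-1.0, 1.0], size=spread_factor)   # user's spreading code
symbol = 1.0
chips = symbol * code                                # transmitted chip stream

# noisy copies of the chip stream at each output tap
y = np.stack([chips + 0.3 * rng.normal(size=spread_factor)
              for _ in range(taps)])

despread = y @ code / spread_factor                  # per-tap despreading
print("per-tap soft symbols:", np.round(despread, 2))
print("combined decision:", np.sign(despread.sum()))
```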

  12. CAreDroid: Adaptation Framework for Android Context-Aware Applications.

    Science.gov (United States)

    Elmalaki, Salma; Wanner, Lucas; Srivastava, Mani

    2015-09-01

    Context-awareness is the ability of software systems to sense and adapt to their physical environment. Many contemporary mobile applications adapt to changing locations, connectivity states, available computational and energy resources, and proximity to other users and devices. Nevertheless, there is little systematic support for context-awareness in contemporary mobile operating systems. Because of this, application developers must build their own context-awareness adaptation engines, dealing directly with sensors and polluting application code with complex adaptation decisions. In this paper, we introduce CAreDroid, a framework designed to decouple the application logic from the complex adaptation decisions in Android context-aware applications. In this framework, developers are required only to focus on the application logic, by providing a list of methods that are sensitive to certain contexts along with the permissible operating ranges under those contexts. At run time, CAreDroid monitors the context of the physical environment and intercepts calls to sensitive methods, activating only the blocks of code that best fit the current physical context. CAreDroid is implemented as part of the Android runtime system. By pushing context monitoring and adaptation into the runtime system, CAreDroid eases the development of context-aware applications and increases their efficiency. In particular, case study applications implemented using CAreDroid are shown to have (1) at least half as many lines of code and (2) at least 10× more efficient execution time compared to equivalent context-aware applications that use only standard Android APIs.
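
    The interception mechanism can be caricatured in a few lines of Python (a stand-in for the Android runtime; the method names, contexts and ranges are invented for illustration):

```python
# Sketch of CAreDroid-style dispatch: developers register method variants
# with permissible operating ranges; the runtime intercepts the call and
# activates the variant that fits the current context. Names are invented.

registry = {}

def sensitive(name, battery_range):
    """Register a method variant valid for a battery-level range."""
    def wrap(fn):
        registry.setdefault(name, []).append((battery_range, fn))
        return fn
    return wrap

@sensitive("locate", (0.5, 1.0))
def locate_gps():
    return "precise GPS fix"

@sensitive("locate", (0.0, 0.5))
def locate_cell():
    return "coarse cell-tower fix"

def call(name, context):
    """Runtime interception: pick the variant whose range fits the context."""
    for (lo, hi), fn in registry[name]:
        if lo <= context["battery"] <= hi:
            return fn()
    raise RuntimeError("no variant fits context")

print(call("locate", {"battery": 0.8}))   # -> precise GPS fix
print(call("locate", {"battery": 0.2}))   # -> coarse cell-tower fix
```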

  13. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    Science.gov (United States)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on the local anisotropic Cartesian adaptation described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problems. In section 5 we present some results from more complex cases, including 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.
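
    The core level set update of section 2 is sketched below with a first-order upwind step on a uniform grid; the paper's local anisotropic Cartesian adaptation is deliberately omitted to keep the example minimal:

```python
import numpy as np

# Sketch: one upwind advection stage of a level set function on a uniform
# (periodic) grid, showing the core update phi_t + u . grad(phi) = 0.
# The adaptive Cartesian meshing of the paper is not reproduced here.

n, h, dt = 128, 1.0 / 128, 0.002
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.25  # signed distance, circle

u, v = 1.0, 0.5                                        # uniform velocity
for _ in range(50):
    # first-order upwind differences (u, v > 0, so backward differences)
    dphidx = (phi - np.roll(phi, 1, axis=0)) / h
    dphidy = (phi - np.roll(phi, 1, axis=1)) / h
    phi = phi - dt * (u * dphidx + v * dphidy)

print("grid points near the interface:", int((np.abs(phi) < h).sum()))
```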

  14. Italian Adaptation of the "Autonomy and Relatedness Coding System"

    Directory of Open Access Journals (Sweden)

    Sonia Ingoglia

    2013-08-01

    Full Text Available The study examined the applicability of the observational technique developed by Allen and colleagues (Allen, Hauser, Bell, & O'Connor, 1994; Allen, Hauser, et al., 2003) to investigating the issues of autonomy and relatedness in the parent-adolescent relationship in the Italian context. Thirty-five mother-adolescent dyads participated in a task in which they discussed a family issue about which they disagreed. Adolescents were also administered a self-report measure assessing their relationship with their mothers. Mothers reported significantly higher levels of promoting and inhibiting autonomy, and of promoting relatedness behaviors, than their children. Results also suggested a partial behavioral reciprocity within the dyads regarding promoting and inhibiting relatedness, and inhibiting autonomy. Finally, mothers' inhibiting autonomy behaviors correlated positively with teens' perception of their relationship as conflicting; adolescents' inhibiting and promoting autonomy and inhibiting relatedness behaviors correlated positively with open confrontation, rejection and coolness, while promoting relatedness behaviors correlated negatively with open confrontation, rejection and coolness. The results suggest that, for Italian mothers, behaviors linked to autonomy seem to be associated with being involved in a more negative relationship with their children, even if not characterized by open hostility, while for Italian adolescents, behaviors linked to autonomy seem to be associated with threatening the closeness of the relationship. Globally, the findings suggest that the application of this observational procedure may help our understanding of the development of youth autonomy and relatedness in Italy, but they leave unanswered questions regarding its appropriate adaptation and the role played by cultural differences.

  15. Layer-based buffer aware rate adaptation design for SHVC video streaming

    Science.gov (United States)

    Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan

    2016-09-01

    This paper proposes a layer-based buffer-aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency and improve bandwidth utilization when compared to a conventional simulcast-based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, the dependencies among video layers and the layer buffer fullness. Scalable HEVC (SHVC) is the latest state-of-the-art video coding technique and can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where the available bandwidth varies; a performance comparison between the proposed layer-based streaming approach and the conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter-wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher quality video content and minimal video quality fluctuations can be presented to the user.
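
    A minimal sketch of such a scheduling rule is given below: fetch a segment for the lowest under-filled layer, but only if the estimated bandwidth sustains that layer plus everything it depends on. The rates, buffer levels and target are illustrative numbers, not the paper's configuration:

```python
# Sketch of layer-based buffer-aware scheduling for scalable DASH streams:
# fill the lowest layer whose buffer is under target, and only request an
# enhancement layer when the estimated bandwidth covers it plus all the
# layers below it. All numbers are illustrative.

layer_rates = [2.0, 3.0, 5.0]    # Mbps for BL, EL1, EL2
buffers = [6.0, 2.0, 0.0]        # seconds currently buffered per layer
target = 6.0                     # target buffer fullness [s]

def next_request(est_bw):
    """Index of the layer whose segment to fetch next, or None if all full."""
    cumulative = 0.0
    for i, rate in enumerate(layer_rates):
        cumulative += rate               # bandwidth needed up to this layer
        if buffers[i] < target:
            if cumulative <= est_bw:     # dependencies (lower layers) covered
                return i
            return max(i - 1, 0)         # fall back toward the base layer
    return None

print(next_request(est_bw=12.0))   # ample bandwidth -> fetch EL1 (index 1)
print(next_request(est_bw=4.0))    # scarce bandwidth -> protect BL (index 0)
```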

  16. WWER core pattern enhancement using adaptive improved harmony search

    Energy Technology Data Exchange (ETDEWEB)

    Nazari, T. [Nuclear Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of); Aghaie, M., E-mail: M_Aghaie@sbu.ac.ir [Nuclear Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of); Zolfaghari, A.; Minuchehr, A.; Norouzi, A. [Nuclear Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of)

    2013-01-15

    Highlights: • The classical and improved harmony search algorithms are introduced. • The advantage of IHS is demonstrated on Shekel's Foxholes. • The CHS and IHS are compared with other heuristic algorithms. • The adaptive improved harmony search is applied to two cases. • Two WWER core cases are optimized for the BOC FA pattern. - Abstract: The efficient operation and fuel management of PWRs are of utmost importance. Core performance analysis constitutes an essential phase in core fuel management optimization. Finding an optimum core arrangement for the loading of fuel assemblies (FAs) in a nuclear core is a complex problem. In this paper, the application of classical harmony search (HS) and adaptive improved harmony search (IHS) to loading pattern (LP) design for pressurized water reactors is described. In this analysis, the main objective is to find the core pattern that attains the maximum multiplication factor, k_eff, subject to the maximum allowable power peaking factor (PPF). Therefore an HS-based LP optimization code was prepared, and the CITATION code, a neutronics calculation code, was applied to obtain the effective multiplication factor, neutron fluxes and power density in the candidate cores. Coupling the adaptive improved harmony search with the neutronics code yields an LP optimization code applicable to PWR cores with large numbers of FAs. In this work, as a first step, the efficiencies of HS and IHS are compared with some other heuristic algorithms on the Shekel's Foxholes problem, and the capability of the adaptive improved harmony search is demonstrated. The results show the efficient application of IHS. As a second step, two WWER cases are studied, for which IHS produced improved core patterns with regard to the mentioned objective functions.
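
    For reference, the classical HS step that the paper builds on can be sketched on a toy objective (the HMCR/PAR values are typical textbook settings, not the paper's):

```python
import random

# Sketch of classical harmony search (HS) minimizing a toy quadratic.
# Parameter values are common textbook choices, not the paper's settings.

def f(x):
    return sum((xi - 1.0) ** 2 for xi in x)

dim, hms, hmcr, par, bw = 4, 10, 0.9, 0.3, 0.05
rng = random.Random(0)
memory = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(hms)]

for _ in range(5000):
    new = []
    for d in range(dim):
        if rng.random() < hmcr:                 # memory consideration
            val = rng.choice(memory)[d]
            if rng.random() < par:              # pitch adjustment
                val += rng.uniform(-bw, bw)
        else:                                   # random selection
            val = rng.uniform(-5, 5)
        new.append(val)
    worst = max(range(hms), key=lambda i: f(memory[i]))
    if f(new) < f(memory[worst]):               # replace the worst harmony
        memory[worst] = new

print("best value:", round(f(min(memory, key=f)), 6))
```

    The improved variant (IHS) differs by varying the pitch-adjustment rate par and bandwidth bw over the iterations instead of keeping them fixed.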

  17. An Adaptation of the HELIOS/MASTER Code System to the Analysis of VHTR Cores

    International Nuclear Information System (INIS)

    Noh, Jae Man; Lee, Hyun Chul; Kim, Kang Seog; Kim, Yong Hee

    2006-01-01

    KAERI is developing a new computer code system for the analysis of VHTR cores based on the existing HELIOS/MASTER code system, which was originally developed for LWR core analysis. In VHTR reactor physics, there are several unique neutronic characteristics that cannot be handled easily by the conventional computer code systems applied to LWR core analysis. Typical examples of such characteristics are the double heterogeneity problem due to the particulate fuels, the effects of a spectrum shift and thermal up-scattering due to the graphite moderator, and a strong fuel/reflector interaction. In order to facilitate an easy treatment of such characteristics, we developed some methodologies for the HELIOS/MASTER code system and tested their applicability to VHTR core analysis.

  18. Radiological analyses of intermediate and low level supercompacted waste drums by VQAD code

    International Nuclear Information System (INIS)

    Bace, M.; Trontl, K.; Gergeta, K.

    2004-01-01

    In order to extend the capabilities of the QAD-CGGP code, and to make the code more user friendly, modifications of the code have been performed. A general multisource option has been introduced into the code, and a user friendly environment has been created through a Graphical User Interface. The improved version of the code has been used to calculate the gamma dose rates of a single supercompacted waste drum and of a pair of supercompacted waste drums. The results of the calculation were compared with standard QAD-CGGP results. (author)
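
    QAD-CGGP is a point-kernel shielding code, so the multisource option amounts to summing kernels over source points, as in the generic sketch below (the attenuation coefficient, buildup model and source strengths are placeholders, not VQAD data):

```python
import math

# Sketch of a multisource point-kernel gamma dose-rate sum:
#   D = sum_i S_i * B(mu*r_i) * exp(-mu*r_i) / (4*pi*r_i^2)
# Attenuation coefficient, buildup factor and source strengths are assumed.

mu = 0.06  # effective linear attenuation coefficient [1/cm] (placeholder)

def buildup(mur):
    return 1.0 + mur          # crude linear buildup approximation

def dose_rate(sources, detector):
    total = 0.0
    for (x, y, z, strength) in sources:
        r = math.dist((x, y, z), detector)
        mur = mu * r
        total += strength * buildup(mur) * math.exp(-mur) / (4 * math.pi * r**2)
    return total

# two drum-like point sources and a detector on the axis of the first
sources = [(0.0, 0.0, 0.0, 1.0e9), (60.0, 0.0, 0.0, 1.0e9)]
print("relative dose rate:", dose_rate(sources, (100.0, 0.0, 0.0)))
```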

  19. Partial Adaptation of Obtained and Observed Value Signals Preserves Information about Gains and Losses.

    Science.gov (United States)

    Burke, Christopher J; Baddeley, Michelle; Tobler, Philippe N; Schultz, Wolfram

    2016-09-28

    Given that the range of rewarding and punishing outcomes of actions is large but neural coding capacity is limited, efficient processing of outcomes by the brain is necessary. One mechanism to increase efficiency is to rescale neural output to the range of outcomes expected in the current context, and process only experienced deviations from this expectation. However, this mechanism comes at the cost of not being able to discriminate between unexpectedly low losses when times are bad versus unexpectedly high gains when times are good. Thus, too much adaptation would result in disregarding information about the nature and absolute magnitude of outcomes, preventing learning about the longer-term value structure of the environment. Here we investigate the degree of adaptation in outcome coding brain regions in humans, for directly experienced outcomes and observed outcomes. We scanned participants while they performed a social learning task in gain and loss blocks. Multivariate pattern analysis showed that two distinct networks of brain regions adapt to the most likely outcomes within a block. Frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Critically, in both cases, adaptation was incomplete and information about whether the outcomes arose in a gain block or a loss block was retained. Univariate analysis confirmed incomplete adaptive coding in these regions but also detected nonadapting outcome signals. Thus, although neural areas rescale their responses to outcomes for efficient coding, they adapt incompletely and keep track of the longer-term incentives available in the environment. Optimal value-based choice requires that the brain precisely and efficiently represents positive and negative outcomes. One way to increase efficiency is to adapt responding to the most likely outcomes in a given context. However, too strong adaptation would result in loss of precise

  20. Simulation and Rapid Prototyping of Adaptive Control Systems using the Adaptive Blockset for Simulink

    DEFF Research Database (Denmark)

    Ravn, Ole

    1998-01-01

    The paper describes the design considerations and implementational aspects of the Adaptive Blockset for Simulink which has been developed in a prototype implementation. The concept behind the Adaptive Blockset for Simulink is to bridge the gap between simulation and prototype controller implementation. This is done using the code generation capabilities of Real Time Workshop in combination with C s-function blocks for adaptive control in Simulink. In the paper the design of each group of blocks normally found in adaptive controllers is outlined. The block types are identification, controller design, controller and state variable filter. The use of the Adaptive Blockset is demonstrated using a simple laboratory setup. Both the use of the blockset for simulation and for rapid prototyping of a real-time controller are shown.

  1. A CABAC codec of H.264AVC with secure arithmetic coding

    Science.gov (United States)

    Neji, Nihel; Jridi, Maher; Alfalou, Ayman; Masmoudi, Nouri

    2013-02-01

    This paper presents an optimized H.264/AVC coding system for HDTV displays, based on a typical flow with high coding efficiency and adaptivity to the source statistics. For high quality streaming, the codec uses a binary arithmetic encoding/decoding algorithm of high complexity and a JVCE (Joint Video Compression and Encryption) scheme. In fact, particular attention is given to simultaneous compression and encryption applications, to gain security without compromising the speed of transactions [1]. The proposed design allows us to encrypt the information using a pseudo-random number generator (PRNG). We thus achieve the two operations (compression and encryption) simultaneously and in a dependent manner, which is a novelty in this kind of architecture. Moreover, we investigated the hardware implementation of the CABAC (Context-based Adaptive Binary Arithmetic Coding) codec. The proposed architecture is based on an optimized binarizer/de-binarizer to handle significant pixel-rate videos with low cost and high performance for the most frequent SEs. This was checked using HD video frames. The synthesis results obtained using an FPGA (Xilinx ISE) show that our design is suitable for coding main-profile video streams.

  2. Regional Interdependence in Adaptation to Sea Level Rise and Coastal Flooding

    Science.gov (United States)

    Stacey, M. T.; Lubell, M.; Hummel, M.; Wang, R. Q.; Barnard, P.; Erikson, L. H.; Herdman, L.; Pozdnukhov, A.; Sheehan, M.

    2017-12-01

    Projections of sea level rise may differ in the pace of change, but there is clear consensus that coastal communities will be facing more frequent and severe flooding events in the coming century. As communities adapt to future conditions, infrastructure systems will be developed, modified and abandoned, with important consequences for services and resilience. Whether action or inaction is pursued, the decisions made by an individual community regarding a single infrastructure system have implications that extend spatially and temporally, due to geographic and infrastructure system interactions. At the same time, there are a number of barriers to collective or coordinated action that inhibit regional solutions. This interplay between local actions and regional responses is one of the great challenges facing decision-makers grappling with both local and regional climate-change adaptation. In this talk, I present case studies of the San Francisco Bay Area that examine how shoreline infrastructure, transportation systems and decision-making networks interact to define the regional response to local actions and the local response to regional actions. I will characterize the barriers that exist to regional solutions, and three types of interdependence that may motivate decision-makers to overcome those barriers. Using these examples, I will discuss the importance of interdisciplinary analyses that integrate the natural sciences, engineering and the social sciences for climate change adaptation more generally.

  3. Sequencing of 50 human exomes reveals adaptation to high altitude

    DEFF Research Database (Denmark)

    Yi, Xin; Liang, Yu; Huerta-Sanchez, Emilia

    2010-01-01

    Residents of the Tibetan Plateau show heritable adaptations to extreme altitude. We sequenced 50 exomes of ethnic Tibetans, encompassing coding sequences of 92% of human genes, with an average coverage of 18x per individual. Genes showing population-specific allele frequency changes, which represent strong candidates for altitude adaptation, were identified. The strongest signal of natural selection came from endothelial Per-Arnt-Sim (PAS) domain protein 1 (EPAS1), a transcription factor involved in response to hypoxia. One single-nucleotide polymorphism (SNP) at EPAS1 shows a 78% frequency difference between Tibetan and Han samples, implicating EPAS1 in genetic adaptation to high altitude.

  4. The PASC-3 code system and the UNIPASC environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.

    1991-08-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) reactor physics code system and its associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection, complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219-group cross section library derived from JEF-1, for which some benchmark results are presented. The addition of the UNIPASC work environment greatly simplifies the use of the code system: complex chains of programs can easily be coupled together to form a single job, and model parameters can be represented by variables instead of literal values, which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab
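
    The flavour of such a work environment, with symbolic parameters substituted into code inputs and several codes chained into one job, can be sketched generically (hypothetical file and program names, not the actual UNIPASC syntax):

```python
import subprocess
from string import Template

# Sketch of a UNIPASC-like job: model parameters are named variables
# substituted into a code input, and several codes run as one chained job.
# All program and file names are hypothetical.

params = {"enrichment": "3.2", "groups": "219"}

input_template = Template(
    "fuel_enrichment = $enrichment\n"
    "energy_groups   = $groups\n"
)

with open("stage1.inp", "w") as fh:
    fh.write(input_template.substitute(params))   # readable, named values

chain = [
    ["echo", "running cross-section processing on stage1.inp"],
    ["echo", "running criticality solver on stage1.out"],
]
for cmd in chain:                  # one job, several codes in sequence
    subprocess.run(cmd, check=True)  # stop the chain if a stage fails
```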

  5. Influence of physical education on the level of adaptation of students to educational activity.

    Directory of Open Access Journals (Sweden)

    Korolinska S.V.

    2012-06-01

    Full Text Available The problems of students' adaptation to educational activity are examined and summarized. 100 students took part in the research. A number of socio-psychological factors that determine the efficiency of students' adaptation to the educational process were identified. Practical recommendations on the organization of the students' educational process are developed. Wide use of physical culture is recommended as a means of shortening the adaptation period and increasing the level of physical and mental capacity. It is noted that almost 90% of students show health impairments, and over 50% show unsatisfactory physical preparedness. It was found that second-year students display lower situational anxiety than first-year students, and that a characteristic feature of the psychological state during an examination session is emotional-volitional instability.

  6. Duals of Affine Grassmann Codes and Their Relatives

    DEFF Research Database (Denmark)

    Beelen, P.; Ghorpade, S. R.; Hoholdt, T.

    2012-01-01

    Affine Grassmann codes are a variant of generalized Reed-Muller codes and are closely related to Grassmann codes. These codes were introduced in a recent work by Beelen et al. Here, we consider, more generally, affine Grassmann codes of a given level. We explicitly determine the dual of an affine Grassmann code of any level and compute its minimum distance. Further, we ameliorate the results by Beelen et al. concerning the automorphism group of affine Grassmann codes. Finally, we prove that affine Grassmann codes and their duals have the property that they are linear codes generated by their minimum-weight codewords. This provides a clean analogue of a corresponding result for generalized Reed-Muller codes.

  7. BLT [Breach, Leach, and Transport]: A source term computer code for low-level waste shallow land burial

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1990-01-01

    This paper discusses the development of a source term model for low-level waste shallow land burial facilities; the model separates the problem into four individual compartments. These are water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and the last compartments, we adopted the existing codes FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines, BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resultant code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab
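
    The compartment coupling can be caricatured with a small ODE sketch, breach feeding leach feeding transport; all rate constants below are placeholders, not the FEMWATER/FEMWASTE models:

```python
# Sketch of BLT-style source-term coupling: container breach exposes waste,
# exposed waste leaches, and the dissolved activity enters a transport
# volume with an outflow sink. Forward Euler; all rates are placeholders.

k_breach, k_leach, k_out = 0.05, 0.02, 0.1   # [1/yr] assumed rate constants
intact = 1.0        # fraction of containers not yet breached
inventory = 1.0     # waste-form inventory still in place (normalized)
solute = 0.0        # dissolved activity in the transport volume

dt, steps = 0.1, 2000                         # 200 years
for _ in range(steps):
    release = k_leach * (1.0 - intact) * inventory  # leaching of exposed waste
    intact -= dt * k_breach * intact                # container corrosion
    inventory -= dt * release
    solute += dt * (release - k_out * solute)       # source minus outflow

print("containers intact:", round(intact, 3))
print("inventory remaining:", round(inventory, 3))
print("solute in transport volume:", round(solute, 4))
```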

  8. Critical roles for a genetic code alteration in the evolution of the genus Candida.

    Science.gov (United States)

    Silva, Raquel M; Paredes, João A; Moura, Gabriela R; Manadas, Bruno; Lima-Costa, Tatiana; Rocha, Rita; Miranda, Isabel; Gomes, Ana C; Koerkamp, Marian J G; Perrot, Michel; Holstege, Frank C P; Boucherie, Hélian; Santos, Manuel A S

    2007-10-31

    During the last 30 years, several alterations to the standard genetic code have been discovered in various bacterial and eukaryotic species. Sense and nonsense codons have been reassigned or reprogrammed to expand the genetic code to selenocysteine and pyrrolysine. These discoveries highlight unexpected flexibility in the genetic code, but do not elucidate how the organisms survived the proteome chaos generated by codon identity redefinition. In order to shed new light on this question, we have reconstructed a Candida genetic code alteration in Saccharomyces cerevisiae and used a combination of DNA microarrays, proteomics and genetics approaches to evaluate its impact on gene expression, adaptation and sexual reproduction. This genetic manipulation blocked mating, locked yeast in a diploid state, remodelled gene expression and created stress cross-protection that generated adaptive advantages under environmentally challenging conditions. This study highlights unanticipated roles for codon identity redefinition during the evolution of the genus Candida, and strongly suggests that genetic code alterations create genetic barriers that speed up speciation.

  9. Auto-Regulatory RNA Editing Fine-Tunes mRNA Re-Coding and Complex Behaviour in Drosophila

    Science.gov (United States)

    Savva, Yiannis A.; Jepson, James E.C; Sahin, Asli; Sugden, Arthur U.; Dorsky, Jacquelyn S.; Alpert, Lauren; Lawrence, Charles; Reenan, Robert A.

    2014-01-01

    Auto-regulatory feedback loops are a common molecular strategy used to optimize protein function. In Drosophila many mRNAs involved in neuro-transmission are re-coded at the RNA level by the RNA editing enzyme dADAR, leading to the incorporation of amino acids that are not directly encoded by the genome. dADAR also re-codes its own transcript, but the consequences of this auto-regulation in vivo are unclear. Here we show that hard-wiring or abolishing endogenous dADAR auto-regulation dramatically remodels the landscape of re-coding events in a site-specific manner. These molecular phenotypes correlate with altered localization of dADAR within the nuclear compartment. Furthermore, auto-editing exhibits sexually dimorphic patterns of spatial regulation and can be modified by abiotic environmental factors. Finally, we demonstrate that modifying dAdar auto-editing affects adaptive complex behaviors. Our results reveal the in vivo relevance of auto-regulatory control over post-transcriptional mRNA re-coding events in fine-tuning brain function and organismal behavior. PMID:22531175

  10. NALAP: an LMFBR system transient code

    International Nuclear Information System (INIS)

    Martin, B.A.; Agrawal, A.K.; Albright, D.C.; Epel, L.G.; Maise, G.

    1975-07-01

    NALAP is an LMFBR system transient code. This code, adapted from the light water reactor transient code RELAP 3B, simulates the thermal-hydraulic response of sodium-cooled fast breeder reactors when subjected to postulated accidents such as a massive pipe break, as well as a variety of other upset conditions that do not disrupt the system geometry. The various components of the plant are represented by control volumes. These control volumes are connected by junctions, some of which may be leak or fill junctions. The fluid flow equations are modeled as compressible, single-stream flow with momentum flux in one dimension. The transient response is computed by integrating the thermal-hydraulic conservation equations from user-initialized operating conditions by an implicit numerical scheme. A point kinetics approximation is used to represent the time-dependent heat generation in the reactor core
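
    The point kinetics approximation mentioned at the end can be sketched with a single delayed-neutron group (generic constants, not NALAP's data):

```python
# Sketch: one-delayed-group point kinetics, the approximation used for the
# time-dependent heat generation in the core. Constants are generic.

beta = 0.0065        # delayed neutron fraction
Lam = 1.0e-4         # neutron generation time [s]
lam = 0.08           # precursor decay constant [1/s]
rho = 0.001          # step reactivity insertion (below prompt critical)

n = 1.0                          # relative power
C = beta / (Lam * lam)           # equilibrium precursor concentration
dt = 1.0e-5
for _ in range(int(1.0 / dt)):   # integrate 1 s with forward Euler
    dn = ((rho - beta) / Lam) * n + lam * C
    dC = (beta / Lam) * n - lam * C
    n += dt * dn
    C += dt * dC

print("relative power after 1 s:", round(n, 3))
```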

  11. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork, Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important to understanding the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  12. An adaptive neuro-fuzzy controller for mold level control in continuous casting

    International Nuclear Information System (INIS)

    Zolghadri Jahromi, M.; Abolhassan Tash, F.

    2001-01-01

    Mold level variations in continuous casting are believed to be the main cause of surface defects in the final product. Although a PID controller is well capable of controlling the level under normal conditions, it cannot prevent large variations of the mold level when a disturbance occurs in the form of nozzle unclogging. In this paper, a dual-controller architecture is presented: a PID controller is used as the main controller of the plant, and an adaptive neuro-fuzzy controller is used as an auxiliary controller to help the PID during disturbed phases. Control is passed back to the PID controller after the disturbance has been dealt with. Simulation results prove the effectiveness of this control strategy in reducing mold level variations during the unclogging period
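
    The dual-controller idea can be sketched generically; here a simple nonlinear corrective term stands in for the neuro-fuzzy auxiliary, and the plant model and gains are invented for illustration:

```python
# Sketch of the dual-controller architecture: a PID handles normal operation
# and an auxiliary controller is engaged only during disturbed phases. A
# plain nonlinear term stands in for the neuro-fuzzy auxiliary; the plant
# and all gains are invented for illustration.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i, self.prev = 0.0, 0.0

    def step(self, err):
        self.i += err * self.dt
        d = (err - self.prev) / self.dt
        self.prev = err
        return self.kp * err + self.ki * self.i + self.kd * d

def auxiliary(err):
    return 2.0 * err * abs(err)        # stand-in for the neuro-fuzzy output

pid, dt = PID(1.5, 0.4, 0.05, 0.1), 0.1
level, setpoint = 0.0, 1.0
for k in range(300):
    disturbance = 0.8 if 100 <= k < 150 else 0.0   # unclogging-like phase
    err = setpoint - level
    u = pid.step(err)
    if disturbance:                    # blend in the auxiliary controller
        u += auxiliary(err)
    level += dt * (0.5 * u - 0.2 * level + disturbance)   # toy mold dynamics

print("final mold level:", round(level, 3))
```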

  13. BOA, Beam Optics Analyzer A Particle-In-Cell Code

    International Nuclear Information System (INIS)

    Bui, Thuc

    2007-01-01

    The program was tasked with implementing time-dependent analysis of charged particles in an existing finite element code with adaptive meshing, called Beam Optics Analyzer (BOA). BOA was initially funded by a DOE Phase II program to use the finite element method with adaptive meshing to track particles in unstructured meshes. It uses modern programming techniques and state-of-the-art data structures, so that new methods, features and capabilities are easily added and maintained. This Phase II program was funded to implement plasma simulations in BOA and extend its capabilities to model thermal electrons, secondary emission and self magnetic fields, and to implement more comprehensive post-processing and a feature-rich GUI. The program was successful in implementing thermal electrons, secondary emission, and self magnetic field calculations. The BOA GUI was also upgraded significantly, and CCR is receiving interest from the microwave tube and semiconductor equipment industries for the code. Implementation of PIC analysis was partially successful. Computational resource requirements for modeling more than 2000 particles begin to exceed the capability of most readily available computers. Modern plasma analysis typically requires modeling of approximately 2 million particles or more. The problem is that tracking many particles in an unstructured mesh that is adapting becomes inefficient. In particular, memory requirements become excessive. This probably makes particle tracking in unstructured meshes currently unfeasible with commonly available computer resources. Consequently, Calabazas Creek Research, Inc. is exploring hybrid codes where the electromagnetic fields are solved on the unstructured, adaptive mesh while particles are tracked on a fixed mesh. Efficient interpolation routines should be able to transfer information between the nodes of the two meshes. If successfully developed, this could provide high accuracy and reasonable computational efficiency.

  14. A first accident simulation for Angra-1 power plant using the ALMOD computer code

    International Nuclear Information System (INIS)

    Camargo, C.T.M.

    1981-02-01

    The acquisition of the ALMOD computer code from GRS-Munich by CNEN has made it possible to calculate transients in PWR nuclear power plants in which no loss of coolant occurs. The implementation of the German computer code ALMOD and its application to the calculation of Angra-1, a nuclear power plant different from the KWU power plants, demanded study and model adaptation; for economic reasons, simplifications and optimizations were also necessary. The first results define the analytical potential of the computer code, confirm the adequacy of the adaptations made, and provide relevant conclusions for the Angra-1 safety analysis, showing at the same time areas in which the model can be applied or further improved. (Author) [pt

  15. Saturated Adaptive Output-Feedback Power-Level Control for Modular High Temperature Gas-Cooled Reactors

    Directory of Open Access Journals (Sweden)

    Zhe Dong

    2014-11-01

    Full Text Available Small modular reactors (SMRs) are nuclear fission reactors with electrical output power of less than 300 MWe. Due to its inherent safety features, the modular high temperature gas-cooled reactor (MHTGR) has been seen as one of the best candidates for building SMR-based nuclear plants with a high safety level and economically competitive power. Power-level control is crucial in providing grid-appropriation for all types of SMRs. Usually, there exist nonlinearity, parameter uncertainty and control input saturation in the SMR-based plant dynamics. Motivated by this, a novel saturated adaptive output-feedback power-level control of the MHTGR is proposed in this paper. This newly built control law has the virtues of a relatively neat form, strong adaptation to parameter uncertainty, and the ability to compensate for control input saturation. These are achieved, respectively, by constructing Lyapunov functions based upon the shifted-ectropies of neutron kinetics and reactor thermal-hydraulics, by giving an online tuning algorithm for the controller parameters, and by proposing a control input saturation compensator. It is proved theoretically that input-to-state stability (ISS) can be guaranteed for the corresponding closed-loop system. In order to verify the theoretical results, this new control strategy is then applied to large-range power maneuvering control for the MHTGR of the HTR-PM plant. Numerical simulation results show not only the relationship between regulating performance and the control input saturation bound but also the feasibility of applying this saturated adaptive control law in practice.

  16. KEWPIE: A dynamical cascade code for decaying excited compound nuclei

    Science.gov (United States)

    Bouriquet, Bertrand; Abe, Yasuhisa; Boilley, David

    2004-05-01

    A new dynamical cascade code for decaying hot nuclei is proposed, specially adapted to the synthesis of super-heavy nuclei. In such a case, the channel of interest is the tiny fraction that decays through particle emission; the code therefore avoids classical Monte-Carlo methods and proposes a new numerical scheme. The time dependence is explicitly taken into account in order to cope with the fact that the fission decay rate might not be constant. The code allows both statistical and dynamical observables to be evaluated. Results are successfully compared to experimental data.

  17. Towers of generalized divisible quantum codes

    Science.gov (United States)

    Haah, Jeongwan

    2018-04-01

    A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the ν-th level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^(ν−1)), Ω(d), d]] admitting a transversal gate at the ν-th level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.

  18. Development of standards, codes of practice and guidelines at the national level

    International Nuclear Information System (INIS)

    Swindon, T.N.

    1989-01-01

    Standards, codes of practice and guidelines are defined and their different roles in radiation protection specified. The work of the major bodies that develop such documents in Australia - the National Health and Medical Research Council and the Standards Association of Australia - is discussed. The codes of practice prepared under the Environment Protection (Nuclear Codes) Act, 1978, an act of the Australian Federal Parliament, are described and the guidelines associated with them outlined. 5 refs

  19. Adaptation and selective information transmission in the cricket auditory neuron AN2.

    Directory of Open Access Journals (Sweden)

    Klaus Wimmer

    Full Text Available Sensory systems adapt their neural code to changes in the sensory environment, often on multiple time scales. Here, we report a new form of adaptation in a first-order auditory interneuron (AN2 of crickets. We characterize the response of the AN2 neuron to amplitude-modulated sound stimuli and find that adaptation shifts the stimulus-response curves toward higher stimulus intensities, with a time constant of 1.5 s for adaptation and recovery. The spike responses were thus reduced for low-intensity sounds. We then address the question whether adaptation leads to an improvement of the signal's representation and compare the experimental results with the predictions of two competing hypotheses: infomax, which predicts that information conveyed about the entire signal range should be maximized, and selective coding, which predicts that "foreground" signals should be enhanced while "background" signals should be selectively suppressed. We test how adaptation changes the input-response curve when presenting signals with two or three peaks in their amplitude distributions, for which selective coding and infomax predict conflicting changes. By means of Bayesian data analysis, we quantify the shifts of the measured response curves and also find a slight reduction of their slopes. These decreases in slopes are smaller, and the absolute response thresholds are higher than those predicted by infomax. Most remarkably, and in contrast to the infomax principle, adaptation actually reduces the amount of encoded information when considering the whole range of input signals. The response curve changes are also not consistent with the selective coding hypothesis, because the amount of information conveyed about the loudest part of the signal does not increase as predicted but remains nearly constant. Less information is transmitted about signals with lower intensity.
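
    The reported curve shift can be captured with a first-order adaptation model in which the half-activation point tracks recent stimulus intensity with the measured 1.5 s time constant; the sketch below is descriptive, with illustrative parameters, not the authors' fitted model:

```python
import numpy as np

# Sketch: a sigmoidal intensity-response curve whose half-activation point
# tracks recent stimulus intensity with a 1.5 s time constant, reproducing
# the reported rightward curve shift under loud stimulation. Slope and
# intensity values are illustrative.

tau = 1.5                        # adaptation/recovery time constant [s]
theta = 50.0                     # half-activation intensity [dB SPL]
dt = 0.01

def response(intensity, theta):
    return 1.0 / (1.0 + np.exp(-(intensity - theta) / 3.0))

trace = []
for t in np.arange(0.0, 6.0, dt):
    stim = 70.0 if t < 3.0 else 45.0     # loud epoch, then quiet epoch
    theta += dt / tau * (stim - theta)   # curve shifts toward the stimulus
    trace.append(response(stim, theta))

print("response at end of loud epoch:", round(trace[int(3.0 / dt) - 1], 3))
print("response at end of quiet epoch:", round(trace[-1], 3))
```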

  20. Motivations for Local Climate Adaptation in Dutch Municipalities: Climate Change Impacts and the Role of Local-Level Government

    NARCIS (Netherlands)

    van den Berg, Maya Marieke

    2009-01-01

    The local government level is considered to be crucial in preparing society for climate change impact. Yet little is known about why local authorities do or do not take action to adapt their community for climate change impacts. In order to implement effective adaptation policy, the motivations for

  1. Blind Recognition of Binary BCH Codes for Cognitive Radios

    Directory of Open Access Journals (Sweden)

    Jing Zhou

    2016-01-01

    Full Text Available A novel algorithm for blind recognition of Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed to solve the problem of Adaptive Coding and Modulation (ACM) in cognitive radio systems. The recognition algorithm operates on soft decisions. The code length is first estimated by comparing the Log-Likelihood Ratios (LLRs) of the syndromes, which are obtained from the minimum binary parity check matrices of different primitive polynomials. After that, by comparing the LLRs of different minimal polynomials, the code roots and generator polynomial are reconstructed. Compared with previous approaches, our algorithm yields better performance even at very low Signal-to-Noise Ratios (SNRs), with lower computational complexity. Simulation results show the efficiency of the proposed algorithm.
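
    The syndrome LLRs at the core of the method follow the standard soft-XOR (tanh) rule for a parity check; the sketch below shows only that building block, not the full length/polynomial search:

```python
import numpy as np

# Sketch: the LLR of a parity check (syndrome bit) from the LLRs of the bits
# it involves, via the soft-XOR rule
#   L_check = 2 * atanh( prod_i tanh(L_i / 2) ).
# Satisfied checks yield positive LLRs on average; violated checks negative.
# This is the statistic compared across candidate code parameters.

rng = np.random.default_rng(1)

def syndrome_llr(bit_llrs):
    return 2.0 * np.arctanh(np.prod(np.tanh(np.asarray(bit_llrs) / 2.0)))

# an even-parity word observed through an AWGN-like soft channel
bits = np.array([0, 1, 1, 0, 1, 1, 0])             # parity 0: check satisfied
llrs = (1 - 2.0 * bits) * 4.0 + rng.normal(0, 1, bits.size)
print("satisfied check LLR:", round(float(syndrome_llr(llrs)), 2))

bits[0] ^= 1                                        # violate the parity check
llrs = (1 - 2.0 * bits) * 4.0 + rng.normal(0, 1, bits.size)
print("violated check LLR:", round(float(syndrome_llr(llrs)), 2))
```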

  2. Building codes: An often overlooked determinant of health.

    Science.gov (United States)

    Chauvin, James; Pauls, Jake; Strobl, Linda

    2016-05-01

    Although the vast majority of the world's population spends most of its time in buildings, building codes are not often thought of as 'determinants of health'. The standards that govern the design, construction, and use of buildings affect our health, security, safety, and well-being. This is true for dwellings, schools and universities, shopping centers, places of recreation, places of worship, health-care facilities, and workplaces. We urge proactive engagement by the global public health community in the development of these codes, and in the design and implementation of health protection and health promotion activities intended to reduce the risk of injury, disability, and death, particularly where these are due to poor building code adoption/adaptation, application, and enforcement.

  3. Adaptive Multi-Layered Space-Time Block Coded Systems in Wireless Environments

    KAUST Repository

    Al-Ghadhban, Samir

    2014-01-01

    © 2014, Springer Science+Business Media New York. Multi-layered space-time block coded systems (MLSTBC) strike a balance between spatial multiplexing and transmit diversity. In this paper, we analyze the block error rate performance of MLSTBC

  4. Hybrid Video Coding Based on Bidimensional Matching Pursuit

    Directory of Open Access Journals (Sweden)

    Lorenzo Granai

    2004-12-01

    Full Text Available Hybrid video coding combines two stages: first, motion estimation and compensation predict each frame from the neighboring frames, then the prediction error is coded, reducing the correlation in the spatial domain. In this work, we focus on the latter stage, presenting a scheme that profits from some of the features introduced by the H.264/AVC standard for motion estimation and replaces the transform in the spatial domain. The prediction error is coded using the matching pursuit algorithm, which decomposes the signal over a purpose-designed two-dimensional, anisotropic, redundant dictionary. Comparisons are made among the proposed technique, H.264, and a DCT-based coding scheme. Moreover, we introduce fast techniques for atom selection, which exploit the spatial localization of the atoms. An adaptive coding scheme aimed at optimizing the resource allocation is also presented, together with a rate-distortion study for the matching pursuit algorithm. Results show that the proposed scheme outperforms the standard DCT, especially at very low bit rates.
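
    The matching pursuit loop itself is compact; the 1-D sketch below uses a random overcomplete dictionary in place of the paper's anisotropic two-dimensional one:

```python
import numpy as np

# Sketch of matching pursuit: iteratively pick the dictionary atom with the
# largest inner product against the residual and subtract its projection.
# A random 1-D overcomplete dictionary stands in for the paper's dictionary.

rng = np.random.default_rng(0)
n, n_atoms = 64, 256
D = rng.normal(size=(n, n_atoms))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms

signal = D[:, [3, 40, 100]] @ np.array([2.0, -1.5, 0.7])  # sparse synthesis

residual, picks = signal.copy(), []
for _ in range(5):
    corr = D.T @ residual                    # inner products with all atoms
    k = int(np.argmax(np.abs(corr)))         # best-matching atom
    picks.append((k, float(corr[k])))
    residual -= corr[k] * D[:, k]            # subtract the projection

print("selected atoms:", [k for k, _ in picks])
print("residual energy:", round(float(residual @ residual), 4))
```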

  5. Content layer progressive coding of digital maps

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Ole Riis

    2000-01-01

    A new lossless context-based method is presented for content-progressive coding of limited bits/pixel images, such as maps, company logos, etc., common on the WWW. Progressive encoding is achieved by separating the image into content layers based on predefined information. Information from already coded layers is used when coding subsequent layers. This approach is combined with efficient template-based context bi-level coding, context collapsing methods for multi-level images and arithmetic coding. Relative pixel patterns are used to collapse contexts. The number of contexts is analyzed. The new methods outperform existing schemes for coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 60-70% on our layered test images.

  6. The path of code linting

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Join the path of code linting and discover how it can help you reach higher levels of programming enlightenment. Today we will cover how to embrace code linters to offload the cognitive strain of preserving style standards in your code base, as well as avoiding error-prone constructs. Additionally, I will show you the journey ahead for integrating several code linters into the programming tools you already use, with very little effort.

  7. COREDAR: COmmunicating Risk of sea level rise and Engaging stakeholDers in framing community based Adaptation stRategies

    Science.gov (United States)

    Amsad Ibrahim Khan, S. K.; Chen, R. S.; de Sherbinin, A. M.; Andimuthu, R.; Kandasamy, P.

    2015-12-01

    Accelerated sea-level rise (SLR) is a major long-term outcome of climate change, leading to increased inundation of low-lying areas. In particular, global cities located on or near the coasts often sit in low-lying areas, which puts them at greater risk from SLR. Localized flooding will profoundly impact vulnerable communities located in high-risk urban areas. Building community resilience and adapting to SLR is increasingly a high priority for cities. At the same time, Article 6 of the United Nations Framework Convention on Climate Change addresses the importance of climate change communication and of engaging stakeholders in the decision-making process. Importantly, Community Based Adaptation (CBA) experience emphasizes that it is important to understand a community's unique perceptions of its adaptive capacities in order to identify useful solutions, and that scientific and technical information on anticipated coastal climate impacts needs to be translated into a language and format that allows people to participate in adaptation planning. To address this challenge, this study poses three research questions through the lens of urban community engagement in SLR adaptation: (1) What community engagement in addressing SLR, if any, is occurring in urban areas? (2) What information do communities need, and how does it need to be communicated, for them to be better prepared and to have a greater sense of agency? (3) How can government agencies from the city to the federal level facilitate community engagement and action? To answer these questions, the study developed the framework "COREDAR" (COmmunicating Risk of sea level rise and Engaging stakeholDers in framing community based Adaptation StRategies) to communicate and translate complex climate data and information, such as projected SLR under different scenarios of IPCC AR5, predicted impacts of SLR, and vulnerability prioritization, to concerned stakeholders and local communities

  8. MORSE - E. A new version of the MORSE code

    International Nuclear Information System (INIS)

    Ponti, C.; Heusden, R. van.

    1974-12-01

    This report describes a version of the MORSE code which has been written to facilitate the practical use of this programme. MORSE-E is a ready-to-use version that does not require particular programming efforts to adapt the code to the problem to be solved. It treats source volumes of different geometrical shapes. MORSE-E calculates the flux of particles as the sum of the paths travelled within a given volume; the corresponding relative errors are also provided
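
    The flux tally described, summing particle path lengths within a volume, is the standard track-length estimator; below is a minimal 1-D pure-absorber sketch (not MORSE-E's geometry handling):

```python
import math
import random

# Sketch of a track-length flux estimator: the flux in a volume is tallied
# as the sum of particle path lengths inside it, per unit volume per source
# particle. A purely absorbing 1-D slab keeps the sketch minimal.

random.seed(3)
sigma_t = 1.0                    # total (absorption) cross section [1/cm]
a, b = 0.0, 2.0                  # tally region along the flight path [cm]
n_hist, tally = 100000, 0.0

for _ in range(n_hist):
    flight = -math.log(random.random()) / sigma_t   # distance to collision
    tally += max(0.0, min(flight, b) - a)           # path length inside [a, b)

volume = b - a
flux = tally / (volume * n_hist)
print("track-length flux estimate:", round(flux, 4))
# analytic check for a pure absorber: (1 - exp(-sigma_t*b)) / (sigma_t*b)
print("analytic value:", round((1 - math.exp(-sigma_t * b)) / (sigma_t * b), 4))
```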

  9. Edge-preserving Intra Depth Coding based on Context-coding and H.264/AVC

    DEFF Research Database (Denmark)

    Zamarin, Marco; Salmistraro, Matteo; Forchhammer, Søren

    2013-01-01

    Depth map coding plays a crucial role in 3D video communication systems based on the "Multi-view Video plus Depth" representation, as view synthesis performance is strongly affected by the accuracy of depth information, especially at edges in the depth map image. In this paper an efficient algorithm for edge-preserving intra depth compression based on H.264/AVC is presented. The proposed method introduces a new intra mode specifically targeted at depth macroblocks with arbitrarily shaped edges, which are typically not efficiently represented by the DCT. Edge macroblocks are partitioned into two regions, each approximated by a flat surface. Edge information is encoded by means of context-coding with an adaptive template. As a novel element, the proposed method exploits the edge structure of previously encoded edge macroblocks during the context-coding step to further increase compression...

  10. The new Italian code of medical ethics.

    Science.gov (United States)

    Fineschi, V; Turillazzi, E; Cateni, C

    1997-01-01

    In June 1995, the Italian code of medical ethics was revised so that its principles would reflect the ever-changing relationship between the medical profession and society, and between physicians and patients. The updated code is also a response to new ethical problems created by scientific progress; the discussion of such problems often reveals a need for better understanding on the part of the medical profession itself. Medical deontology is defined as the discipline that studies the norms of conduct for the health care professions, including moral and legal norms as well as those pertaining more strictly to professional performance. The aim of deontology is, therefore, the in-depth investigation and revision of the code of medical ethics. It is in the light of this conceptual definition that one should interpret a review of the different codes which have attempted, throughout the various periods of Italy's recent history, to adapt ethical norms to particular social and health care climates. PMID:9279746

  11. Models and applications of the UEDGE code

    International Nuclear Information System (INIS)

    Rensink, M.E.; Knoll, D.A.; Porter, G.D.; Rognlien, T.D.; Smith, G.R.; Wising, F.

    1996-09-01

    The transport of particles and energy from the core of a tokamak to nearby material surfaces is an important problem for understanding present experiments and for designing reactor-grade devices. A number of fluid transport codes have been developed to model the plasma in the edge and scrape-off layer (SOL) regions. This report will focus on recent model improvements and illustrative results from the UEDGE code. Some geometric and mesh considerations are introduced, followed by a general description of the plasma and neutral fluid models. A few comments on computational issues are given and then two important applications are illustrated concerning benchmarking and the ITER radiative divertor. Finally, we report on some recent work to improve the models in UEDGE by coupling to a Monte Carlo neutrals code and by utilizing an adaptive grid

  12. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
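
    The opponent-channel readout can be sketched with two broadly tuned hemifield channels whose normalized difference is monotonic in azimuth and cancels an overall gain factor standing in for sound level; the tuning shapes below are illustrative, not the fMRI data:

```python
import numpy as np

# Sketch of opponent-channel decoding of azimuth: two broadly tuned,
# hemifield-preferring channels; their normalized difference is monotonic
# in azimuth and unaffected by a common gain (a stand-in for sound level).

def channel(azimuth_deg, sign, gain=1.0):
    # broad sigmoidal hemifield tuning; sign selects the preferred hemifield
    return gain / (1.0 + np.exp(-sign * azimuth_deg / 20.0))

az = np.linspace(-90, 90, 7)
for gain in (1.0, 2.0):                      # two "sound levels"
    pref_right = channel(az, +1.0, gain)     # e.g. left-hemisphere channel
    pref_left = channel(az, -1.0, gain)      # e.g. right-hemisphere channel
    opponent = (pref_right - pref_left) / (pref_right + pref_left)
    print(f"gain {gain}:", np.round(opponent, 2))   # identical across gains
```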

  13. Fifty years of illumination about the natural levels of adaptation

    DEFF Research Database (Denmark)

    Boomsma, Jacobus Jan

    2016-01-01

    A visionary Darwinian ahead of his time, George C. Williams developed in his 1966 book Adaptation and Natural Selection the essentials of a unifying theory of adaptation that remains robust today and has inspired immense progress in understanding how natural selection works.

  14. Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning

    International Nuclear Information System (INIS)

    Pin, Francois G.

    2002-01-01

    Robotic tasks are typically defined in Task Space (e.g., the 3-D world), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exists a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically ''batches of one''. Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraints, etc. occurs. The objective of our project is to develop a ''generic code'' to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of control, and kinematics configuration (e.g., new tools, added modules). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is usable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables, can adapt to real time changes in number and
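
    The generic under-specified problem, more joint variables than controlled Task-Space variables, has a standard damped least-squares resolution, sketched below with a random stand-in Jacobian (not the ORNL code itself):

```python
import numpy as np

# Sketch: resolving an under-specified (redundant) differential kinematics
# system J dq = dx with damped least squares,
#   dq = J^T (J J^T + lambda^2 I)^(-1) dx,
# which stays well behaved near singularities. The Jacobian is a random
# stand-in for a real robot's.

rng = np.random.default_rng(7)
J = rng.normal(size=(3, 7))          # 3 task-space rows, 7 joints (redundant)
dx = np.array([0.01, -0.02, 0.005])  # desired task-space displacement

lam = 0.01                           # damping factor
dq = J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(3), dx)

print("joint step norm:", round(float(np.linalg.norm(dq)), 5))
print("task residual:", np.round(J @ dq - dx, 6))    # ~0 for small damping
```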

  15. Adaptive Capacity Mapping of Semarang Offshore Territory by the Increasing of Water Level and Climate Change

    Directory of Open Access Journals (Sweden)

    Ifan Ridlo Suhelm

    2013-07-01

    Full Text Available Tidal inundation, flooding and land subsidence are problems faced by the city of Semarang in relation to climate change. The Intergovernmental Panel on Climate Change (IPCC) predicted a sea level rise of 18-59 cm during 1990-2100, while temperature increases by 0.6°C to 4°C over the same period. The Semarang coastal city is highly vulnerable to sea level rise, and this vulnerability is aggravated by two factors: topography and land subsidence. The purpose of this study was to map the adaptive capacity of coastal areas in the face of the threat of disasters caused by climate change. The parameters used are the number of networks, employees' educational background, main sources of livelihood, health facilities, and road infrastructure. The adaptive capacity of regions is classified into three classes: low, medium and high. The results of the study showed that most of the coastal area of Semarang has adaptive capacities ranging from low to moderate, with 58 villages (58.62% of the coastal villages in the city of Semarang) in the low-capacity class.

  16. Image Coding Based on Address Vector Quantization.

    Science.gov (United States)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table-lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the address is considered, and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme but at a bit rate about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
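
    As a concrete illustration of the encode/decode cycle described above (codebook design by generalized Lloyd/k-means, full-search encoding, table-lookup decoding), here is a minimal sketch; the block size, codebook size and random test image are arbitrary choices, not those of the thesis:

        import numpy as np

        def train_codebook(vecs, k, iters=20, seed=0):
            # Generalized Lloyd / k-means codebook design.
            rng = np.random.default_rng(seed)
            cb = vecs[rng.choice(len(vecs), k, replace=False)].copy()
            for _ in range(iters):
                d = ((vecs[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
                labels = d.argmin(1)             # nearest-codeword assignment
                for j in range(k):
                    if np.any(labels == j):
                        cb[j] = vecs[labels == j].mean(0)
            return cb

        def vq_encode(vecs, cb):                 # full search: transmit only indices
            return ((vecs[:, None, :] - cb[None, :, :]) ** 2).sum(-1).argmin(1)

        def vq_decode(idx, cb):                  # receiver: simple table lookup
            return cb[idx]

        img = np.random.default_rng(0).random((64, 64))
        blocks = img.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16)
        cb = train_codebook(blocks, k=32)        # 5 bits / 16 pixels ~ 0.31 bpp
        idx = vq_encode(blocks, cb)
        print("MSE:", ((vq_decode(idx, cb) - blocks) ** 2).mean())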

  17. Real-time range acquisition by adaptive structured light.

    Science.gov (United States)

    Koninckx, Thomas P; Van Gool, Luc

    2006-03-01

    The goal of this paper is to provide a "self-adaptive" system for real-time range acquisition. Reconstructions are based on a single frame structured light illumination. Instead of using generic, static coding that is supposed to work under all circumstances, system adaptation is proposed. This occurs on-the-fly and renders the system more robust against instant scene variability and creates suitable patterns at startup. A continuous trade-off between speed and quality is made. A weighted combination of different coding cues--based upon pattern color, geometry, and tracking--yields a robust way to solve the correspondence problem. The individual coding cues are automatically adapted within a considered family of patterns. The weights to combine them are based on the average consistency with the result within a small time-window. The integration itself is done by reformulating the problem as a graph cut. Also, the camera-projector configuration is taken into account for generating the projection patterns. The correctness of the range maps is not guaranteed, but an estimation of the uncertainty is provided for each part of the reconstruction. Our prototype is implemented using unmodified consumer hardware only and, therefore, is cheap. Frame rates vary between 10 and 25 fps, dependent on scene complexity.

  18. A biological inspired fuzzy adaptive window median filter (FAWMF) for enhancing DNA signal processing.

    Science.gov (United States)

    Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin

    2017-10-01

    Digital signal processing techniques commonly employ fixed-length window filters to process signal contents. DNA signals differ in characteristics from common digital signals, since they carry nucleotides as contents. The nucleotides have genetic-code context and fuzzy behaviors due to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biologically context-aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) which computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions cause 3-base periodicity through an unbalanced nucleotide distribution, producing a relatively high bias in nucleotide usage, this fundamental characteristic of nucleotides has been exploited in FAWMF to suppress signal noise. Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions compared to fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding-region identification, i.e. 40% to 125%, compared to other conventional window filters tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study shows that conventional fixed-length window filters applied to DNA signals do not achieve significant results, since the nucleotides carry genetic-code context. The proposed FAWMF algorithm is adaptive and performs significantly better in processing DNA signal contents. The algorithm applied to a variety of DNA datasets produced noteworthy discrimination between coding and non-coding regions contrary to fixed-length conventional window filters.
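
    The FAWMF itself is not published as code here, but the 3-base periodicity the abstract exploits can be illustrated with a simple sliding-window measure: project each binary base-indicator sequence onto the DFT bin at frequency 1/3 and sum the power. A hypothetical sketch (window length and test sequences are arbitrary):

        import numpy as np

        def coding_score(seq, win=120):
            # Sum, over the four bases, of the DFT power at frequency 1/3 in a
            # sliding window over the binary indicator sequence of each base.
            seq = seq.upper()
            n = len(seq)
            score = np.zeros(n)
            e = np.exp(-2j * np.pi * np.arange(win) / 3)       # period-3 DFT bin
            for base in "ACGT":
                u = np.array([c == base for c in seq], float)
                for i in range(n - win):
                    score[i + win // 2] += abs(np.dot(u[i:i + win], e)) ** 2
            return score

        rng = np.random.default_rng(1)
        noncoding = "".join(rng.choice(list("ACGT"), 300))     # random background
        coding = "ATG" * 100                                   # strongly periodic toy ORF
        s = coding_score(noncoding + coding)
        print(f"noncoding ~ {s[150]:.0f}  <<  coding ~ {s[450]:.0f}")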

  19. A New Multistage Lattice Vector Quantization with Adaptive Subband Thresholding for Image Compression

    Directory of Open Access Journals (Sweden)

    J. Soraghan

    2007-01-01

    Full Text Available Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by “blowing out” the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable length coding procedure using Golomb codes is used to compress the codebook index, which produces a very efficient and fast technique for entropy coding. Experimental results using the MLVQ are shown to be significantly better than JPEG 2000 and the recent VQ techniques for various test images.
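
    The Golomb coding of codebook indices mentioned above is easy to sketch in its Rice form (Golomb with a power-of-two parameter m = 2^k), the variant commonly used for fast entropy coding; this toy encoder/decoder is illustrative and not the paper's implementation:

        def rice_encode(n, k):
            # Golomb-Rice: unary quotient (q ones + terminating zero), then the
            # k-bit binary remainder. Small, frequent indices get short codewords.
            q, r = n >> k, n & ((1 << k) - 1)
            return "1" * q + "0" + format(r, f"0{k}b")

        def rice_decode(bits, k):
            q = bits.index("0")                  # length of the unary prefix
            r = int(bits[q + 1:q + 1 + k], 2)
            return (q << k) | r, bits[q + 1 + k:]

        for n in (0, 1, 5, 23):
            print(n, "->", rice_encode(n, k=2))

        stream = "".join(rice_encode(n, 2) for n in (3, 7, 0))
        decoded = []
        while stream:
            v, stream = rice_decode(stream, 2)
            decoded.append(v)
        print(decoded)                           # [3, 7, 0]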


  1. The first accident simulation of Angra-1 power plant using the ALMOD computer code

    International Nuclear Information System (INIS)

    Camargo, C.T.M.

    1981-01-01

    The implementation of the German computer code ALMOD and its application to the calculation of Angra-1, a nuclear power plant different from the KWU power plants, demanded study and adaptation of models; for economic reasons, simplifications and optimizations were also necessary. The first results define the analytical potential of the computer code, confirm the adequacy of the adaptations made, and provide relevant conclusions about the Angra-1 safety analysis, showing at the same time areas in which the model can be applied or simply improved. (E.G.) [pt

  2. Bilayer Protograph Codes for Half-Duplex Relay Channels

    Science.gov (United States)

    Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria

    2013-01-01

    Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). By using this additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most of the previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted without extensive re-optimization for various channel conditions. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive
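
    The "synthesizing a bilayer structure with a protograph" above refers to the copy-and-permute (lifting) construction used for protograph LDPC codes generally; the sketch below shows only that generic lifting step, with a made-up base matrix and random circulant shifts rather than the optimized JPL design:

        import numpy as np

        def lift_protograph(base, Z, seed=0):
            # Copy-and-permute lifting: every edge in the base (protograph)
            # matrix becomes a Z x Z circulant permutation block.
            rng = np.random.default_rng(seed)
            m, n = base.shape
            H = np.zeros((m * Z, n * Z), dtype=np.uint8)
            for i in range(m):
                for j in range(n):
                    for _ in range(base[i, j]):          # parallel edges allowed
                        P = np.roll(np.eye(Z, dtype=np.uint8),
                                    rng.integers(Z), axis=1)
                        H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] ^= P
            return H

        base = np.array([[1, 1, 1, 0],                   # made-up 2 x 4 protograph
                         [0, 1, 1, 1]])
        H = lift_protograph(base, Z=8)                   # 16 x 32 parity-check matrix
        print(H.shape, set(H.sum(axis=1)))               # row weights = base degrees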

  3. Adaptive evolution of the matrix extracellular phosphoglycoprotein in mammals

    Directory of Open Access Journals (Sweden)

    Machado João

    2011-11-01

    Full Text Available Abstract Background Matrix extracellular phosphoglycoprotein (MEPE) belongs to a family of small integrin-binding ligand N-linked glycoproteins (SIBLINGs) that play a key role in skeleton development, particularly in mineralization, phosphate regulation and osteogenesis. MEPE associated disorders cause various physiological effects, such as loss of bone mass, tumors and disruption of renal function (hypophosphatemia). The study of this developmental gene from an evolutionary perspective could provide valuable insights on the adaptive diversification of morphological phenotypes in vertebrates. Results Here we studied the adaptive evolution of the MEPE gene in 26 Eutherian mammals and three birds. The comparative genomic analyses revealed a high degree of evolutionary conservation of some coding and non-coding regions of the MEPE gene across mammals indicating a possible regulatory or functional role likely related with mineralization and/or phosphate regulation. However, the majority of the coding region had a fast evolutionary rate, particularly within the largest exon (1467 bp). Rodentia and Scandentia had distinct substitution rates with an increased accumulation of both synonymous and non-synonymous mutations compared with other mammalian lineages. Characteristics of the gene (e.g. biochemical, evolutionary rate, and intronic conservation) differed greatly among lineages of the eight mammalian orders. We identified 20 sites with significant positive selection signatures (codon and protein level) outside the main regulatory motifs (dentonin and ASARM), suggestive of an adaptive role. Conversely, we find three sites under selection in the signal peptide and one in the ASARM motif that were supported by at least one selection model. The MEPE protein tends to accumulate amino acids promoting disorder and potential phosphorylation targets. Conclusion MEPE shows a high number of selection signatures, revealing the crucial role of positive selection in the

  4. Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability based code calibration. First, basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values. Thereafter the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore, suggested values for acceptable annual failure probabilities are given for ultimate and serviceability limit states. Finally the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes.
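
    Two quantitative ingredients mentioned above are easy to make concrete: the FORM relation between the reliability index β and the failure probability, Pf = Φ(−β), and a textbook partial safety factor for a normally distributed resistance (characteristic value at the 5% fractile, design value from the FORM sensitivity factor α). The numbers below are illustrative only, not the JCSS/CodeCal calibration itself:

        from math import erf, sqrt

        def pf_from_beta(beta):
            # FORM relation: Pf = Phi(-beta), Phi the standard normal CDF.
            return 0.5 * (1.0 + erf(-beta / sqrt(2.0)))

        def gamma_resistance(V, beta, alpha=0.8, k=1.645):
            # Normal resistance, values normalized by the mean:
            # characteristic value = 5% fractile, design value from FORM.
            x_k = 1.0 - k * V
            x_d = 1.0 - alpha * beta * V
            return x_k / x_d                     # partial safety factor gamma_m

        for beta in (3.8, 4.2, 4.7):             # typical target reliability indices
            print(f"beta={beta}  Pf={pf_from_beta(beta):.1e}  "
                  f"gamma_m={gamma_resistance(0.10, beta):.2f}")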

  5. Importance biasing scheme implemented in the PRIZMA code

    International Nuclear Information System (INIS)

    Kandiev, I.Z.; Malyshkin, G.N.

    1997-01-01

    The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources and material composition, and to obtain parameters specified by the user. There is a capability to calculate the paths of particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles) taking into account possible transmutations. An importance biasing scheme was implemented to solve problems which require calculation of functionals related to small probabilities (for example, problems of protection against radiation, problems of detection, etc.). The scheme enables the trajectory-building algorithm to adapt to problem peculiarities.
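
    The idea of importance biasing for small-probability functionals can be shown in one dimension: sample from a density shifted toward the rare region and weight each score by the ratio of true to biased densities. This is a generic illustration of the weighting principle, not PRIZMA's scheme:

        import numpy as np

        rng = np.random.default_rng(0)
        N, c = 100_000, 4.5                      # estimate P(X > 4.5), X ~ N(0,1)

        x = rng.standard_normal(N)               # analog sampling: ~0 hits expected
        print("analog estimate:", np.mean(x > c))

        # Importance biasing: draw from N(c, 1), which floods the rare region,
        # and weight each sample by f(y)/g(y) to keep the estimator unbiased.
        y = rng.standard_normal(N) + c
        w = np.exp(-y**2 / 2) / np.exp(-(y - c)**2 / 2)
        print("biased estimate:", np.mean((y > c) * w))   # ~3.4e-6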

  6. Computerized coding system for life narratives to assess students' personality adaption

    NARCIS (Netherlands)

    He, Q.; Veldkamp, B.P.; Westerhof, G.J.; Pechenizkiy, Mykola; Calders, Toon; Conati, Cristina; Ventura, Sebastian; Romero, Cristobal; Stamper, John

    2011-01-01

    The present study is a trial in developing an automatic computerized coding framework with text mining techniques to identify the characteristics of redemption and contamination in life narratives written by undergraduate students. In the initial stage of text classification, the keyword-based

  7. How Harmful are Adaptation Restrictions

    OpenAIRE

    Bruin, de, K.C.; Dellink, R.B.

    2009-01-01

    The dominant assumption in economic models of climate policy remains that adaptation will be implemented in an optimal manner. There are, however, several reasons why optimal levels of adaptation may not be attainable. This paper investigates the effects of suboptimal levels of adaptation, i.e. adaptation restrictions, on the composition and level of climate change costs and on welfare. Several adaptation restrictions are identified and then simulated in a revised DICE model, extended with ad...

  8. Nifty Native Implemented Functions: low-level meets high-level code

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Erlang Native Implemented Functions (NIFs) allow developers to implement functions in C (or C++) rather than Erlang. NIFs are useful for integrating high performance or legacy code in Erlang applications. The talk will cover how to implement NIFs, use cases, and common pitfalls when employing them. Further, we will discuss how and why Erlang applications, such as Riak, use NIFs. About the speaker Ian Plosker is the Technical Lead, International Operations at Basho Technologies, the makers of the open source database Riak. He has been developing software professionally for 10 years and programming since childhood. Prior to working at Basho, he developed everything from CMS to bioinformatics platforms to corporate competitive intelligence management systems. At Basho, he's been helping customers be incredibly successful using Riak.


  10. A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems

    Directory of Open Access Journals (Sweden)

    M. Benyoucef

    2008-01-01

    Full Text Available We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses the spreading codes directly instead of the cross-correlation matrix, and thus does not require the calculation of the cross-correlation matrix (which requires 2NK² floating point operations (flops), where N is the processing gain and K is the number of users), which significantly reduces the overall computational complexity. Thus it is suitable for long-code CDMA systems such as IS-95 and UMTS, where the cross-correlation matrix changes every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.
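
    The claimed equivalence can be checked numerically: an SOR sweep (group size one) over the matched-filter outputs acts as successive interference cancellation, and for an over-relaxation factor in ]0, 2[ it converges to the decorrelator solution. The sketch below forms the cross-correlation matrix explicitly for clarity, which is exactly the step the paper's chip-level scheme avoids:

        import numpy as np

        rng = np.random.default_rng(0)
        N, K = 64, 8                                       # processing gain, users
        S = rng.choice([-1.0, 1.0], (N, K)) / np.sqrt(N)   # spreading codes
        b = rng.choice([-1.0, 1.0], K)                     # transmitted symbols
        y = S.T @ (S @ b + 0.05 * rng.standard_normal(N))  # matched-filter outputs

        R = S.T @ S                # cross-correlation matrix (formed for clarity;
                                   # the chip-level scheme avoids this step)
        d = np.zeros(K)
        omega = 1.2                # over-relaxation factor, must lie in ]0, 2[
        for _ in range(100):       # each sweep = one successive-cancellation stage
            for k in range(K):
                d[k] += omega * (y[k] - R[k] @ d) / R[k, k]

        print(np.allclose(d, np.linalg.solve(R, y)))       # converged to decorrelator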

  11. Coding Strategies and Implementations of Compressive Sensing

    Science.gov (United States)

    Tsai, Tsung-Han

    This dissertation studies coding strategies in computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager; increasing sensitivity in any one dimension can significantly compromise the others. This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity beyond those of conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of several hundred. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or
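
    A minimal example of the compressive-sensing principle underlying such coding strategies: random linear measurements of a sparse signal, reconstructed by iterative soft thresholding (ISTA). The dimensions and the l1 weight are arbitrary; this is generic compressed sensing, not one of the dissertation's instruments:

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, s = 256, 80, 8                    # signal length, measurements, sparsity

        x = np.zeros(n)                         # s-sparse ground truth
        x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # random (coded) sensing matrix
        y = A @ x                               # compressive measurements, m << n

        # ISTA: gradient step on ||Az - y||^2 followed by soft thresholding (l1).
        L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
        lam = 0.01
        z = np.zeros(n)
        for _ in range(500):
            g = z - A.T @ (A @ z - y) / L
            z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

        print("relative error:", np.linalg.norm(z - x) / np.linalg.norm(x))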

  12. Improvement of FLOWER code and its application in Daya Bay

    International Nuclear Information System (INIS)

    Zhang Shaodong; Zhang Yongxing

    1995-01-01

    FLOWER, a computer code recommended by the USNRC for assessing environmental impact in tidal regions, was adapted and improved so as to be suitable to deal with the influence of the drift stream along the seashore on the dilution of contaminants and heat in the bay mouth. The code outputs were extended with more intermediate results, such as average concentrations and temperature values for all tides considered. Finally, the modified code was applied to the dispersion calculation of heat and liquid effluents from the Daya Bay Nuclear Power Plant, and the impacts of routine operation of the plant on Daya Bay sea waters were given

  13. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which is formed by the mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer, when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred at a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  14. Development of sub-channel/system coupled code and its application to a supercritical water-cooled test loop

    International Nuclear Information System (INIS)

    Liu, X.J.; Yang, T.; Cheng, X.

    2014-01-01

    To analyze the local thermal-hydraulic parameters in the supercritical water reactor-fuel qualification test (SCWR-FQT) fuel bundle with a flow blockage, a coupled sub-channel and system code system is developed in this paper. Both the sub-channel code and the system code are adapted to transient analysis of SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the coupled code, the whole system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal-hydraulic parameters are predicted by the sub-channel code COBRA-SC. Sensitivity analyses are carried out in the ATHLET-SC and COBRA-SC codes, respectively, to identify the appropriate models for description of the flow blockage phenomenon in the test loop. Some measures to mitigate the accident consequences are also trialed to demonstrate their effectiveness. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test, and the peak cladding temperature caused by blockage in the fuel assembly can be reduced effectively by the safety measures of SCWR-FQT. (author)
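
    The data-transfer coupling described above can be caricatured with two toy solvers exchanging interface data once per time step: a coarse "system" model supplies the inlet boundary condition, and a finer "sub-channel" model resolves the local axial temperature field. Everything below (names, physics, numbers) is invented for illustration; it is not ATHLET-SC or COBRA-SC:

        import numpy as np

        # Coarse "system" solver: loop inlet temperature with a step transient.
        def system_step(T_in, t, dt):
            target = 280.0 + (40.0 if t > 5.0 else 0.0)   # transient at t = 5 s
            return T_in + dt * (target - T_in) / 2.0       # first-order lag

        # Fine "sub-channel" solver: 1-D advection with uniform heating,
        # dT/dt = -u dT/dz + q, driven by the coupled inlet boundary value.
        def subchannel_step(T, T_in, dt, q=30.0, u=1.0, dz=0.1):
            T_new = T.copy()
            T_new[0] = T_in                                # interface data transfer
            T_new[1:] = T[1:] + dt * (-u * (T[1:] - T[:-1]) / dz + q)
            return T_new

        T_in, T_axial, dt = 280.0, np.full(20, 280.0), 0.01
        for n in range(1500):                              # 15 s coupled transient
            T_in = system_step(T_in, n * dt, dt)           # global (loop) solution
            T_axial = subchannel_step(T_axial, T_in, dt)   # local (bundle) solution
        print("peak local temperature:", round(T_axial.max(), 1))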

  15. Perceptions, impacts and adaptation of tropical cyclones in the Southwest Pacific: an urban perspective from Fiji, Vanuatu and Tonga

    Science.gov (United States)

    Magee, A. D.; Verdon-Kidd, D. C.; Kiem, A. S.; Royle, S. A.

    2015-11-01

    To better understand perceptions, impacts and adaptation strategies related to tropical cyclones (TCs) in urban environments of the Southwest Pacific (SWP), a survey (with 130 participants) was conducted across three island nations: Fiji, Vanuatu and Tonga. The key aims of this study include: (i) understanding local perceptions of TC activity, (ii) investigating physical impacts of TC activity, and (iii) uncovering adaptation strategies used to offset the impacts of TCs. It was found that current methods of adaptation generally occur at the local level immediately prior to a TC event (preparation of property, gathering of food, setting up of community centres). This method of adaptation appears to be effective; however, higher-level adaptation measures (such as the building codes developed in Fiji) may reduce vulnerability further. The survey responses also highlight that there is significant scope to provide education programs specifically aimed at improving the understanding of weather-related aspects of TCs. Finally, we investigate the potential to merge traditional ecological knowledge with the non-traditional knowledge of empirical and climate-mode-based weather forecasts to improve forecasting of TCs, which would ultimately reduce vulnerability and increase adaptive capacity.

  16. Sub-channel/system coupled code development and its application to SCWR-FQT loop

    International Nuclear Information System (INIS)

    Liu, X.J.; Cheng, X.

    2015-01-01

    Highlights:
    • A coupled code is developed for SCWR accident simulation.
    • The feasibility of the code is shown by application to the SCWR-FQT loop.
    • Some measures are selected by sensitivity analysis.
    • The peak cladding temperature can be reduced effectively by the proposed measures.
    Abstract: In the frame of the Super-Critical Reactor In Pipe Test Preparation (SCRIPT) project in China, one of the challenging tasks is to predict the transient performance of the SuperCritical Water Reactor-Fuel Qualification Test (SCWR-FQT) loop under some accident conditions. Several thermal-hydraulic codes (system code, sub-channel code) were selected to perform the safety analysis. However, the system code cannot simulate the local behavior of the test bundle, and the sub-channel code is incapable of calculating the whole system behavior of the test loop. Therefore, to combine the merits of both codes and minimize their shortcomings, a coupled sub-channel and system code system is developed in this paper. Both the sub-channel code COBRA-SC and the system code ATHLET-SC are adapted to transient analysis of SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the newly developed coupled code, the whole system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal-hydraulic parameters are predicted by the sub-channel code COBRA-SC. The codes are utilized to get the local thermal-hydraulic parameters in the SCWR-FQT fuel bundle under some accident cases (e.g. a flow blockage during LOCA). Some measures to mitigate the accident consequences are proposed by the sensitivity study and trialed to demonstrate their effectiveness in the coupled simulation. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test loop, and the peak cladding temperature caused by blockage in the fuel bundle can be reduced effectively by the safety measures


  18. The use of best estimate codes to improve the simulation in real time

    International Nuclear Information System (INIS)

    Rivero, N.; Esteban, J. A.; Lenhardt, G.

    2007-01-01

    Best estimate codes are assumed to be the technology solution providing the most realistic and accurate response. Best estimate technology provides a complementary solution to the conservative simulation technology usually applied to determine plant safety margins and perform safety-related studies. In the early 90's, within the MAS project, Tecnatom pioneered the initiative to implement best estimate codes in its training simulators. The result of this project was the implementation of the first six-equation thermal-hydraulic code worldwide (TRAC-RT) running in a training environment. To meet real-time and other specific training requirements, it was necessary to overcome important difficulties. Tecnatom has just adapted the Global Nuclear Fuel core design code PANAC 11, and is about to complete the adaptation of the General Electric TRACG04 thermal-hydraulic code. This technology features a unique solution for nuclear plants aiming at providing the highest fidelity in simulation, enabling the simulator to be considered a multipurpose (engineering and training) simulation platform. Besides, a visual environment designed to optimize the models' life cycle, covering both pre- and post-processing activities, is in its late development phase. (Author)

  19. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  20. Coded Shack-Hartmann Wavefront Sensor

    KAUST Repository

    Wang, Congli

    2016-12-01

    Wavefront sensing is an old yet fundamental problem in adaptive optics. Traditional wavefront sensors are limited by time-consuming measurements, complicated and expensive setups, or low theoretically achievable resolution. In this thesis, we introduce an optically encoded and computationally decodable novel approach to the wavefront sensing problem: the Coded Shack-Hartmann. Our proposed Coded Shack-Hartmann wavefront sensor is inexpensive, easy to fabricate and calibrate, highly sensitive, accurate, and of high resolution. Most importantly, using simple optical flow tracking combined with a phase smoothness prior, and with the help of modern optimization techniques, the computational part is split, efficient, and parallelized; hence real-time performance has been achieved on a Graphics Processing Unit (GPU), with high accuracy as well. This is validated by experimental results. We also show how the optical flow intensity consistency term can be derived using rigorous scalar diffraction theory with proper approximations. This is the true physical law behind our model. Based on this insight, the Coded Shack-Hartmann can be interpreted as an illumination post-modulated wavefront sensor. This offers a new theoretical approach for wavefront sensor design.

  1. High-resolution multi-code implementation of unsteady Navier-Stokes flow solver based on paralleled overset adaptive mesh refinement and high-order low-dissipation hybrid schemes

    Science.gov (United States)

    Li, Gaohua; Fu, Xiang; Wang, Fuxin

    2017-10-01

    The low-dissipation, high-order accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), is implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. The results for flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.

  2. Adaptive antenna array algorithms and their impact on code division ...

    African Journals Online (AJOL)

    In this paper, four blind adaptive array algorithms are developed, and their performance under different test situations (e.g. an AWGN (Additive White Gaussian Noise) channel and a multipath environment) is studied. A MATLAB test bed is created to show their performance in these two test situations and an optimum one ...
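
    The record does not name the four algorithms, but the constant modulus algorithm (CMA) is the classic blind criterion for such arrays: it needs no training sequence and simply penalizes deviation of the array output from unit modulus. A hypothetical sketch with a 4-element array, one constant-modulus source and one interferer:

        import numpy as np

        rng = np.random.default_rng(0)
        M, n = 4, 5000                                # antennas, snapshots

        def steering(theta_deg):                      # half-wavelength element spacing
            k = np.pi * np.sin(np.radians(theta_deg))
            return np.exp(1j * k * np.arange(M))

        s = np.exp(2j * np.pi * rng.random(n))        # unit-modulus source at 0 deg
        i = 0.8 * np.exp(2j * np.pi * rng.random(n))  # interferer at 40 deg
        X = (np.outer(steering(0), s) + np.outer(steering(40), i)
             + 0.1 * (rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n))))

        w = np.zeros(M, complex)
        w[0] = 1.0                                    # start from a single element
        mu = 0.005
        for t in range(n):                            # CMA(2,2) stochastic gradient
            xt = X[:, t]
            yt = np.conj(w) @ xt
            w -= mu * (np.abs(yt)**2 - 1.0) * np.conj(yt) * xt

        out = np.conj(w) @ X
        print("modulus spread before/after:",
              np.std(np.abs(X[0])), np.std(np.abs(out)))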

  3. Design and Analysis of Self-Healing Tree-Based Hybrid Spectral Amplitude Coding OCDMA System

    Directory of Open Access Journals (Sweden)

    Waqas A. Imtiaz

    2017-01-01

    Full Text Available This paper presents an efficient tree-based hybrid spectral amplitude coding optical code division multiple access (SAC-OCDMA) system that is able to provide high-capacity transmission along with fault detection and restoration throughout the passive optical network (PON). An enhanced multidiagonal (EMD) code is adapted to elevate the system's performance, which negates multiple access interference and the associated phase-induced intensity noise through an efficient two-matrix structure. Moreover, system connection availability is enhanced through an efficient protection architecture with tree and star-ring topology at the feeder and distribution level, respectively. The proposed hybrid architecture aims to provide seamless transmission of information at minimum cost. A mathematical model based on Gaussian approximation is developed to analyze the performance of the proposed setup, followed by simulation analysis for validation. It is observed that the proposed system supports 64 subscribers, operating at data rates of 2.5 Gbps and above. Moreover, survivability and cost analysis in comparison with existing schemes show that the proposed tree-based hybrid SAC-OCDMA system provides the required redundancy at minimum cost of infrastructure and operation.

  4. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling. Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  5. The Coding Process and Its Challenges

    Directory of Open Access Journals (Sweden)

    Judith A. Holton, Ph.D.

    2010-02-01

    Full Text Available Coding is the core process in classic grounded theory methodology. It is through coding that the conceptual abstraction of data and its reintegration as theory takes place. There are two types of coding in a classic grounded theory study: substantive coding, which includes both open and selective coding procedures, and theoretical coding. In substantive coding, the researcher works with the data directly, fracturing and analysing it, initially through open coding for the emergence of a core category and related concepts and then subsequently through theoretical sampling and selective coding of data to theoretically saturate the core and related concepts. Theoretical saturation is achieved through constant comparison of incidents (indicators in the data to elicit the properties and dimensions of each category (code. This constant comparing of incidents continues until the process yields the interchangeability of indicators, meaning that no new properties or dimensions are emerging from continued coding and comparison. At this point, the concepts have achieved theoretical saturation and the theorist shifts attention to exploring the emergent fit of potential theoretical codes that enable the conceptual integration of the core and related concepts to produce hypotheses that account for relationships between the concepts thereby explaining the latent pattern of social behaviour that forms the basis of the emergent theory. The coding of data in grounded theory occurs in conjunction with analysis through a process of conceptual memoing, capturing the theorist’s ideation of the emerging theory. Memoing occurs initially at the substantive coding level and proceeds to higher levels of conceptual abstraction as coding proceeds to theoretical saturation and the theorist begins to explore conceptual reintegration through theoretical coding.

  6. Learning to adapt: Organisational adaptation to climate change impacts

    NARCIS (Netherlands)

    Berkhout, F.G.H.; Hertin, J.; Gann, D.M.

    2006-01-01

    Analysis of human adaptation to climate change should be based on realistic models of adaptive behaviour at the level of organisations and individuals. The paper sets out a framework for analysing adaptation to the direct and indirect impacts of climate change in business organisations with new

  7. Uncertainty in adaptive capacity; Incertitudes dans la capacite d'adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Neil Adger, W.; Vincent, K. [East Anglia Univ., Tyndall Centre for Climate Change Research, School of Environmental Sciences, Norwich (United Kingdom)

    2005-03-01

    The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, and literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. (authors)

  8. Verification of the CONPAS (CONtainment Performance Analysis System) code package

    International Nuclear Information System (INIS)

    Kim, See Darl; Ahn, Kwang Il; Song, Yong Man; Choi, Young; Park, Soo Yong; Kim, Dong Ha; Jin, Young Ho.

    1997-09-01

    CONPAS is a computer code package to integrate the numerical, graphical, and results-oriented aspects of Level 2 probabilistic safety assessment (PSA) for nuclear power plants under a PC window environment automatically. For the integrated analysis of Level 2 PSA, the code utilizes four distinct but closely related modules: (1) ET Editor, (2) Computer, (3) Text Editor, and (4) Mechanistic Code Plotter. Compared with other existing computer codes for Level 2 PSA, the CONPAS code provides several advanced features: computational aspects including systematic uncertainty analysis, importance analysis, sensitivity analysis and data interpretation; reporting aspects including tabulation and graphics; and a user-friendly interface. The computational performance of CONPAS has been verified through a Level 2 PSA for a reference plant. The results of the CONPAS code were compared with an existing Level 2 PSA code (NUCAP+), and the comparison proves that CONPAS is appropriate for Level 2 PSA. (author). 9 refs., 8 tabs., 14 figs

  9. 3D equilibrium codes for mirror machines

    International Nuclear Information System (INIS)

    Kaiser, T.B.

    1983-01-01

    The codes developed for computing three-dimensional guiding-center equilibria for quadrupole tandem mirrors are discussed. TEBASCO (Tandem Equilibrium and Ballooning Stability Code) is a code developed at LLNL that uses a further expansion of the paraxial equilibrium equation in powers of β (plasma pressure/magnetic pressure). It has been used to guide the design of the TMX-U and MFTF-B experiments at Livermore. Its principal weakness is its perturbative nature, which renders its validity for high-β calculations open to question. In order to compute high-β equilibria, the reduced-MHD technique that has proven useful for determining toroidal equilibria was adapted to the tandem mirror geometry. In this approach, the paraxial expansion of the MHD equations yields a set of coupled nonlinear equations of motion, valid for arbitrary β, that are solved as an initial-value problem. Two particular formulations have been implemented in computer codes developed at NYU/Kyoto U. and LLNL. They differ primarily in the type of grid, the location of the lateral boundary, the damping techniques employed, and the method of calculating pressure-balance equilibrium. Discussions of these codes are presented in this paper. (Kato, T.)

  10. Adaptation to Sea Level Rise: A Multidisciplinary Analysis for Ho Chi Minh City, Vietnam

    Science.gov (United States)

    Scussolini, Paolo; Tran, Thi Van Thu; Koks, Elco; Diaz-Loaiza, Andres; Ho, Phi Long; Lasage, Ralph

    2017-12-01

    One of the most critical impacts of sea level rise is that flooding suffered by ever larger settlements in tropical deltas will increase. Here we look at Ho Chi Minh City, Vietnam, and quantify the threats that coastal floods pose to safety and to the economy. For this, we produce flood maps through hydrodynamic modeling and, by combining these with data sets of exposure and vulnerability, we estimate two indicators of risk: the damage to assets and the number of potential casualties. We simulate current and future (2050 and 2100) flood risk using IPCC scenarios of sea level rise and socioeconomic change. We find that annual damage may grow by more than 1 order of magnitude, and potential casualties may grow 5-20-fold until the end of the century, in the absence of adaptation. Impacts depend strongly on the climate and socioeconomic scenarios considered. Next, we simulate the implementation of adaptation measures and calculate their effectiveness in reducing impacts. We find that a ring dike would protect the inner city but increase risk in more rural districts, whereas elevating areas at risk and dryproofing buildings will reduce impacts to the city as a whole. Most measures perform well from an economic standpoint. Combinations of measures seem to be the optimal solution and may address potential equity conflicts. Based on our results, we design possible adaptation pathways for Ho Chi Minh City for the coming decades; these can inform policy-making and strategic thinking.

  11. Implementation of computer codes for performance assessment of the Republic repository of LLW/ILW Mochovce

    International Nuclear Information System (INIS)

    Hanusik, V.; Kopcani, I.; Gedeon, M.

    2000-01-01

    This paper describes the selection and adaptation of computer codes required to assess the effects of radionuclide release from the Mochovce Radioactive Waste Disposal Facility. The paper also demonstrates how these codes can be integrated into a performance assessment methodology. The considered codes include DUST-MS for source-term release, MODFLOW for ground-water flow and BS for transport through the biosphere and dose assessment. (author)

  12. Regional Atmospheric Transport Code for Hanford Emission Tracking, Version 2 (RATCHET2)

    International Nuclear Information System (INIS)

    Ramsdell, James V.; Rishel, Jeremy P.

    2006-01-01

    This manual describes the atmospheric model and computer code for the Atmospheric Transport Module within SAC. The Atmospheric Transport Module, called RATCHET2, calculates the time-integrated air concentration and surface deposition of airborne contaminants to the soil. The RATCHET2 code is an adaptation of the Regional Atmospheric Transport Code for Hanford Emissions Tracking (RATCHET). The original RATCHET code was developed to perform the atmospheric transport for the Hanford Environmental Dose Reconstruction Project. Fundamentally, the two sets of codes are identical; no capabilities have been deleted from the original version of RATCHET. Most modifications are generally limited to revision of the run-specification file to streamline the simulation process for SAC.


  14. On Coding Non-Contiguous Letter Combinations

    Directory of Open Access Journals (Sweden)

    Frédéric Dandurand

    2011-06-01

    Full Text Available Starting from the hypothesis that printed word identification initially involves the parallel mapping of visual features onto location-specific letter identities, we analyze the type of information that would be involved in optimally mapping this location-specific orthographic code onto a location-invariant lexical code. We assume that some intermediate level of coding exists between individual letters and whole words, and that this involves the representation of letter combinations. We then investigate the nature of this intermediate level of coding given the constraints of optimality. This intermediate level of coding is expected to compress data while retaining as much information as possible about word identity. Information conveyed by letters is a function of how much they constrain word identity and how visible they are. Optimization of this coding is a combination of minimizing resources (using the most compact representations) and maximizing information. We show that in a large proportion of cases, non-contiguous letter sequences contain more information than contiguous sequences, while at the same time requiring less precise coding. Moreover, we found that the best predictor of human performance in orthographic priming experiments was within-word ranking of conditional probabilities, rather than average conditional probabilities. We conclude that from an optimality perspective, readers learn to select certain contiguous and non-contiguous letter combinations as information that provides the best cue to word identity.

  15. HIV-1 Adaptation to Antigen Processing Results in Population-Level Immune Evasion and Affects Subtype Diversification

    DEFF Research Database (Denmark)

    Tenzer, Stefan; Crawford, Hayley; Pymm, Phillip

    2014-01-01

    these regions encode epitopes presented by ~30 more common HLA variants. By combining epitope processing and computational analyses of the two HIV subtypes responsible for ~60% of worldwide infections, we identified a hitherto unrecognized adaptation to the antigen-processing machinery through substitutions...... of intrapatient adaptations, is predictable, facilitates viral subtype diversification, and increases global HIV diversity. Because low epitope abundance is associated with infrequent and weak T cell responses, this most likely results in both population-level immune evasion and inadequate responses in most...

  16. Manchester Coding Option for SpaceWire: Providing Choices for System Level Design

    Science.gov (United States)

    Rakow, Glenn; Kisin, Alex

    2014-01-01

    This paper proposes an optional coding scheme for SpaceWire in lieu of the current Data Strobe scheme, for three reasons: first, to provide a straightforward method for electrical isolation of the interface; second, to provide the ability to reduce the mass and bend radius of the SpaceWire cable; and third, to provide a means for a common physical layer over which multiple spacecraft onboard data link protocols could operate for a wide range of data rates. The intent is to accomplish these goals without significant change to existing SpaceWire design investments. The ability to optionally use Manchester coding in place of the current Data Strobe coding provides the ability to DC-balance the signal transitions, unlike the SpaceWire Data Strobe coding, and therefore the ability to isolate the electrical interface without concern. Additionally, because the Manchester code has the clock and data encoded on the same signal, the number of wires in the existing SpaceWire cable could optionally be reduced by 50%. This reduction could be an important consideration for many users of SpaceWire, as indicated by the effort already underway in the SpaceWire working group to reduce the cable mass and bend radius by eliminating shields. However, reducing the signal count by half would provide even greater gains. It is proposed to restrict the data rate for the optional Manchester coding to a fixed data rate of 10 Megabits per second (Mbps) in order to make the necessary changes simple and still able to run in current radiation-tolerant Field Programmable Gate Arrays (FPGAs). Even with this constraint, 10 Mbps will meet the needs of many applications where SpaceWire is used. These include command and control applications and many instrument applications with moderate data rates. For most NASA flight implementations, SpaceWire designs are in rad-tolerant FPGAs, and the desire to preserve the heritage design investment is important for cost and risk considerations. The
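
    The two properties the proposal relies on (DC balance and an embedded clock) follow directly from the code's forced mid-bit transition, as the toy encoder/decoder below illustrates (IEEE 802.3 polarity convention assumed here; SpaceWire's actual convention would be fixed by the standard):

        def manchester_encode(bits):
            # IEEE 802.3 polarity: 1 -> low-high, 0 -> high-low. The forced
            # mid-bit transition embeds the clock and makes the line DC balanced.
            out = []
            for b in bits:
                out += [0, 1] if b else [1, 0]
            return out

        def manchester_decode(chips):
            table = {(0, 1): 1, (1, 0): 0}       # (0,0)/(1,1) would be line errors
            return [table[(chips[i], chips[i + 1])]
                    for i in range(0, len(chips), 2)]

        data = [1, 0, 1, 1, 0, 0, 1]
        line = manchester_encode(data)
        assert manchester_decode(line) == data
        print(line, "->", sum(line), "ones out of", len(line), "chips")  # DC balance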

  17. Simulations of Laboratory Astrophysics Experiments using the CRASH code

    Science.gov (United States)

    Trantham, Matthew; Kuranz, Carolyn; Fein, Jeff; Wan, Willow; Young, Rachel; Keiter, Paul; Drake, R. Paul

    2015-11-01

    Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction and laser ray tracing. This poster will demonstrate some of the experiments the CRASH code has helped design or analyze including: Kelvin-Helmholtz, Rayleigh-Taylor, magnetized flows, jets, and laser-produced plasmas. This work is funded by the following grants: DEFC52-08NA28616, DE-NA0001840, and DE-NA0002032.

  18. Benefit of adaptive FEC in shared backup path protected elastic optical network.

    Science.gov (United States)

    Guo, Hong; Dai, Hua; Wang, Chao; Li, Yongcheng; Bose, Sanjay K; Shen, Gangxiang

    2015-07-27

    We apply an adaptive forward error correction (FEC) allocation strategy to an Elastic Optical Network (EON) operated with shared backup path protection (SBPP). To maximize the protected network capacity that can be carried, an Integer Linear Programming (ILP) model and a spectrum window plane (SWP)-based heuristic algorithm are developed. Simulation results show that the FEC coding overhead required by the adaptive FEC scheme is significantly lower than that needed by a fixed FEC allocation strategy, resulting in higher network capacity for the adaptive strategy. The adaptive FEC allocation strategy also significantly outperforms the fixed allocation strategy in terms of both spare capacity redundancy and the average FEC coding overhead needed per optical channel. The proposed heuristic algorithm is efficient: it performs close to the ILP model and does much better than the shortest-path algorithm.
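
    A hedged sketch of what adaptive FEC allocation means in practice: pick, per optical channel, the smallest coding overhead whose net coding gain closes that path's margin shortfall. The overhead/gain table and the linear margin model below are invented for illustration, not taken from the paper:

        # Hypothetical (overhead, net coding gain in dB) options.
        FEC_OPTIONS = [(0.07, 5.8), (0.15, 8.0), (0.25, 9.5), (0.33, 10.5)]

        def required_margin_db(path_km, penalty_db_per_km=0.004):
            """Toy margin model: the shortfall grows linearly with path length."""
            return path_km * penalty_db_per_km

        def pick_fec(path_km):
            """Return the least-overhead FEC that closes the margin, else None."""
            need = required_margin_db(path_km)
            for overhead, gain in FEC_OPTIONS:
                if gain >= need:
                    return overhead
            return None  # path infeasible at every supported overhead

        for km in (400, 1500, 2400):
            print(km, "km ->", pick_fec(km))

    A fixed allocation must provision the worst-case overhead on every channel; the adaptive rule spends extra spectrum only on long paths, which is the source of the reported capacity gain.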

  19. Evaluation of a new neutron energy spectrum unfolding code based on an Adaptive Neuro-Fuzzy Inference System (ANFIS).

    Science.gov (United States)

    Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman

    2018-01-17

    The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system. The ANFIS algorithm combines the advantages of fuzzy inference systems and artificial neural networks, and has proved effective in applications such as modeling, control and classification. The neutron pulse height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from simulations performed with the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly (the value in each bin was drawn at random, and each generated spectrum was then normalized). These randomly generated neutron energy spectra were used as the output data of the developed ANFIS computational code in the training step. Calculating a neutron energy spectrum by conventional methods requires solving an inverse problem with a nearly singular response matrix (the determinant of the matrix is close to zero), and conventional solutions of this inverse problem unfold the spectrum with low accuracy. Iterative algorithms, or intelligent algorithms that avoid solving the inverse problem altogether, are therefore usually preferred; avoiding the inverse problem is the main reason for developing intelligent algorithms such as ANFIS for unfolding neutron energy spectra. In the present study, the developed code was used to unfold the neutron energy spectra of 252Cf and 241Am-9Be neutron sources.
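
    The training-data step described above reduces to a few lines; the bin count below is a placeholder, not the paper's discretization:

        import numpy as np

        rng = np.random.default_rng(0)
        n_spectra, n_bins = 4300, 64          # n_bins is an assumed value
        spectra = rng.random((n_spectra, n_bins))
        spectra /= spectra.sum(axis=1, keepdims=True)  # normalization condition
        assert np.allclose(spectra.sum(axis=1), 1.0)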

  20. Error-correction coding for digital communications

    Science.gov (United States)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
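
    As a concrete instance of the syndrome decoding the book covers (standard textbook material, not an excerpt from it), the (7,4) Hamming code corrects any single-bit error:

        import numpy as np

        G = np.array([[1, 0, 0, 0, 1, 1, 0],      # generator, systematic form
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        H = np.array([[1, 1, 0, 1, 1, 0, 0],      # parity-check matrix
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])

        def encode(msg4):
            return (msg4 @ G) % 2

        def decode(recv7):
            """Correct a single-bit error via the syndrome, then strip parity."""
            syndrome = (H @ recv7) % 2
            if syndrome.any():
                # The syndrome equals the column of H at the error position.
                err = next(i for i in range(7)
                           if np.array_equal(H[:, i], syndrome))
                recv7 = recv7.copy()
                recv7[err] ^= 1
            return recv7[:4]

        msg = np.array([1, 0, 1, 1])
        word = encode(msg)
        word[2] ^= 1                               # inject a single-bit error
        assert np.array_equal(decode(word), msg)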

  1. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes whose cyclic component codes are realized as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  2. The PSACOIN level 1B exercise: A probabilistic code intercomparison involving a four compartment biosphere model

    International Nuclear Information System (INIS)

    Klos, R.A.; Sinclair, J.E.; Torres, C.; Mobbs, S.F.; Galson, D.A.

    1991-01-01

    The Probabilistic Systems Assessment Code (PSAC) User Group of the OECD Nuclear Energy Agency has organised a series of code intercomparison studies of relevance to the performance assessment of underground repositories for radioactive wastes - known collectively by the name PSACOIN. The latest of these to be undertaken is designated PSACOIN Level 1b, and the case specification provides a complete assessment model of the behaviour of radionuclides following release into the biosphere. PSACOIN Level 1b differs from other biosphere-oriented intercomparison exercises in that individual dose is the end point of the calculations, rather than an intermediate quantity. The PSACOIN Level 1b case specification describes a simple source term which is used to simulate the release of activity to the biosphere from certain types of near-surface waste repository, the transport of radionuclides through the biosphere and their eventual uptake by humankind. The biosphere sub-model comprises four compartments representing top and deep soil layers, river water and river sediment. The transport of radionuclides between the physical compartments is described by ten transfer coefficients, and doses to humankind arise from the simultaneous consumption of water, fish, meat, milk, and grain as well as from dust inhalation and external γ-irradiation. The parameters of the exposure pathway sub-model are chosen to be representative of an individual living in a small agrarian community. (13 refs., 3 figs., 2 tabs.)
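
    The compartment structure described above amounts to a small linear ODE system; the transfer coefficients and decay constant below are placeholders, not the Level 1b parameter set:

        import numpy as np
        from scipy.integrate import solve_ivp

        # K[i, j] is the transfer rate (1/yr) from compartment j to i, ordered
        # as: top soil, deep soil, river water, river sediment.
        K = np.array([[0.00, 0.02, 0.00, 0.00],
                      [0.05, 0.00, 0.00, 0.00],
                      [0.01, 0.00, 0.00, 0.10],
                      [0.00, 0.00, 0.30, 0.00]])
        decay = 0.01                               # radioactive decay (1/yr)

        def rhs(t, x):
            outflow = K.sum(axis=0) * x            # activity leaving each box
            inflow = K @ x                         # activity arriving from others
            return inflow - outflow - decay * x

        x0 = np.array([1.0, 0.0, 0.0, 0.0])        # unit release into top soil
        sol = solve_ivp(rhs, (0.0, 100.0), x0)
        print(sol.y[:, -1])                        # inventories after 100 years

    Dose would then follow by weighting the compartment inventories with pathway-specific consumption and dose-conversion factors.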

  3. Understanding extreme sea levels for broad-scale coastal impact and adaptation analysis

    Science.gov (United States)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A. B. A.

    2017-07-01

    One of the main consequences of mean sea level rise (SLR) for human settlements is an increase in flood risk due to an increase in the intensity and frequency of extreme sea levels (ESL). While substantial research efforts are directed towards quantifying projections and uncertainties of future global and regional SLR, corresponding uncertainties in contemporary ESL have not been assessed and projections are limited. Here we quantify, for the first time at the global scale, the uncertainties in present-day ESL estimates, which have by default been ignored in broad-scale sea-level rise impact assessments to date. ESL uncertainties exceed those from global SLR projections and, assuming the Paris Agreement goals are met, exceed the projected SLR itself by the end of the century in many regions. Both uncertainties in SLR projections and in ESL estimates need to be understood and combined to fully assess potential impacts and adaptation needs.

  4. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both the motion vectors and the motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences, using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients, give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
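
    A much-simplified stand-in for the decoder-side disparity estimation (the paper uses loopy belief propagation; the plain block matching shown here only conveys the idea):

        import numpy as np

        def disparity_map(left, right, block=8, max_disp=16):
            """Per-block sum-of-absolute-differences search along scanlines."""
            h, w = left.shape
            disp = np.zeros((h // block, w // block), dtype=int)
            for by in range(h // block):
                for bx in range(w // block):
                    y, x = by * block, bx * block
                    patch = left[y:y + block, x:x + block].astype(float)
                    best, best_d = np.inf, 0
                    for d in range(min(max_disp, x) + 1):
                        cand = right[y:y + block,
                                     x - d:x - d + block].astype(float)
                        cost = np.abs(patch - cand).sum()
                        if cost < best:
                            best, best_d = cost, d
                    disp[by, bx] = best_d
            return disp

        rng = np.random.default_rng(0)
        right_img = rng.random((64, 64))
        left_img = np.roll(right_img, 4, axis=1)   # synthetic 4-pixel disparity
        print(disparity_map(left_img, right_img).mean())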

  5. Efficient coding schemes with power allocation using space-time-frequency spreading

    Institute of Scientific and Technical Information of China (English)

    Jiang Haining; Luo Hanwen; Tian Jifeng; Song Wentao; Liu Xingzhao

    2006-01-01

    An efficient space-time-frequency (STF) coding strategy for multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is presented for high-bit-rate data transmission over frequency-selective fading channels. The proposed scheme is a new approach to space-time-frequency coded OFDM (COFDM) that combines OFDM with space-time coding, linear precoding and adaptive power allocation to improve transmission quality in terms of bit error rate and power efficiency. In addition to exploiting the maximum diversity gain in frequency, time and space, the proposed scheme offers a high coding gain and low-complexity decoding. The significant performance improvement of the design is confirmed by numerical simulations.
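
    The classic water-filling rule is the textbook instance of adaptive power allocation across subchannels (the paper's exact allocation rule may differ):

        import numpy as np

        def water_filling(gains, total_power):
            """Allocate power over subchannels with per-unit-power SNRs `gains`."""
            order = np.argsort(gains)[::-1]        # strongest channels first
            g = gains[order]
            n = len(g)
            for k in range(n, 0, -1):
                # Water level if only the k strongest subchannels are active.
                mu = (total_power + np.sum(1.0 / g[:k])) / k
                p = mu - 1.0 / g[:k]
                if p[-1] >= 0:                     # weakest active channel valid
                    power = np.zeros(n)
                    power[order[:k]] = p
                    return power
            return np.zeros(n)

        gains = np.array([2.0, 0.9, 0.4, 0.05])
        print(water_filling(gains, total_power=4.0))  # weakest channel gets 0

    Strong subchannels receive most of the power and deeply faded ones are switched off, which is where the bit error rate and power-efficiency gains come from.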

  6. Optimal patch code design via device characterization

    Science.gov (United States)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, a "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized with respect to printing and measurement effort and to decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.
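
    A minimal sketch of the design idea, with an invented noise figure: space the code levels uniformly in CIE L* (lightness), where measurement distances are approximately uniform, rather than in CMYK:

        import numpy as np

        def design_levels(n_levels, l_min=10.0, l_max=90.0, noise_sigma=1.5):
            """Place `n_levels` code levels uniformly in L*; check robustness."""
            levels = np.linspace(l_min, l_max, n_levels)
            spacing = levels[1] - levels[0]
            # Adjacent levels should sit several sigma apart to decode reliably.
            assert spacing > 4 * noise_sigma, "too many levels for this noise"
            return levels

        print(design_levels(8))

    A measured device characterization would replace the fixed `noise_sigma` and gamut limits here, which is where the print-and-measure savings come from.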

  7. CONSUL code package application for LMFR core calculations

    Energy Technology Data Exchange (ETDEWEB)

    Chibinyaev, A.V.; Teplov, P.S.; Frolova, M.V. [RNC ' Kurchatovskiy institute' , Kurchatov sq.1, Moscow (Russian Federation)

    2008-07-01

    The CONSUL code package, designed for the calculation of reactor core characteristics, was developed at the beginning of the 1990s. The calculation of nuclear reactor core characteristics is carried out on the basis of correlated neutron, isotope and temperature distributions. The code package has generally been used for LWR core characteristics calculations. The CONSUL code package has now been adapted to calculate liquid metal fast reactors (LMFR). Comparisons with the IAEA computational test 'Evaluation of benchmark calculations on a fast power reactor core with near zero sodium void effect' and with BN-1800 test calculations are presented in the paper. The IAEA benchmark core is based on the innovative BN-800 core concept with a sodium plenum above the core. The BN-1800 core is the next development step foreseen for the Russian fast reactor concept. The comparison of the operational parameters has shown good agreement and confirms the applicability of the CONSUL code package to LMFR core calculations. (authors)

  8. General features of the neutronics design code EQUICYCLE

    International Nuclear Information System (INIS)

    Jirlow, K.

    1978-10-01

    The neutronics code EQUICYCLE has been developed and improved over a long period of time. It is especially adapted to survey-type design calculations of large fast power reactors, with particular emphasis on the nuclear parameters for a realistic equilibrium fuel cycle. Thus the code is used to evaluate the breeding performance, the power distributions and the uranium and plutonium mass balance for realistic refuelling schemes. In addition, reactivity coefficients can be calculated and the influence of burnup assessed. The code is two-dimensional and treats the reactor core in R-Z geometry. The basic ideas of the calculating scheme are successive iterative improvement of cross-section sets and flux spectra, and use of the mid-cycle flux for burning the fuel according to a specified refuelling scheme. Normally, given peak burn-ups and maximum power densities are used as boundary conditions. The code is capable of handling the unconventional, so-called heterogeneous cores. (author)

  9. On the feedback error compensation for adaptive modulation and coding scheme

    KAUST Repository

    Choi, Seyeong; Yang, Hong-Chuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme, which was previously studied under the assumption of perfect feedback channels. We quantify ...

  10. Adaptation of GRS calculation codes for Soviet reactors

    International Nuclear Information System (INIS)

    Langenbuch, S.; Petri, A.; Steinborn, J.; Stenbok, I.A.; Suslow, A.I.

    1994-01-01

    The use of ATHLET for incident calculations for WWER reactors has been tested and verified in numerous calculations. Further adaptation may be needed for the WWER-1000 plants. Coupling ATHLET with the 3D nuclear model BIPR-8 for WWER cores clearly improves studies of the influence of neutron kinetics. In the case of RBMK reactors, ATHLET calculations show that typical incidents in the complex RBMK reactors can be calculated, even though verification still has to be worked on. Results of the 3D core model QUABOX/CUBBOX-HYCA show good agreement between calculated and measured values in reactor plants. Calculations carried out to date were used to check the essential parameters influencing RBMK core behaviour, especially the dependence of the effective void reactivity on the number of control rods. (orig./HP) [de]

  11. Sparsity in Linear Predictive Coding of Speech

    DEFF Research Database (Denmark)

    Giacobello, Daniele

    of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed...... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to develop the concept...... sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given
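
    A toy rendering of the idea (ordinary least-squares LP plus residual thresholding, standing in for the compressed-sensing search described above):

        import numpy as np

        def lp_coefficients(x, order=10):
            """Short-term predictor coefficients by least squares."""
            rows = np.array([x[i - order:i][::-1]
                             for i in range(order, len(x))])
            a, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
            return a

        def sparse_excitation(x, a, keep=20):
            """Keep only the `keep` largest residual samples as excitation."""
            order = len(a)
            pred = np.array([a @ x[i - order:i][::-1]
                             for i in range(order, len(x))])
            residual = x[order:] - pred
            excitation = np.zeros_like(residual)
            idx = np.argsort(np.abs(residual))[-keep:]
            excitation[idx] = residual[idx]
            return excitation

        rng = np.random.default_rng(1)
        frame = rng.standard_normal(160)           # stand-in for a speech frame
        a = lp_coefficients(frame)
        print(np.count_nonzero(sparse_excitation(frame, a)))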

  12. VACOSS - variable coding seal system for nuclear material control

    International Nuclear Information System (INIS)

    Kennepohl, K.; Stein, G.

    1977-12-01

    VACOSS - Variable Coding Seal System - is intended to seal rooms and containers with nuclear material, nuclear instrumentation and equipment of the operator, and instrumentation and equipment of the supervisory authority. It is easy to handle, reusable and transportable, and consists of three components: 1. The seal. A fibre-optic light guide with an infrared light emitter and receiver serves as the sealing lead. The statistical treatment of the coded data entered into the seal via the adapter box guarantees an extremely high degree of protection against unauthorized access. The seal can store the data of two unauthorized openings together with the time and duration of each opening. 2. The adapter box, which is used for the input, or input and output, of data indicating seal integrity. 3. The simulation programme, located in the computing centre of the supervisory authority, which permits the date and time of an opening to be determined by decoding the seal memory data. (orig./WB) [de]

  13. Hamming Code Based Watermarking Scheme for 3D Model Verification

    Directory of Open Access Journals (Sweden)

    Jen-Tse Wang

    2014-01-01

    Due to the explosive growth of the Internet and the maturing of 3D hardware techniques, protecting 3D objects has become an increasingly important issue. In this paper, a public Hamming code based fragile watermarking technique is proposed for 3D object verification. An adaptive watermark is generated from each cover model by using the Hamming code technique. A simple least significant bit (LSB) substitution technique is employed for watermark embedding. In the extraction stage, the Hamming code based watermark can be verified by Hamming code checking, without embedding any verification information. Experimental results show that 100% of the vertices of the cover model can be watermarked, extracted, and verified. The results also show that the proposed method improves security and achieves low distortion of the stego object.
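
    A hedged sketch of the embedding side: derive Hamming parity bits from content bits of the quantized vertex coordinates and hide the resulting codeword in coordinate LSBs. The quantization and bit layout are invented for the sketch, not the paper's exact scheme:

        import numpy as np

        def hamming74_parity(d):
            """Three parity bits of the (7,4) Hamming code for 4 data bits."""
            d0, d1, d2, d3 = d
            return [d0 ^ d1 ^ d3, d0 ^ d2 ^ d3, d1 ^ d2 ^ d3]

        def embed_lsb(coords_q, bits):
            """Replace the LSB of each quantized coordinate with a watermark bit."""
            out = coords_q.copy()
            for i, b in enumerate(bits):
                out[i] = (out[i] & ~1) | b
            return out

        vertices = np.array([1023, 514, 88, 257, 640, 301, 72], dtype=np.int64)
        data_bits = [int(v >> 1) & 1 for v in vertices[:4]]  # content-derived
        watermark = data_bits + hamming74_parity(data_bits)  # 7-bit codeword
        print(embed_lsb(vertices, watermark))

    Verification then recomputes the parity bits from the extracted data bits and checks them against the embedded ones, so no side information needs to be stored.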

  14. Ultrasound strain imaging using Barker code

    Science.gov (United States)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft tissue elasticity in order to help clinicians detect lesions or cancers in tissue. In this paper, the Barker code is applied to strain imaging to improve its quality. The Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched filter output is -22 dB, which is unacceptable for ultrasound strain imaging because a high sidelobe level causes high decorrelation noise. Instead of using the conventional matched filter, we use a Wiener filter to decode the Barker-coded echo signal and suppress the range sidelobes. We also compare the performance of the Barker code and the conventional short pulse in simulations. The simulation results demonstrate that the Wiener filter performs much better than the matched filter, and that the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse at low eSNR or at great depths, due to the increased eSNR it provides.
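
    The sidelobe figure quoted above can be reproduced directly from the length-13 Barker sequence, whose autocorrelation has a peak of 13 and unit-magnitude sidelobes:

        import numpy as np

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
        acf = np.correlate(barker13, barker13, mode="full")  # matched filter
        peak = acf.max()
        sidelobe = np.abs(np.delete(acf, acf.argmax())).max()
        print(f"peak={peak}, sidelobe={sidelobe}, "
              f"level={20 * np.log10(sidelobe / peak):.1f} dB")  # about -22 dB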

  15. Face adaptation improves gender discrimination.

    Science.gov (United States)

    Yang, Hua; Shen, Jianhong; Chen, Juan; Fang, Fang

    2011-01-01

    Adaptation to a visual pattern can alter the sensitivities of neuronal populations encoding the pattern. However, the functional roles of adaptation, especially in high-level vision, are still equivocal. In the present study, we performed three experiments to investigate whether face gender adaptation could affect gender discrimination. Experiments 1 and 2 revealed that adapting to a male/female face could selectively enhance discrimination for male/female faces. Experiment 3 showed that the discrimination enhancement induced by face adaptation could transfer across a substantial change in three-dimensional face viewpoint. These results provide further evidence suggesting that, similar to low-level vision, adaptation in high-level vision could calibrate the visual system to current inputs of complex shapes (i.e., faces) and improve discrimination at the adapted characteristic. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. Learning to Adapt. Organisational Adaptation to Climate Change Impacts

    International Nuclear Information System (INIS)

    Berkhout, F.; Hertin, J.; Gann, D.M.

    2006-01-01

    Analysis of human adaptation to climate change should be based on realistic models of adaptive behaviour at the level of organisations and individuals. The paper sets out a framework for analysing adaptation to the direct and indirect impacts of climate change in business organisations, with new evidence presented from empirical research into adaptation in nine case-study companies. It argues that adaptation to climate change has many similarities with processes of organisational learning. The paper suggests that business organisations face a number of obstacles in learning how to adapt to climate change impacts, especially in relation to the weakness and ambiguity of signals about climate change and the uncertainty about benefits flowing from adaptation measures. Organisations rarely adapt 'autonomously', since their adaptive behaviour is influenced by policy and market conditions, and draws on resources external to the organisation. The paper identifies four adaptation strategies that pattern organisational adaptive behaviour.

  17. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y.V.; Zaitsev, S.I.; Tarankov, G.A. [OKB Gidropress (Russian Federation)

    1995-12-31

    A comparison of the results of calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions of cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  18. Visual communication with retinex coding.

    Science.gov (United States)

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
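
    A minimal sketch of the small-signal model described above (difference-of-Gaussian bandpass plus a crude locally adaptive gain control; the sigmas and gain rule are illustrative choices, not the paper's):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def retinex_like(img, sigma_center=1.0, sigma_surround=8.0, eps=1e-3):
            """DoG bandpass followed by local energy normalization."""
            band = (gaussian_filter(img, sigma_center)
                    - gaussian_filter(img, sigma_surround))
            local_energy = np.sqrt(gaussian_filter(band ** 2, sigma_surround))
            return band / (local_energy + eps)     # locally adaptive gain

        x = np.linspace(0.0, 1.0, 256)
        scene = np.outer(0.2 + 0.8 * x, np.ones(256))  # smooth irradiance ramp
        scene[:, 100:140] += 0.3                       # reflectance step (detail)
        out = retinex_like(scene)
        print(out.min(), out.max())

    The smooth irradiance ramp is largely suppressed by the surround subtraction while the reflectance step survives, which is the separation the abstract describes.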
