WorldWideScience

Sample records for level adaptive coding

  1. Context adaptive coding of bi-level images

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2008-01-01

    With the advent of sequential arithmetic coding, the focus of highly efficient lossless data compression is placed on modelling the data. Rissanen's Algorithm Context provided an elegant solution to universal coding with optimal convergence rate. Context based arithmetic coding laid the grounds f...
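    As an illustration of the context-modelling idea behind bi-level image coders (not Rissanen's Algorithm Context or the paper's own model), the following Python sketch estimates the adaptive code length of a binary image using a small causal template and Laplace-smoothed counts; the 4-pixel template is an arbitrary choice.

```python
import numpy as np

def adaptive_code_length(img):
    """Ideal adaptive code length (bits) of a bi-level image using a
    4-pixel causal context template and Laplace-smoothed counts."""
    h, w = img.shape
    pad = np.zeros((h + 1, w + 2), dtype=np.uint8)
    pad[1:, 1:-1] = img
    counts = {}                       # context -> (zeros seen, ones seen)
    bits = 0.0
    for y in range(h):
        for x in range(w):
            # causal template: W, NW, N, NE neighbours (illustrative choice)
            ctx = (int(pad[y + 1, x]), int(pad[y, x]),
                   int(pad[y, x + 1]), int(pad[y, x + 2]))
            c0, c1 = counts.get(ctx, (1, 1))          # Laplace prior
            s = int(img[y, x])
            bits += -np.log2((c1 if s else c0) / (c0 + c1))
            counts[ctx] = (c0 + (s == 0), c1 + (s == 1))
    return bits

rng = np.random.default_rng(0)
img = (rng.random((64, 64)) < 0.1).astype(np.uint8)   # sparse synthetic bi-level image
print(f"adaptive code length: {adaptive_code_length(img):.0f} bits for {img.size} pixels")
```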

  2. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  3. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for measuring similarity. Most of them consider only the lexical token sequence extracted from source code. In our point of view, low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself. It considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. As a case study, we focus our work on .NET programming languages with Common Intermediate Language as the low-level representation. In addition, we also incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient when compared with the standard lexical-token approach.

  4. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
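    A toy illustration of the encoder/decoder split described above (syndrome bits plus uncoded doping bits), with a brute-force search standing in for the sum-product decoder and an arbitrary small parity-check matrix rather than a real LDPC code:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 6                          # block length, number of syndrome bits
H = rng.integers(0, 2, size=(m, n))   # toy parity-check matrix (not a real LDPC)

y = rng.integers(0, 2, size=n)                 # side information, known at decoder
x = (y + (rng.random(n) < 0.1)) % 2            # source, correlated with y

# Encoder: send the syndrome of x plus a few source bits uncoded ("doping" bits).
syndrome = H @ x % 2
doped_idx = [0, 5]                             # doping positions (illustrative)
doped_val = x[doped_idx]

# Decoder: brute-force search over sequences consistent with the syndrome and
# doping bits, choosing the one closest to the side information.  This stands
# in for the sum-product decoding used in the paper.
best, best_dist = None, None
for cand in itertools.product((0, 1), repeat=n):
    c = np.array(cand)
    if np.any(H @ c % 2 != syndrome) or np.any(c[doped_idx] != doped_val):
        continue
    dist = int(np.sum(c != y))
    if best is None or dist < best_dist:
        best, best_dist = c, dist

print("decoded correctly:", bool(np.all(best == x)))
```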

  5. Adaptive decoding of convolutional codes

    Directory of Open Access Journals (Sweden)

    K. Hueske

    2007-06-01

Full Text Available Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand, the Viterbi Decoder is an optimum maximum likelihood decoder, i.e., the most probable transmitted code sequence is obtained. On the other hand, the mathematical complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction in complexity is realized by two different approaches: syndrome zero sequence deactivation and path metric equalization. The two approaches enable easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  6. Adaptive decoding of convolutional codes

    Science.gov (United States)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand, the Viterbi Decoder is an optimum maximum likelihood decoder, i.e., the most probable transmitted code sequence is obtained. On the other hand, the mathematical complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction in complexity is realized by two different approaches: syndrome zero sequence deactivation and path metric equalization. The two approaches enable easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
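    The adaptation idea, reduced to a minimal sketch: re-encode a cheap hard-decision estimate and fall back to the full Viterbi trellis search only when the parity check fails. The systematic rate-1/2 toy code below is chosen for illustration only and is not one of the codes considered in the paper.

```python
import numpy as np

G = [(1, 0, 0), (1, 1, 1)]    # systematic rate-1/2 toy code: (info bit, parity bit)

def conv_encode(bits):
    state = (0, 0)
    out = []
    for b in bits:
        reg = (b,) + state
        out += [sum(r * g for r, g in zip(reg, gen)) % 2 for gen in G]
        state = (b, state[0])
    return out

def viterbi_decode(rx):
    n_bits = len(rx) // 2
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    metric = {s: (0 if s == (0, 0) else 10**9) for s in states}
    paths = {s: [] for s in states}
    for t in range(n_bits):
        r = rx[2 * t:2 * t + 2]
        new_metric = {s: 10**9 for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            for b in (0, 1):
                reg = (b,) + s
                out = [sum(v * g for v, g in zip(reg, gen)) % 2 for gen in G]
                cost = metric[s] + sum(o != ri for o, ri in zip(out, r))
                ns = (b, s[0])
                if cost < new_metric[ns]:
                    new_metric[ns], new_paths[ns] = cost, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(states, key=metric.get)]

def adaptive_decode(rx):
    # Cheap path: take the systematic bits as the estimate and re-encode.
    # If the resulting "syndrome" (parity mismatch) is zero everywhere,
    # the trellis search is skipped entirely.
    guess = list(rx[0::2])
    if conv_encode(guess) == list(rx):
        return guess, "syndrome path (Viterbi skipped)"
    return viterbi_decode(rx), "Viterbi path"

rng = np.random.default_rng(2)
info = rng.integers(0, 2, 32).tolist()
tx = conv_encode(info)
rx = [b ^ int(rng.random() < 0.02) for b in tx]    # occasional channel bit flips
decoded, path = adaptive_decode(rx)
print(path, "| residual info-bit errors:", sum(a != b for a, b in zip(decoded, info)))
```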

  7. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  8. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Lei Ye

    2009-01-01

Full Text Available This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10^-2) and low (10^-4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.

  9. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Burr Alister

    2009-01-01

Full Text Available This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10^-2) and low (10^-4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
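    A minimal sketch of the subband-adaptive idea: each subband's SNR selects a modulation order and turbo code rate from a lookup table. The SNR thresholds below are illustrative placeholders, not the switching levels used in the paper.

```python
import numpy as np

# (min SNR in dB, modulation, bits/symbol, turbo code rate) -- illustrative thresholds
TABLE = [
    (22.0, "64QAM", 6, 1 / 2),
    (16.0, "16QAM", 4, 1 / 2),
    (12.0, "8AMPM", 3, 1 / 2),
    (7.0,  "QPSK",  2, 1 / 3),
    (2.0,  "BPSK",  1, 1 / 3),
]

def pick_mode(snr_db):
    for thr, name, bits, rate in TABLE:
        if snr_db >= thr:
            return name, bits * rate
    return "no transmission", 0.0

rng = np.random.default_rng(3)
subband_snr = 10 * np.log10(rng.exponential(10.0, size=8))  # frequency-selective fading
total = 0.0
for i, snr in enumerate(subband_snr):
    mode, eff = pick_mode(snr)
    total += eff
    print(f"subband {i}: {snr:5.1f} dB -> {mode:15s} {eff:.2f} info bits/symbol")
print(f"average spectral efficiency: {total / len(subband_snr):.2f} bits/symbol")
```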

  10. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low-density parity-check accumulate (LDPCA) codes.

  11. Adaptive RAC codes employing statistical channel evaluation ...

    African Journals Online (AJOL)

    An adaptive encoding technique using row and column array (RAC) codes employing a different number of parity columns that depends on the channel state is proposed in this paper. The trellises of the proposed adaptive codes and a statistical channel evaluation technique employing these trellises are designed and ...

  12. Adaptive Space–Time Coding Using ARQ

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    We study the energy-limited outage probability of the block space-time coding (STC)-based systems utilizing automatic repeat request (ARQ) feedback and adaptive power allocation. Taking the ARQ feedback costs into account, we derive closed

  13. Context quantization by minimum adaptive code length

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, Xiaolin

    2007-01-01

    Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....

  14. Adaptive format conversion for scalable video coding

    Science.gov (United States)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.

  15. Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Cocco Giuseppe

    2010-01-01

Full Text Available Network coding makes it possible to exploit the spatial diversity naturally present in mobile wireless networks and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it to a network context. The method consists of adapting the information rate for each receiving node according to its channel status and independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of a slightly higher complexity at the transmitter. This configuration allows rate adaptation to be performed while fully preserving the benefits of channel and network coding. We carry out an information theoretical analysis of this approach and of the one typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.

  16. Adaptable recursive binary entropy coding technique

    Science.gov (United States)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.

  17. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    Science.gov (United States)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
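    A 1-D illustration of the spring analogy described above: stiffness (tension) grows with the local solution gradient, and the equilibrium point positions satisfy a tridiagonal linear system (assembled densely here for brevity). The weighting and the shock-like test function are illustrative, not SAGE's actual formulation.

```python
import numpy as np

def adapt_grid_1d(x, f, alpha=20.0):
    """Redistribute grid points toward regions of steep gradient.

    Adjacent points are linked by 'tension springs' whose stiffness grows
    with the local solution gradient; the equilibrium positions satisfy a
    tridiagonal linear system (assembled densely here for simplicity)."""
    grad = np.abs(np.gradient(f, x))
    stiff = 1.0 + alpha * 0.5 * (grad[:-1] + grad[1:])   # one stiffness per interval
    n = len(x)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = x[0], x[-1]                            # endpoints stay fixed
    for i in range(1, n - 1):
        A[i, i - 1] = -stiff[i - 1]
        A[i, i] = stiff[i - 1] + stiff[i]
        A[i, i + 1] = -stiff[i]
    return np.linalg.solve(A, b)

x = np.linspace(0.0, 1.0, 41)
f = np.tanh(40 * (x - 0.5))          # shock-like profile
x_new = adapt_grid_1d(x, f)
print(f"min spacing: uniform grid {np.diff(x).min():.4f}, adapted grid {np.diff(x_new).min():.4f}")
```

In equilibrium the spring tension is equal in every interval, so the spacing is inversely proportional to the local stiffness, which is the usual equidistribution behaviour: points cluster where the gradient is steep.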

  18. ICAN Computer Code Adapted for Building Materials

    Science.gov (United States)

    Murthy, Pappu L. N.

    1997-01-01

The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  19. Adaptive Space–Time Coding Using ARQ

    KAUST Repository

    Makki, Behrooz

    2015-09-01

    We study the energy-limited outage probability of the block space-time coding (STC)-based systems utilizing automatic repeat request (ARQ) feedback and adaptive power allocation. Taking the ARQ feedback costs into account, we derive closed-form solutions for the energy-limited optimal power allocation and investigate the diversity gain of different STC-ARQ schemes. In addition, sufficient conditions are derived for the usefulness of ARQ in terms of energy-limited outage probability. The results show that, for a large range of feedback costs, the energy efficiency is substantially improved by the combination of ARQ and STC techniques if optimal power allocation is utilized. © 2014 IEEE.

  20. Intrinsic gain modulation and adaptive neural coding.

    Directory of Open Access Journals (Sweden)

    Sungho Hong

    2008-07-01

    Full Text Available In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate versus current (f-I curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive relationships relating the change of the gain with respect to both mean and variance with the receptive fields derived from reverse correlation on a white noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that coding properties of both these models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.
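    A toy linear/nonlinear (LN) simulation in the spirit of the abstract, assuming a simple exponential filter and a logistic gain curve (all parameters are illustrative): the spike-triggered average recovers the linear filter, and the empirically fitted slope of spike probability versus filtered input changes with the stimulus variance.

```python
import numpy as np

rng = np.random.default_rng(4)
flt = np.exp(-np.arange(20) / 5.0)          # toy receptive-field filter
flt /= np.linalg.norm(flt)

def simulate(sigma, n=200_000):
    x = rng.normal(0.0, sigma, n)                    # white-noise stimulus
    drive = np.convolve(x, flt, mode="full")[:n]     # linear stage
    p = 1.0 / (1.0 + np.exp(-(drive - 1.0) / 0.3))   # static nonlinearity (gain curve)
    spikes = rng.random(n) < 0.1 * p                 # Bernoulli spiking
    # Spike-triggered average = empirical linear filter (reverse correlation).
    idx = np.flatnonzero(spikes)
    idx = idx[idx >= len(flt)]
    sta = np.mean([x[i - len(flt) + 1:i + 1][::-1] for i in idx], axis=0)
    sta /= np.linalg.norm(sta)
    # Empirical "gain": regression slope of spike probability vs. filtered stimulus.
    slope = np.polyfit(drive, spikes.astype(float), 1)[0]
    return sta, slope

for sigma in (0.5, 1.0, 2.0):
    sta, slope = simulate(sigma)
    match = float(np.dot(sta, flt))
    print(f"sigma={sigma:3.1f}  filter match={match:+.2f}  empirical gain={slope:.3f}")
```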

  1. Adaptive Wavelet Coding Applied in a Wireless Control System.

    Science.gov (United States)

    Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O

    2017-12-13

Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  2. Adaptive Wavelet Coding Applied in a Wireless Control System

    Directory of Open Access Journals (Sweden)

    Felipe O. S. Gama

    2017-12-01

Full Text Available Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  3. Feature-level domain adaptation

    DEFF Research Database (Denmark)

    Kouw, Wouter M.; Van Der Maaten, Laurens J P; Krijthe, Jesse H.

    2016-01-01

Domain adaptation is the supervised learning setting in which the training and test data are sampled from different distributions: training data is sampled from a source domain, whilst test data is sampled from a target domain. This paper proposes and studies an approach, called feature-level domain adaptation (flda), that models the dependence between the two domains by means of a feature-level transfer model that is trained to describe the transfer from source to target domain. Subsequently, we train a domain-adapted classifier by minimizing the expected loss under the resulting transfer model. The transfer is modeled via a dropout distribution, which allows the classifier to adapt to differences in the marginal probability of features in the source and the target domain. Our experiments on several real-world problems show that flda performs on par with state-of-the-art domain-adaptation techniques.

  4. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. The algorithm increases the image compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. An adaptive model for each encoded image block dynamically estimates the probability of that block, and the decoded image block can accurately recover the encoded image according to the code book information. The adopted adaptive arithmetic coding algorithm greatly improves the image compression rate, and the results show that it is an effective compression technology.
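    A minimal sketch of the two ingredients named in the abstract, predictive coding plus a per-context adaptive probability model, reporting the ideal arithmetic-coding cost in bits instead of producing an actual bitstream; the left-neighbour predictor and coarse context are illustrative choices, not the paper's scheme.

```python
import numpy as np

def adaptive_code_length(img):
    """Left-neighbour prediction + per-context adaptive model; returns the
    ideal arithmetic-coding cost in bits (no actual bitstream is produced)."""
    counts = {}                       # context -> symbol frequency table
    bits = 0.0
    h, w = img.shape
    for y in range(h):
        left = 0
        for x in range(w):
            residual = (int(img[y, x]) - left) % 256       # predictive coding
            ctx = left // 64                               # coarse context (illustrative)
            table = counts.setdefault(ctx, np.ones(256))   # Laplace-smoothed counts
            bits += -np.log2(table[residual] / table.sum())
            table[residual] += 1                           # adapt the model
            left = int(img[y, x])
    return bits

rng = np.random.default_rng(5)
vals = np.cumsum(rng.integers(-3, 4, size=(64, 64)), axis=1) + 128
img = np.clip(vals, 0, 255).astype(np.uint8)               # smooth synthetic image
bits = adaptive_code_length(img)
print(f"{bits / img.size:.2f} bits/pixel vs 8 bits/pixel uncompressed")
```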

  5. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.

  6. A multiobjective approach to the genetic code adaptability problem.

    Science.gov (United States)

    de Oliveira, Lariza Laura; de Oliveira, Paulo S L; Tinós, Renato

    2015-02-19

The organization of the canonical code has intrigued researchers since it was first described. If we consider all codes mapping the 64 codons into 20 amino acids and one stop codon, there are more than 1.51×10^84 possible genetic codes. The main question related to the organization of the genetic code is why exactly the canonical code was selected among this huge number of possible genetic codes. Many researchers argue that the organization of the canonical code is a product of natural selection and that the code's robustness against mutations would support this hypothesis. In order to investigate the natural selection hypothesis, some researchers employ optimization algorithms to identify regions of the genetic code space where best codes, according to a given evaluation function, can be found (engineering approach). The optimization process uses only one objective to evaluate the codes, generally based on the robustness for an amino acid property. Only one objective is also employed in the statistical approach for the comparison of the canonical code with random codes. We propose a multiobjective approach where two or more objectives are considered simultaneously to evaluate the genetic codes. In order to test our hypothesis that the multiobjective approach is useful for the analysis of the genetic code adaptability, we implemented a multiobjective optimization algorithm where two objectives are simultaneously optimized. Using as objectives the robustness against mutation with the amino acid property polar requirement (objective 1) and robustness with respect to hydropathy index or molecular volume (objective 2), we found solutions closer to the canonical genetic code in terms of robustness, when compared with the results using only one objective reported by other authors. Using more objectives, more optimal solutions are obtained and, as a consequence, more information can be used to investigate the adaptability of the genetic code. The multiobjective approach

  7. Two-Level Semantics and Code Generation

    DEFF Research Database (Denmark)

    Nielson, Flemming; Nielson, Hanne Riis

    1988-01-01

A two-level denotational metalanguage that is suitable for defining the semantics of Pascal-like languages is presented. The two levels allow for an explicit distinction between computations taking place at compile-time and computations taking place at run-time. While this distinction is perhaps not absolutely necessary for describing the input-output semantics of programming languages, it is necessary when issues such as data flow analysis and code generation are considered. For an example stack-machine, the authors show how to generate code for the run-time computations and still perform the compile...

  8. Adaptive Modulation and Coding for LTE Wireless Communication

    Science.gov (United States)

    Hadi, S. S.; Tiong, T. C.

    2015-04-01

Long Term Evolution (LTE) is the new upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE is targeting to become the first global mobile phone standard despite the barrier of the different LTE frequencies and bands used in other countries. Adaptive Modulation and Coding (AMC) is used to increase the network capacity or downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for a 4×4 MIMO antenna configuration are studied. With channel state information feedback from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to the mobile wireless channel condition, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose the modulation type and forward error correction (FEC) coding rate.

  9. Satellite Media Broadcasting with Adaptive Coding and Modulation

    Directory of Open Access Journals (Sweden)

    Georgios Gardikis

    2009-01-01

Full Text Available Adaptive Coding and Modulation (ACM) is a feature incorporated into the DVB-S2 satellite specification, allowing real-time adaptation of transmission parameters according to the link conditions. Although ACM was originally designed for optimizing unicast services, this article discusses the expansion of its usage to broadcasting streams as well. For this purpose, a general cross-layer adaptation approach is proposed, along with its realization into a fully functional experimental network, and test results are presented. Finally, two case studies are analysed, assessing the gain derived by ACM in a real large-scale deployment, involving HD services provision to two different geographical areas.

  10. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

A method for efficiently coding natural images using a vector-quantized, variable-block-size transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
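    A sketch of the threshold-driven coder selection in MBC-like schemes: each 8×8 block is coded with the cheapest DCT coder (fewest retained coefficient diagonals) whose reconstruction meets a distortion threshold. The coder set, threshold, and plain coefficient truncation (no vector quantization) are illustrative simplifications.

```python
import numpy as np

N = 8
# Orthonormal 8-point DCT-II matrix.
D = np.array([[np.sqrt((1 if k == 0 else 2) / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])

def code_block(block, keep):
    """Keep only the DCT coefficients whose index sum is below `keep`."""
    c = D @ block @ D.T
    mask = np.add.outer(np.arange(N), np.arange(N)) < keep
    return D.T @ (c * mask) @ D

def mixture_block_code(img, threshold=30.0, coder_set=(1, 2, 4, 15)):
    # coder_set lists "keep" values; 15 retains every coefficient of an 8x8 block.
    out = np.zeros_like(img, dtype=float)
    choices = []
    for y in range(0, img.shape[0], N):
        for x in range(0, img.shape[1], N):
            blk = img[y:y + N, x:x + N].astype(float)
            for keep in coder_set:                       # cheapest coder first
                rec = code_block(blk, keep)
                if np.mean((rec - blk) ** 2) <= threshold or keep == coder_set[-1]:
                    break
            out[y:y + N, x:x + N] = rec
            choices.append(keep)
    return out, choices

rng = np.random.default_rng(6)
img = np.clip(np.cumsum(rng.normal(0, 2, (64, 64)), axis=1) + 128, 0, 255)
rec, choices = mixture_block_code(img)
print("coder usage (coefficient diagonals kept):",
      {k: choices.count(k) for k in sorted(set(choices))})
print(f"overall MSE: {np.mean((rec - img) ** 2):.1f}")
```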

  11. Adaptive discrete cosine transform coding algorithm for digital mammography

    Science.gov (United States)

    Baskurt, Atilla M.; Magnin, Isabelle E.; Goutte, Robert

    1992-09-01

    The need for storage, transmission, and archiving of medical images has led researchers to develop adaptive and efficient data compression techniques. Among medical images, x-ray radiographs of the breast are especially difficult to process because of their particularly low contrast and very fine structures. A block adaptive coding algorithm based on the discrete cosine transform to compress digitized mammograms is described. A homogeneous repartition of the degradation in the decoded images is obtained using a spatially adaptive threshold. This threshold depends on the coding error associated with each block of the image. The proposed method is tested on a limited number of pathological mammograms including opacities and microcalcifications. A comparative visual analysis is performed between the original and the decoded images. Finally, it is shown that data compression with rather high compression rates (11 to 26) is possible in the mammography field.

  12. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    Science.gov (United States)

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  13. Adaption of the PARCS Code for Core Design Audit Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyong Chol; Lee, Young Jin; Uhm, Jae Beop; Kim, Hyunjik [Nuclear Safety Evaluation, Daejeon (Korea, Republic of); Jeong, Hun Young; Ahn, Seunghoon; Woo, Swengwoong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2013-05-15

The eigenvalue calculation also includes quasi-static core depletion analyses. PARCS implements a variety of features and has been qualified as a regulatory audit code in conjunction with other NRC thermal-hydraulic codes such as TRACE or RELAP5. In this study, as an adaptation effort for audit applications, PARCS is applied to an audit analysis of a reload core design. The lattice physics code HELIOS is used for cross section generation, and the PARCS-HELIOS code system has been established as a core analysis tool. Calculation results have been compared over a wide spectrum of quantities such as power distribution, critical soluble boron concentration, and rod worth. A reasonable agreement between the audit calculation and the reference results has been found.

  14. Adaptation of radiation shielding code to space environment

    International Nuclear Information System (INIS)

    Okuno, Koichi; Hara, Akihisa

    1992-01-01

Recently, interest in the development of space has increased. Space development involves many problems, one of which is protection from cosmic rays. Cosmic rays are radiation of ultrahigh energy, and until now there has been no radiation shielding design code that copes with them. Therefore, a high energy radiation shielding design code for accelerators was improved to cope with the peculiarities of cosmic rays. Moreover, the radiation dose equivalent rate in a moon base with countermeasures against cosmic rays was simulated using the improved code. Covering with regolith is an important countermeasure for radiation protection, and the effect of regolith was confirmed using the improved code. Galactic cosmic rays, solar flare particles, the radiation belt, the adaptation of the radiation shielding code HERMES to the space environment, the improvement of the three-dimensional hadron cascade code HETCKFA-2 and the electromagnetic cascade code EGS 4-KFA, and the cosmic ray simulation are reported. (K.I.)

  15. The FORTRAN NALAP code adapted to a microcomputer compiler

    International Nuclear Information System (INIS)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso

    2010-01-01

The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA project (TEcnologia de Reatores Rapidos Avancados), Technology for Advanced Fast Reactors project, aimed at a space reactor application. In this work, in support of the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code on a PC was divided into several steps, mainly to remove unnecessary routines, to eliminate old statements, to introduce new ones and also to include extended precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in the literature: the normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after a transient of loss of flow; and transients protected from overpower. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 80's. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the usage of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop - CBCL - to be used as a heat/electric converter. (author)

  16. The FORTRAN NALAP code adapted to a microcomputer compiler

    Energy Technology Data Exchange (ETDEWEB)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso, E-mail: plobo.a@uol.com.b, E-mail: eduardo@ieav.cta.b, E-mail: fbraz@ieav.cta.b, E-mail: guimarae@ieav.cta.b [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA project (TEcnologia de Reatores Rapidos Avancados), Technology for Advanced Fast Reactors project, aimed at a space reactor application. In this work, in support of the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code on a PC was divided into several steps, mainly to remove unnecessary routines, to eliminate old statements, to introduce new ones and also to include extended precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in the literature: the normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after a transient of loss of flow; and transients protected from overpower. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 80's. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the usage of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop - CBCL - to be used as a heat/electric converter. (author)

  17. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.

  18. Control code for laboratory adaptive optics teaching system

    Science.gov (United States)

    Jin, Moonseob; Luder, Ryan; Sanchez, Lucas; Hart, Michael

    2017-09-01

    By sensing and compensating wavefront aberration, adaptive optics (AO) systems have proven themselves crucial in large astronomical telescopes, retinal imaging, and holographic coherent imaging. Commercial AO systems for laboratory use are now available in the market. One such is the ThorLabs AO kit built around a Boston Micromachines deformable mirror. However, there are limitations in applying these systems to research and pedagogical projects since the software is written with limited flexibility. In this paper, we describe a MATLAB-based software suite to interface with the ThorLabs AO kit by using the MATLAB Engine API and Visual Studio. The software is designed to offer complete access to the wavefront sensor data, through the various levels of processing, to the command signals to the deformable mirror and fast steering mirror. In this way, through a MATLAB GUI, an operator can experiment with every aspect of the AO system's functioning. This is particularly valuable for tests of new control algorithms as well as to support student engagement in an academic environment. We plan to make the code freely available to the community.
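    This is not the ThorLabs/MATLAB software itself, but a generic closed-loop AO sketch of the processing chain such a suite exposes: wavefront-sensor slopes are multiplied by a reconstructor (pseudo-inverse of an interaction matrix) and applied to the deformable mirror through a leaky integrator. The matrices here are random stand-ins for calibrated ones.

```python
import numpy as np

rng = np.random.default_rng(12)
n_slopes, n_actuators = 32, 12

# Interaction matrix (slopes produced per unit actuator poke) -- a random
# stand-in for a calibrated matrix; the reconstructor is its pseudo-inverse.
interaction = rng.normal(0, 1, (n_slopes, n_actuators))
reconstructor = np.linalg.pinv(interaction)

gain, leak = 0.4, 0.99
dm = np.zeros(n_actuators)
aberration = rng.normal(0, 1, n_actuators)     # static aberration in actuator space

for step in range(50):
    residual = aberration - dm                                  # wavefront after correction
    slopes = interaction @ residual + rng.normal(0, 0.05, n_slopes)  # noisy WFS measurement
    dm = leak * dm + gain * (reconstructor @ slopes)             # leaky integrator update
    if step % 10 == 0:
        print(f"step {step:2d}: residual RMS = {np.std(residual):.3f}")
```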

  19. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    Science.gov (United States)

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
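    The Cloud-in-Cell deposition mentioned above, reduced to one dimension for illustration (Nyx itself works on adaptive 3-D grids in parallel): each particle's mass is shared between its two nearest cells in proportion to overlap, with periodic wrapping.

```python
import numpy as np

def cic_deposit_1d(positions, masses, n_cells, box_size):
    """Deposit particle mass onto a uniform 1-D grid with Cloud-in-Cell weights."""
    dx = box_size / n_cells
    rho = np.zeros(n_cells)
    for x, m in zip(positions, masses):
        s = x / dx - 0.5                  # position in cell-centre coordinates
        i = int(np.floor(s))
        frac = s - i
        rho[i % n_cells] += m * (1.0 - frac) / dx      # left cell share
        rho[(i + 1) % n_cells] += m * frac / dx        # right cell share (periodic)
    return rho

rng = np.random.default_rng(7)
pos = rng.uniform(0.0, 1.0, 1000)
rho = cic_deposit_1d(pos, np.full(1000, 1e-3), n_cells=32, box_size=1.0)
print(f"total deposited mass: {rho.sum() * (1.0 / 32):.3f} (expected 1.000)")
```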

  20. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

Full Text Available Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contour in images and motion trajectory in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than the full-search, quarter-pel block matching algorithm (BMA) without the need of transmitting any overhead.
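    A sketch of the backward-adaptive least-square fit, applied here to a still image purely to illustrate the mechanics: the predictor coefficients are trained on a causal window, so the decoder can repeat the identical fit and no coefficients need to be transmitted. The neighbour set and window size are illustrative.

```python
import numpy as np

def lsp_predict(img, y, x, offsets=((0, -1), (-1, 0), (-1, -1), (-1, 1)), win=6):
    """Least-square prediction of sample (y, x) from its causal neighbours.

    Coefficients are trained on the causal window above and to the left of
    the sample (backward adaptation: the decoder can repeat the same fit)."""
    rows, targets = [], []
    for ty in range(max(1, y - win), y + 1):
        for tx in range(1, img.shape[1] - 1):
            if ty == y and tx >= x:
                break
            rows.append([img[ty + dy, tx + dx] for dy, dx in offsets])
            targets.append(img[ty, tx])
    A, b = np.array(rows, float), np.array(targets, float)
    coeff, *_ = np.linalg.lstsq(A, b, rcond=None)
    neigh = np.array([img[y + dy, x + dx] for dy, dx in offsets], float)
    return float(neigh @ coeff)

rng = np.random.default_rng(8)
img = np.clip(np.cumsum(rng.normal(0, 3, (32, 32)), axis=0) + 128, 0, 255)
y, x = 16, 16
print(f"actual {img[y, x]:.1f}  LSP prediction {lsp_predict(img, y, x):.1f}")
```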

  1. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    Science.gov (United States)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coded modulation with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves the decoding accuracy by passing soft information through the different layers, which enhances the performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.

  2. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485

  3. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Directory of Open Access Journals (Sweden)

    Mohammad Abdur Razzaque

    2014-12-01

Full Text Available Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network’s QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.

  4. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2014-12-29

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.
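    A toy of adapting network-coding redundancy to channel state: source packets are combined with random GF(2) coefficients, the number of coded packets sent is sized from the estimated erasure rate, and the receiver decodes once it holds a full-rank set. The sizing rule and parameters are illustrative, not the mechanism proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy().astype(np.int64) % 2
    r = 0
    for c in range(M.shape[1]):
        rows = np.nonzero(M[r:, c])[0]
        if rows.size == 0:
            continue
        pivot = r + rows[0]
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
        if r == M.shape[0]:
            break
    return r

def send_generation(k, loss_estimate, margin=2):
    """Adapt redundancy: send enough random GF(2) combinations that, given the
    estimated loss rate, a full-rank (decodable) set is likely to survive."""
    n_coded = int(np.ceil(k / (1.0 - loss_estimate))) + margin
    coeffs = rng.integers(0, 2, size=(n_coded, k))
    received = coeffs[rng.random(n_coded) >= loss_estimate]   # erasure channel
    ok = received.shape[0] >= k and gf2_rank(received) == k
    return n_coded, ok

k = 8
for p in (0.05, 0.2, 0.4):
    sent, ok = send_generation(k, p)
    print(f"loss≈{p:.2f}: sent {sent} coded packets for {k} source packets, decodable: {ok}")
```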

  5. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with only a slight increase in encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
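    A toy of the block classification and mode choice described above: a running-average background model, blocks classified by their foreground fraction, and a (hypothetical) mode label assigned per block. Thresholds and the synthetic scene are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(10)
H, W, B = 64, 64, 8

base = np.full((H, W), 100.0)
frames = []
for t in range(20):
    f = base + rng.normal(0, 2, (H, W))
    f[20:36, t:t + 12] += 80.0                     # a moving bright object
    frames.append(f)

# Background model: running average of past frames (a stand-in for the
# paper's background modelling).
model = np.mean(frames[:-1], axis=0)
cur = frames[-1]

def block_mode(cur_blk, bg_blk, fg_thresh=0.3):
    fg_fraction = np.mean(np.abs(cur_blk - bg_blk) > 20)
    if fg_fraction < 0.05:
        return "BRP (background reference)"       # background block
    if fg_fraction < fg_thresh:
        return "BDP (background difference)"      # hybrid block
    return "regular inter/intra"                  # foreground block

counts = {}
for y in range(0, H, B):
    for x in range(0, W, B):
        m = block_mode(cur[y:y + B, x:x + B], model[y:y + B, x:x + B])
        counts[m] = counts.get(m, 0) + 1
print(counts)
```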

  6. QR codes: next level of social media.

    Science.gov (United States)

    Gottesman, Wesley; Baum, Neil

    2013-01-01

The QR code, which is short for quick response code, system was invented in Japan for the auto industry. Its purpose was to track vehicles during manufacture; it was designed to allow high-speed component scanning. Now the scanning can be easily accomplished via cell phone, making the technology useful and within reach of your patients. There are numerous applications for QR codes in contemporary medical practice. This article describes QR codes and how they might be applied for marketing and practice management.

  7. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    Science.gov (United States)

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.

  8. An Adaptive Motion Estimation Scheme for Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2014-01-01

    Full Text Available The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves an excellent coding performance by using a hybrid block matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistics of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering both the magnitude and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is carried out with the new search patterns. Experimental results show that more than 50% of total search points are dramatically reduced compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can save up to 20.86% of the ME time while the rate-distortion performance is not compromised.
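
    The adaptive idea above can be illustrated by a small sketch that maps a predicted motion vector to a search pattern and a restricted search sub-region. The thresholds, pattern names and helper functions below are illustrative assumptions, not the values used in the paper.

    import numpy as np

    def predict_mv(neighbor_mvs):
        """Median prediction of the current block's motion vector from
        already-coded neighbours (a common, simple predictor)."""
        mvs = np.asarray(neighbor_mvs, dtype=float)
        return np.median(mvs, axis=0)

    def select_search_pattern(pred_mv, small_thresh=2.0, large_thresh=8.0):
        """Pick a search pattern and a restricted sub-region from the predicted
        MV magnitude and direction, in the spirit of the adaptive scheme above."""
        magnitude = float(np.hypot(pred_mv[0], pred_mv[1]))
        angle = float(np.degrees(np.arctan2(pred_mv[1], pred_mv[0])))
        if magnitude < small_thresh:
            return {"pattern": "small_diamond", "range": 4, "direction": None}
        if magnitude < large_thresh:
            return {"pattern": "hexagon", "range": 16, "direction": angle}
        return {"pattern": "unsymmetrical_cross", "range": 32, "direction": angle}

    if __name__ == "__main__":
        neighbours = [(1.0, 0.5), (2.0, 1.0), (1.5, 0.0)]
        print(select_search_pattern(predict_mv(neighbours)))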

  9. Climate Adaptation and Sea Level Rise

    Science.gov (United States)

    EPA supports the development and maintenance of water utility infrastructure across the country. Included in this effort is helping the nation’s water utilities anticipate, plan for, and adapt to risks from flooding, sea level rise, and storm surge.

  10. Variable Rate, Adaptive Transform Tree Coding Of Images

    Science.gov (United States)

    Pearlman, William A.

    1988-10-01

    A tree code, asymptotically optimal for stationary Gaussian sources and squared error distortion [2], is used to encode transforms of image sub-blocks. The variance spectrum of each sub-block is estimated and specified uniquely by a set of one-dimensional auto-regressive parameters. The expected distortion is set to a constant for each block and the rate is allowed to vary to meet the given level of distortion. Since the spectrum and rate are different for every block, the code tree differs for every block. Coding simulations for target block distortion of 15 and average block rate of 0.99 bits per pel (bpp) show that very good results can be obtained at high search intensities at the expense of high computational complexity. The results at the higher search intensities outperform a parallel simulation with quantization replacing tree coding. Comparative coding simulations also show that the reproduced image with variable block rate and average rate of 0.99 bpp has 2.5 dB less distortion than a similarly reproduced image with a constant block rate equal to 1.0 bpp.
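
    The constant-distortion, variable-rate behaviour described above can be illustrated with the standard reverse water-filling rule for Gaussian sources: each transform coefficient whose variance exceeds the target distortion receives rate 0.5*log2(variance/D). The sketch below uses that textbook formula and made-up variance spectra; it is not the paper's tree-coding procedure.

    import numpy as np

    def block_rate_for_target_distortion(variances, target_d):
        """Reverse water-filling: a coefficient with variance s2 gets rate
        0.5*log2(s2/D) when s2 > D and zero otherwise, so the block rate
        varies while the distortion stays (approximately) fixed."""
        variances = np.asarray(variances, dtype=float)
        rates = 0.5 * np.log2(np.maximum(variances / target_d, 1.0))
        distortions = np.minimum(variances, target_d)
        return rates.sum(), distortions.sum()

    if __name__ == "__main__":
        # Two blocks with different variance spectra get different rates
        # for the same per-block target distortion (15 spread over 64 coefficients).
        flat_block = np.full(64, 20.0)
        peaky_block = np.geomspace(400.0, 0.5, 64)
        for spectrum in (flat_block, peaky_block):
            r, d = block_rate_for_target_distortion(spectrum, target_d=15.0 / 64)
            print(f"rate = {r:.1f} bits, distortion = {d:.2f}")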

  11. Decision-level adaptation in motion perception.

    Science.gov (United States)

    Mather, George; Sharman, Rebecca J

    2015-12-01

    Prolonged exposure to visual stimuli causes a bias in observers' responses to subsequent stimuli. Such adaptation-induced biases are usually explained in terms of changes in the relative activity of sensory neurons in the visual system which respond selectively to the properties of visual stimuli. However, the bias could also be due to a shift in the observer's criterion for selecting one response rather than the alternative; adaptation at the decision level of processing rather than the sensory level. We investigated whether adaptation to implied motion is best attributed to sensory-level or decision-level bias. Three experiments sought to isolate decision factors by changing the nature of the participants' task while keeping the sensory stimulus unchanged. Results showed that adaptation-induced bias in reported stimulus direction only occurred when the participants' task involved a directional judgement, and disappeared when adaptation was measured using a non-directional task (reporting where motion was present in the display, regardless of its direction). We conclude that adaptation to implied motion is due to decision-level bias, and that a propensity towards such biases may be widespread in sensory decision-making.

  12. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.

  13. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    Science.gov (United States)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity by the way of shifting the complexity from encoder to decoder, while, at least in theory, compression performance does not degrade. In contrast with conventional video codecs, the inter-frame correlation in DVC is explored at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC is based on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance but with significantly lower complexity compared with sampling methods.
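
    A toy illustration of on-the-fly correlation tracking: after each decoding iteration the Laplacian parameter of the WZ-SI residual is re-estimated from the current soft estimate of the source. The sketch uses a simple moment-based update and a placeholder denoising step rather than expectation propagation or real syndrome decoding; all names and constants are illustrative.

    import numpy as np

    def refine_laplacian_alpha(side_info, soft_estimate, eps=1e-6):
        """Re-estimate the Laplacian rate parameter alpha of the residual
        between the decoder's current soft estimate of the WZ frame and the
        side information (ML estimate: alpha = 1 / mean|residual|)."""
        residual = np.abs(soft_estimate - side_info)
        return 1.0 / max(residual.mean(), eps)

    def on_the_fly_decoding(side_info, noisy_obs, iterations=5):
        """Alternate a (placeholder) estimation step with correlation
        re-estimation, mimicking the OTF loop of an adaptive DVC decoder."""
        alpha = 1.0                      # initial guess for the correlation
        estimate = side_info.copy()
        for _ in range(iterations):
            # Placeholder "decoding" step: pull the estimate towards the
            # observation with a weight that grows with the estimated alpha.
            w = alpha / (alpha + 1.0)
            estimate = w * noisy_obs + (1.0 - w) * side_info
            alpha = refine_laplacian_alpha(side_info, estimate)
        return estimate, alpha

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        wz = rng.normal(size=1000)
        si = wz + rng.laplace(scale=0.25, size=1000)   # correlated side information
        est, alpha = on_the_fly_decoding(si, wz + rng.normal(scale=0.05, size=1000))
        print(f"estimated Laplacian parameter alpha = {alpha:.2f}")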

  15. Multiplexed Spike Coding and Adaptation in the Thalamus

    Directory of Open Access Journals (Sweden)

    Rebecca A. Mease

    2017-05-01

    Full Text Available High-frequency “burst” clusters of spikes are a generic output pattern of many neurons. While bursting is a ubiquitous computational feature of different nervous systems across animal species, the encoding of synaptic inputs by bursts is not well understood. We find that bursting neurons in the rodent thalamus employ “multiplexing” to differentially encode low- and high-frequency stimulus features associated with either T-type calcium “low-threshold” or fast sodium spiking events, respectively, and these events adapt differently. Thus, thalamic bursts encode disparate information in three channels: (1) burst size, (2) burst onset time, and (3) precise spike timing within bursts. Strikingly, this latter “intraburst” encoding channel shows millisecond-level feature selectivity and adapts across statistical contexts to maintain stable information encoded per spike. Consequently, calcium events both encode low-frequency stimuli and, in parallel, gate a transient window for high-frequency, adaptive stimulus encoding by sodium spike timing, allowing bursts to efficiently convey fine-scale temporal information.

  16. Adaptive response after low level irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Pelevina, I I; Afanasjev, G G; JaGotlib, V; Tereschenko, D G; Tronov, V A; Serebrjany, A M [Russian Academy of Sciences, Moscow (Russian Federation). Institute of Chemical Physics

    1996-02-01

    The experiments conducted on cultured HeLa (tissue culture) cells revealed that there is a limit of dose above which adaptive response was not observed and a limit of dose below which this response was not induced. The exposure of cells in territories with an elevated radiation background leads to genome instability which results in enhanced radiosensitivity. Investigations on the blood lymphocytes of people living in contaminated regions revealed that the adaptive response was more significant in children, whereas in adults there was only a slight increase. Acute irradiation serves as a tool revealing the changes that took place in DNA during chronic low-level irradiation after the Chernobyl disaster. (author).

  17. On decoding of multi-level MPSK modulation codes

    Science.gov (United States)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically and is suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  18. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    Science.gov (United States)

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  19. Multi-stage decoding of multi-level modulation codes

    Science.gov (United States)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

    Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small, only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.

  20. Adaptation of HAMMER computer code to CYBER 170/750 computer

    International Nuclear Information System (INIS)

    Pinheiro, A.M.B.S.; Nair, R.P.K.

    1982-01-01

    The adaptation of HAMMER computer code to CYBER 170/750 computer is presented. The HAMMER code calculates cell parameters by multigroup transport theory and reactor parameters by few group diffusion theory. The auxiliary programs, the carried out modifications and the use of HAMMER system adapted to CYBER 170/750 computer are described. (M.C.K.) [pt

  1. Motion-adaptive intraframe transform coding of video signals

    NARCIS (Netherlands)

    With, de P.H.N.

    1989-01-01

    Spatial transform coding has been widely applied for image compression because of its high coding efficiency. However, in many intraframe systems, in which every TV frame is independently processed, coding of moving objects in the case of interlaced input signals is not addressed. In this paper, we

  2. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders...... are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  3. Lossy/lossless coding of bi-level images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1997-01-01

    Summary form only given. We present improvements to a general type of lossless, lossy, and refinement coding of bi-level images (Martins and Forchhammer, 1996). Loss is introduced by flipping pixels. The pixels are coded using arithmetic coding of conditional probabilities obtained using a template...... as is known from JBIG and proposed in JBIG-2 (Martins and Forchhammer). Our new state-of-the-art results are obtained using the more general free tree instead of a template. Also we introduce multiple refinement template coding. The lossy algorithm is analogous to the greedy `rate...

  4. Normalized value coding explains dynamic adaptation in the human valuation process.

    Science.gov (United States)

    Khaw, Mel W; Glimcher, Paul W; Louie, Kenway

    2017-11-28

    The notion of subjective value is central to choice theories in ecology, economics, and psychology, serving as an integrated decision variable by which options are compared. Subjective value is often assumed to be an absolute quantity, determined in a static manner by the properties of an individual option. Recent neurobiological studies, however, have shown that neural value coding dynamically adapts to the statistics of the recent reward environment, introducing an intrinsic temporal context dependence into the neural representation of value. Whether valuation exhibits this kind of dynamic adaptation at the behavioral level is unknown. Here, we show that the valuation process in human subjects adapts to the history of previous values, with current valuations varying inversely with the average value of recently observed items. The dynamics of this adaptive valuation are captured by divisive normalization, linking these temporal context effects to spatial context effects in decision making as well as spatial and temporal context effects in perception. These findings suggest that adaptation is a universal feature of neural information processing and offer a unifying explanation for contextual phenomena in fields ranging from visual psychophysics to economic choice.
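
    A minimal sketch of the divisive-normalization account summarized above: the value assigned to the current option is divided by a term that grows with the average value of recently observed items, so valuations vary inversely with the recent value context. The parameter names and constants are illustrative, not fitted values from the study.

    from collections import deque

    def normalized_value(raw_value, recent_values, sigma=1.0):
        """Divisive normalization: the current valuation shrinks as the recent
        value context gets richer, reproducing the inverse dependence on the
        history of previously observed values."""
        context = sum(recent_values) / len(recent_values) if recent_values else 0.0
        return raw_value / (sigma + context)

    if __name__ == "__main__":
        history = deque(maxlen=10)
        for context_label, context_value in [("poor context", 1.0), ("rich context", 8.0)]:
            history.clear()
            history.extend([context_value] * 10)
            # The same item is valued lower after a run of high-value items.
            print(context_label, round(normalized_value(5.0, history), 3))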

  5. An Adaptive Coding Scheme For Effective Bandwidth And Power ...

    African Journals Online (AJOL)

    Codes for communication channels are in most cases chosen on the basis of the signal to noise ratio expected on a given transmission channel. The worst possible noise condition is normally assumed in the choice of appropriate codes such that a specified minimum error shall result during transmission on the channel.

  6. Italian Adaptation of the "Autonomy and Relatedness Coding System"

    Directory of Open Access Journals (Sweden)

    Sonia Ingoglia

    2013-08-01

    Full Text Available The study examined the applicability of the observational technique developed by Allen and colleagues (Allen, Hauser, Bell, & O’Connor, 1994; Allen, Hauser, et al., 2003) to investigate the issues of autonomy and relatedness in the parent-adolescent relationship in the Italian context. Thirty-five mother-adolescent dyads participated in a task in which they discussed a family issue about which they disagreed. Adolescents were also administered a self-report measure assessing their relationship with their mothers. Mothers reported significantly higher levels of promoting and inhibiting autonomy, and promoting relatedness behaviors than their children. Results also suggested a partial behavioral reciprocity within the dyads regarding promoting and inhibiting relatedness, and inhibiting autonomy. Finally, mothers’ inhibiting autonomy behaviors correlated positively with teens’ perception of their relationship as conflicting; adolescents’ inhibiting and promoting autonomy and inhibiting relatedness behaviors correlated positively with open confrontation, rejection and coolness, while promoting relatedness behaviors correlated negatively with open confrontation, rejection and coolness. The results suggest that, for Italian mothers, behaviors linked to autonomy seem to be associated with being involved in a more negative relationship with their children, even if not characterized by open hostility, while for Italian adolescents, behaviors linked to autonomy seem to be associated with threatening the closeness of the relationship. Globally, the findings suggest that the application of this observational procedure may help our understanding of youth autonomy and relatedness development in Italy, but they leave unanswered questions regarding its appropriate adaptation and the role played by cultural differences.

  7. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    Science.gov (United States)

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to have superior robustness compared to random codes, but it is not clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds with the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal is that code. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code is in an area corresponding to a deep local minimum to be easily determined, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal fitness landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not an area of a localized deep minimum of the huge fitness landscape.
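
    The fitness-sharing idea used in the evolutionary search above can be sketched as follows: each candidate's raw fitness is divided by a niche count computed from its distances to the rest of the population, which penalizes crowded regions of the landscape and keeps the search spread across several peaks. The distance measure, sharing radius and toy fitness function below are illustrative placeholders, not the study's actual choices.

    import numpy as np

    def sharing_function(distance, sigma_share=3.0, alpha=1.0):
        """Triangular sharing kernel: 1 at distance 0, 0 beyond sigma_share."""
        return max(0.0, 1.0 - (distance / sigma_share) ** alpha)

    def shared_fitness(raw_fitness, population, index, distance):
        """Divide raw fitness by the niche count so that crowded regions of
        the landscape are penalized and the search keeps exploring."""
        niche_count = sum(sharing_function(distance(population[index], other))
                          for other in population)
        return raw_fitness / max(niche_count, 1.0)

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        # Toy "codes": integer vectors; raw fitness is a made-up positive score.
        population = [rng.integers(0, 4, size=8) for _ in range(20)]
        hamming = lambda a, b: int(np.sum(a != b))
        raw = [float(np.sum(ind) + 1) for ind in population]
        shared = [shared_fitness(raw[i], population, i, hamming)
                  for i in range(len(population))]
        print([round(s, 2) for s in shared[:5]])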

  8. Research on coding and decoding method for digital levels

    Energy Technology Data Exchange (ETDEWEB)

    Tu Lifen; Zhong Sidong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in a digital image signal, the contradiction that the field of view and image resolution restrict each other in a digital level measurement is overcome, and the geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1mm when the measuring range is between 2m and 100m, which can meet practical needs.

  9. Research on coding and decoding method for digital levels.

    Science.gov (United States)

    Tu, Li-fen; Zhong, Si-dong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in a digital image signal, the contradiction that the field of view and image resolution restrict each other in a digital level measurement is overcome, and the geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm when the measuring range is between 2 m and 100 m, which can meet practical needs.

  10. SYMBOL LEVEL DECODING FOR DUO-BINARY TURBO CODES

    Directory of Open Access Journals (Sweden)

    Yogesh Beeharry

    2017-05-01

    Full Text Available This paper investigates the performance of three different symbol level decoding algorithms for Duo-Binary Turbo codes. Explicit details of the computations involved in the three decoding techniques, and a computational complexity analysis are given. Simulation results with different couple lengths, code-rates, and QPSK modulation reveal that the symbol level decoding with bit-level information outperforms the symbol level decoding by 0.1 dB on average in the error floor region. Moreover, a complexity analysis reveals that symbol level decoding with bit-level information reduces the decoding complexity by 19.6 % in terms of the total number of computations required for each half-iteration as compared to symbol level decoding.

  11. National-level progress on adaptation

    NARCIS (Netherlands)

    Lesnikowski, A.; Ford-Robertson, J.; Biesbroek, G.R.; Berrang-Ford, L.; Heymann, S.J.

    2016-01-01

    It is increasingly evident that adaptation will figure prominently in the post-2015 United Nations climate change agreement. As adaptation obligations under the United Nations Framework Convention on Climate Change evolve, more rigorous approaches to measuring adaptation progress among parties will

  12. Adaptive Relay Activation in the Network Coding Protocols

    DEFF Research Database (Denmark)

    Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Fitzek, Frank

    2015-01-01

    State-of-the-art Network coding based routing protocols exploit the link quality information to compute the transmission rate in the intermediate nodes. However, the link quality discovery protocols are usually inaccurate, and introduce overhead in wireless mesh networks. In this paper, we presen...

  13. Adaptive antenna array algorithms and their impact on code division ...

    African Journals Online (AJOL)

    In this paper, four blind adaptive array algorithms are developed, and their performance under different test situations (e.g., an AWGN (Additive White Gaussian Noise) channel and a multipath environment) is studied. A MATLAB test bed is created to show their performance in these two test situations and an optimum one ...

  14. Use of sensitivity-information for the adaptive simulation of thermo-hydraulic system codes

    International Nuclear Information System (INIS)

    Kerner, Alexander M.

    2011-01-01

    Within the scope of this thesis, the development of methods for online adaptation of dynamic plant simulations of a thermal-hydraulic system code to measurement data is described. The described approaches are mainly based on the use of sensitivity information in different areas: statistical sensitivity measures are used for the identification of the parameters to be adapted, and online sensitivities for the parameter adjustment itself. For the parameter adjustment, the method of a 'system-adapted heuristic adaptation with partial separation' (SAHAT) was developed, which combines certain variants of parameter estimation and control with supporting procedures to solve the basic problems. The applicability of the methods is shown by adaptive simulations of a PKL-III experiment and by selected transients in a nuclear power plant. Finally, the main perspectives for the application of a tracking simulator to a system code are identified.

  15. An Automatic Instruction-Level Parallelization of Machine Code

    Directory of Open Access Journals (Sweden)

    MARINKOVIC, V.

    2018-02-01

    Full Text Available Prevailing multicores and novel manycores have made a great challenge of the modern day - the parallelization of embedded software that is still written as sequential. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level as well as on the validation of this approach. A novel instruction-level parallelization algorithm for assembly code is developed, which uses the register names after SSA to find independent blocks of code and then schedules the independent blocks using METIS to achieve good load balance. The sequential consistency is verified and the validation is done by measuring the program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g., MIPS, MicroBlaze, etc.). In particular, for 16 cores, the average speedup is 7.92x, while in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as the basis for further optimizations, as the back-end of a compiler, or as the code parallelization tool for an embedded system.
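
    A highly simplified sketch of the grouping and scheduling steps described above: blocks that share no register names (after SSA) are merged into independent groups, which are then distributed over cores. The sketch uses a greedy longest-processing-time heuristic in place of METIS, and all data structures, names and costs are illustrative.

    def independent_blocks(blocks):
        """Group blocks that share SSA register names; blocks touching a common
        register end up in the same group (illustrative union-find)."""
        parent = list(range(len(blocks)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        reg_owner = {}
        for i, regs in enumerate(blocks):
            for r in regs:
                if r in reg_owner:
                    parent[find(i)] = find(reg_owner[r])
                else:
                    reg_owner[r] = i
        groups = {}
        for i in range(len(blocks)):
            groups.setdefault(find(i), []).append(i)
        return list(groups.values())

    def schedule(groups, costs, n_cores):
        """Greedy load balancing (LPT): assign each independent group to the
        currently least-loaded core."""
        loads = [0.0] * n_cores
        assignment = [[] for _ in range(n_cores)]
        for g in sorted(groups, key=lambda g: -sum(costs[i] for i in g)):
            core = loads.index(min(loads))
            assignment[core].append(g)
            loads[core] += sum(costs[i] for i in g)
        return assignment, loads

    if __name__ == "__main__":
        blocks = [{"r1", "r2"}, {"r3"}, {"r2", "r4"}, {"r5"}, {"r6"}]
        costs = [10, 7, 3, 8, 2]
        print(schedule(independent_blocks(blocks), costs, n_cores=2))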

  16. Adaptation of GRS calculation codes for Soviet reactors

    International Nuclear Information System (INIS)

    Langenbuch, S.; Petri, A.; Steinborn, J.; Stenbok, I.A.; Suslow, A.I.

    1994-01-01

    The use of ATHLET for incident calculations of WWER reactors has been tested and verified in numerous calculations. Further adaptation may be needed for the WWER 1000 plants. Coupling ATHLET with the 3D nuclear model BIPR-8 for WWER cores clearly improves studies of the influence of neutron kinetics. In the case of RBMK reactors, ATHLET calculations show that typical incidents in the complex RBMK reactors can be calculated, even though verification still has to be worked on. Results of the 3D core model QUABOX/CUBBOX-HYCA show good correlation of calculated and measured values in reactor plants. Calculations carried out to date were used to check essential parameters influencing RBMK core behaviour, especially the dependence of the effective void reactivity on the number of control rods. (orig./HP) [de

  17. Adaptive modeling of sky for video processing and coding applications

    NARCIS (Netherlands)

    Zafarifar, B.; With, de P.H.N.; Lagendijk, R.L.; Weber, Jos H.; Berg, van den A.F.M.

    2006-01-01

    Video content analysis for still- and moving images can be used for various applications, such as high-level semantic-driven operations or pixel-level contentdependent image manipulation. Within video content analysis, sky regions of an image form visually important objects, for which interesting

  18. Coding and decoding with adapting neurons: a population approach to the peri-stimulus time histogram.

    Science.gov (United States)

    Naud, Richard; Gerstner, Wulfram

    2012-01-01

    The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a 'quasi-renewal equation' which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.

  19. An adaptive mode-driven spatiotemporal motion vector prediction for wavelet video coding

    Science.gov (United States)

    Zhao, Fan; Liu, Guizhong; Qi, Yong

    2010-07-01

    The three-dimensional subband/wavelet codecs use 5/3 filters rather than Haar filters for the motion-compensated temporal filtering (MCTF) to improve the coding gain. In order to curb the increased motion vector rate, an adaptive motion-mode-driven spatiotemporal motion vector prediction (AMDST-MVP) scheme is proposed. First, by making use of the direction histograms of the four motion vector fields resulting from the initial spatial motion vector prediction (S-MVP), the motion mode of the current GOP is determined according to whether fast or complex motion exists in the current GOP. The GOP-level MVP scheme is thereby determined to be either the S-MVP or the AMDST-MVP, where the AMDST-MVP is the combination of the S-MVP and temporal MVP (T-MVP). If the latter is adopted, the motion vector difference (MVD) between the neighboring MV fields and the S-MVP-resulting MV of the current block is employed to decide whether or not the MV of the co-located block in the previous frame is used for predicting the current block. Experimental results show that the AMDST-MVP can not only improve the coding efficiency but also reduce the computational complexity.

  20. Adaptive variable-length coding for efficient compression of spacecraft television data.

    Science.gov (United States)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
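
    The per-block code selection idea can be sketched as follows: each block of 21 mapped prediction differences is encoded with whichever of three candidate codes yields the fewest bits, and a 2-bit code identifier is sent alongside. The candidate codes here are Rice/Golomb codes with different parameters, an assumption made for illustration rather than the actual code set of the original system.

    def map_signed(d):
        """Map signed prediction differences to non-negative integers
        (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...)."""
        return 2 * d if d >= 0 else -2 * d - 1

    def rice_length(value, k):
        """Length in bits of a Rice codeword with parameter k."""
        return (value >> k) + 1 + k

    def encode_block(differences, candidate_ks=(0, 1, 2)):
        """Choose, per block, the candidate code that minimizes the bit count,
        and return (chosen k, total bits including a 2-bit code identifier)."""
        mapped = [map_signed(d) for d in differences]
        best_k = min(candidate_ks,
                     key=lambda k: sum(rice_length(v, k) for v in mapped))
        bits = 2 + sum(rice_length(v, best_k) for v in mapped)
        return best_k, bits

    if __name__ == "__main__":
        smooth_block = [0, 1, -1, 0, 0, 1, 0, -1, 0, 0, 1, 0, 0, -1, 0, 1, 0, 0, -1, 0, 1]
        busy_block = [5, -7, 3, 8, -2, 6, -9, 4, 7, -5, 3, -6, 8, -4, 2, 9, -3, 5, -8, 6, -2]
        for block in (smooth_block, busy_block):
            print(encode_block(block))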

  1. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    Science.gov (United States)

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization-multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, which is enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
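
    The control-plane adaptation described above reduces, in essence, to a threshold rule: the measured OSNR is mapped to the highest-rate LDPC code whose decoding threshold it still clears with some margin. The OSNR thresholds in the sketch below are purely illustrative placeholders, not values from the experiment.

    # Candidate code rates from the experiment; the OSNR thresholds (dB) that
    # each rate is assumed to require are illustrative, not measured values.
    CODE_TABLE = [
        (0.80, 22.0),   # highest rate, needs the cleanest channel
        (0.75, 19.0),
        (0.70, 16.0),   # most robust, lowest rate
    ]

    def select_code_rate(measured_osnr_db, margin_db=0.5):
        """Return the highest code rate whose (assumed) OSNR threshold is met
        with the given margin; fall back to the most robust code otherwise."""
        for rate, threshold in CODE_TABLE:
            if measured_osnr_db >= threshold + margin_db:
                return rate
        return CODE_TABLE[-1][0]

    if __name__ == "__main__":
        for osnr in (24.0, 20.0, 17.0, 14.0):
            print(f"OSNR {osnr:4.1f} dB -> LDPC rate {select_code_rate(osnr)}")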

  2. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females. © 2013 Elsevier Ltd. All rights reserved.

  3. Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping

    NARCIS (Netherlands)

    Á. Piedrafita (Álvaro); J.M. Renes (Joseph)

    2017-01-01

    textabstractWe construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve

  4. Supporting Dynamic Adaptive Streaming over HTTP in Wireless Meshed Networks using Random Linear Network Coding

    DEFF Research Database (Denmark)

    Hundebøll, Martin; Pedersen, Morten Videbæk; Roetter, Daniel Enrique Lucani

    2014-01-01

    This work studies the potential and impact of the FRANC network coding protocol for delivering high quality Dynamic Adaptive Streaming over HTTP (DASH) in wireless networks. Although DASH aims to tailor the video quality rate based on the available throughput to the destination, it relies...

  5. Adaptive under relaxation factor of MATRA code for the efficient whole core analysis

    International Nuclear Information System (INIS)

    Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H.

    2013-01-01

    Such nonlinearities are handled in the MATRA code using an outer iteration with a Picard scheme. The Picard scheme involves successive updating of the coefficient matrix based on previously calculated values. The scheme is a simple and effective method for the nonlinear problem, but its effectiveness greatly depends on the under-relaxation capability. Accuracy and speed of calculation depend very sensitively on the under-relaxation factor used in the outer iteration that updates the axial mass flow via the continuity equation. The under-relaxation factor in MATRA is generally used with a fixed, empirically determined value. Adapting the under-relaxation factor during the outer iteration is expected to improve the calculation effectiveness of the MATRA code compared with calculation using a fixed under-relaxation factor. The present study describes the implementation of adaptive under-relaxation within the subchannel code MATRA. Picard iterations with adaptive under-relaxation can accelerate the convergence for mass conservation in the subchannel code MATRA. The most efficient approach for adaptive under-relaxation appears to be very problem-dependent
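
    The general idea can be illustrated on a toy scalar fixed-point problem: the under-relaxation factor of a Picard iteration is increased while the residual keeps shrinking and reduced when it grows. This is only a generic sketch of adaptive under-relaxation, not the MATRA implementation, and the growth/damping constants are arbitrary.

    import math

    def picard_adaptive(g, x0, omega=0.5, tol=1e-10, max_iter=200):
        """Solve x = g(x) with Picard iteration and an adaptively tuned
        under-relaxation factor omega (grown on success, shrunk on divergence)."""
        x = x0
        prev_res = float("inf")
        for it in range(max_iter):
            x_new = (1.0 - omega) * x + omega * g(x)
            res = abs(x_new - x)
            if res < tol:
                return x_new, it, omega
            # Adapt omega: accelerate while converging, damp when the residual grows.
            omega = min(1.0, omega * 1.1) if res < prev_res else max(0.05, omega * 0.5)
            prev_res = res
            x = x_new
        return x, max_iter, omega

    if __name__ == "__main__":
        # A simple fixed-point problem, x = cos(x) + 0.5*x.
        g = lambda x: math.cos(x) + 0.5 * x
        print(picard_adaptive(g, x0=0.0))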

  6. Improvement of level-1 PSA computer code package

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.

    1997-07-01

    This year is the fifth (final) year of phase-I of the Government-sponsored Mid- and Long-term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The improvement of level-1 PSA Computer Codes', is divided into two main activities: (1) improvement of level-1 PSA methodology, (2) development of an applications methodology of PSA techniques for the operation and maintenance of nuclear power plants. The level-1 PSA code KIRAP is converted to the PC-Windows environment. For the improvement of efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) is developed considering dependency among source data. A computer program which handles dependency among data sources is also developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods are reviewed and a CCF database is established. Impact vectors can be estimated from this CCF database. A computer code, called MPRIDP, which handles the CCF database, is also developed. A CCF analysis reflecting plant-specific defensive strategy against CCF events is also performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique is applied to review the feasibility study of on-line maintenance and to the prioritization of in-service testing (IST) of motor-operated valves (MOVs). Finally, the root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies are adopted and applied to the improvement of reliability of emergency diesel generators (EDGs) of nuclear power plants. To help RCA and RCM analyses, two software programs are developed, which are EPIS and RAM Pro. (author). 129 refs., 20 tabs., 60 figs.

  7. Improvement of level-1 PSA computer code package

    International Nuclear Information System (INIS)

    Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.

    1997-07-01

    This year is the fifth (final) year of phase-I of the Government-sponsored Mid- and Long-term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The improvement of level-1 PSA Computer Codes', is divided into two main activities: 1) improvement of level-1 PSA methodology, 2) development of an applications methodology of PSA techniques for the operation and maintenance of nuclear power plants. The level-1 PSA code KIRAP is converted to the PC-Windows environment. For the improvement of efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) is developed considering dependency among source data. A computer program which handles dependency among data sources is also developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods are reviewed and a CCF database is established. Impact vectors can be estimated from this CCF database. A computer code, called MPRIDP, which handles the CCF database, is also developed. A CCF analysis reflecting plant-specific defensive strategy against CCF events is also performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique is applied to review the feasibility study of on-line maintenance and to the prioritization of in-service testing (IST) of motor-operated valves (MOVs). Finally, the root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies are adopted and applied to the improvement of reliability of emergency diesel generators (EDGs) of nuclear power plants. To help RCA and RCM analyses, two software programs are developed, which are EPIS and RAM Pro. (author). 129 refs., 20 tabs., 60 figs

  8. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to some other state-of-the-art BTC variants. So it is desirable for real-time image compression applications.
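
    For reference, the AMBTC step that the adaptive scheme builds on can be sketched as follows: each block is reduced to a bit plane plus two reconstruction levels computed from the pixels above and below the block mean. The block size and helper names are illustrative.

    import numpy as np

    def ambtc_encode(block):
        """Absolute moment block truncation coding of one block: returns the
        bit plane and the low/high reconstruction levels."""
        mean = block.mean()
        bit_plane = block >= mean
        high = block[bit_plane].mean() if bit_plane.any() else mean
        low = block[~bit_plane].mean() if (~bit_plane).any() else mean
        return bit_plane, float(low), float(high)

    def ambtc_decode(bit_plane, low, high):
        """Rebuild the block from the bit plane and the two levels."""
        return np.where(bit_plane, high, low)

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        block = rng.integers(0, 256, (4, 4)).astype(float)
        plane, low, high = ambtc_encode(block)
        rec = ambtc_decode(plane, low, high)
        mse = float(((block - rec) ** 2).mean())
        print(f"low={low:.1f}, high={high:.1f}, MSE={mse:.1f}")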

  9. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  10. The Calculation of Flooding Level using CFX Code

    International Nuclear Information System (INIS)

    Oh, Seo Bin; Kim, Keon Yeop; Lee, Hyung Ho

    2015-01-01

    The plant design should consider internal flooding by postulated pipe ruptures, component failures, actuation of spray systems, and improper system alignment. The flooding causes failure of safety-related equipment and affects the integrity of the structure. The safety-related equipment should be installed above the flood level for protection against flooding effects. Conservative estimates of the flood level are important when a DBA occurs. The flooding level can be calculated simply by applying Bernoulli's equation. However, in this study, a realistic calculation is performed with the ANSYS CFX code. In the calculation with CFX, air-core vortex phenomena and turbulent flow can be simulated, which cannot be calculated analytically. The flooding level is evaluated by analytical calculation and CFX analysis for an assumed condition. The flood level is calculated as 0.71 m and 1.1 m analytically and with CFX simulation, respectively. Comparing the analytical calculation and the simulation, they are similar, but the analytical calculation is not conservative. There are many factors reducing the drainage capacity, such as air-core vortexing, intake of air, and turbulent flow. Therefore, in the case of flood level evaluation by analytical calculation, a sufficient safety margin should be considered
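
    The analytical estimate mentioned above can be illustrated by balancing a constant inflow against orifice drainage from Bernoulli's equation, Q = Cd*A*sqrt(2*g*h), and solving for the steady-state water level h. The numbers below are illustrative and do not reproduce the paper's case; effects such as air-core vortexing would reduce the effective drainage and raise the level.

    def steady_flood_level(inflow_m3s, drain_area_m2, discharge_coeff=0.6, g=9.81):
        """Water level at which orifice drainage (Bernoulli) balances the inflow:
        Q_in = Cd * A * sqrt(2 * g * h)  =>  h = (Q_in / (Cd * A))**2 / (2 * g)."""
        velocity_head = inflow_m3s / (discharge_coeff * drain_area_m2)
        return velocity_head ** 2 / (2.0 * g)

    if __name__ == "__main__":
        # Illustrative case: 0.2 m^3/s of break/spray flow draining through a
        # 0.05 m^2 floor drain.
        print(f"steady-state level = {steady_flood_level(0.2, 0.05):.2f} m")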

  11. Coastal Sea Levels, Impacts, and Adaptation

    Directory of Open Access Journals (Sweden)

    Thomas Wahl

    2018-02-01

    Full Text Available Sea-level rise (SLR) poses a great threat to approximately 10% of the world’s population residing in low-elevation coastal zones (i.e., land located up to 10 m of present-day mean sea-level (MSL[...

  12. Adaptation in Coding by Large Populations of Neurons in the Retina

    Science.gov (United States)

    Ioffe, Mark L.

    A comprehensive theory of neural computation requires an understanding of the statistical properties of the neural population code. The focus of this work is the experimental study and theoretical analysis of the statistical properties of neural activity in the tiger salamander retina. This is an accessible yet complex system, for which we control the visual input and record from a substantial portion--greater than a half--of the ganglion cell population generating the spiking output. Our experiments probe adaptation of the retina to visual statistics: a central feature of sensory systems which have to adjust their limited dynamic range to a far larger space of possible inputs. In Chapter 1 we place our work in context with a brief overview of the relevant background. In Chapter 2 we describe the experimental methodology of recording from 100+ ganglion cells in the tiger salamander retina. In Chapter 3 we first present the measurements of adaptation of individual cells to changes in stimulation statistics and then investigate whether pairwise correlations in fluctuations of ganglion cell activity change across different stimulation conditions. We then transition to a study of the population-level probability distribution of the retinal response captured with maximum-entropy models. Convergence of the model inference is presented in Chapter 4. In Chapter 5 we first test the empirical presence of a phase transition in such models fitting the retinal response to different experimental conditions, and then proceed to develop other characterizations which are sensitive to complexity in the interaction matrix. This includes an analysis of the dynamics of sampling at finite temperature, which demonstrates a range of subtle attractor-like properties in the energy landscape. These are largely conserved when ambient illumination is varied 1000-fold, a result not necessarily apparent from the measured low-order statistics of the distribution. Our results form a consistent

  13. Overall simulation of a HTGR plant with the gas adapted MANTA code

    International Nuclear Information System (INIS)

    Emmanuel Jouet; Dominique Petit; Robert Martin

    2005-01-01

    Full text of publication follows: AREVA's subsidiary Framatome ANP is developing a Very High Temperature Reactor nuclear heat source that can be used for electricity generation as well as cogeneration including hydrogen production. The selected product has an indirect cycle architecture which is easily adapted to all possible uses of the nuclear heat source. The coupling to the applications is implemented through an intermediate heat exchanger. The system code chosen to calculate the steady-state and transient behaviour of the plant is based on the MANTA code. The flexible and modular MANTA code, originally a system code for all non-LOCA PWR plant transients, has been the subject of new developments to simulate all forced-convection transients of a nuclear plant with a gas-cooled high-temperature reactor, including specific core thermal-hydraulic and neutronic modelizations, gas and water/steam turbomachinery, and the control structure. The gas-adapted MANTA code version is now able to model a complete HTGR plant with a direct Brayton cycle as well as indirect cycles. To validate these new developments, a modelization with the MANTA code of a real plant with a direct Brayton cycle has been performed, and steady states and transients were compared with recorded thermal-hydraulic measurements. Finally, a comparison with the RELAP5 code has been done regarding transient calculations of the AREVA indirect cycle HTR project plant. Moreover, to improve user-friendliness so that MANTA can be used as a system conception and design optimization tool as well as a plant simulation tool, a man-machine interface is available. Acronyms: MANTA Modular Advanced Neutronic and Thermal hydraulic Analysis; HTGR High Temperature Gas-Cooled Reactor. (authors)

  14. Analysis of ASTEC code adaptability to severe accident simulation for CANDU type reactors

    International Nuclear Information System (INIS)

    Constantin, Marin; Rizoiu, Andrei

    2008-01-01

    In order to prepare the adaptation of the ASTEC code to CANDU NPP severe accident analysis, two kinds of activities were performed: analyses of the ASTEC modules from the point of view of models and options, followed by exploratory CANDU calculations for the appropriate modules/models; and preparation of the specifications for the ASTEC adaptation to CANDU NPPs. The paper is structured in three parts: a comparison of PWR and CANDU concepts (from the point of view of severe accident phenomena); exploratory calculations with some ASTEC modules - SOPHAEROS, CPA, IODE, CESAR, DIVA - for problems specific to CANDU-type reactors; and an analysis of development needs (algorithms, methods, modules). (authors)

  15. Adaptive Multi-Layered Space-Time Block Coded Systems in Wireless Environments

    KAUST Repository

    Al-Ghadhban, Samir

    2014-12-23

    © 2014, Springer Science+Business Media New York. Multi-layered space-time block coded systems (MLSTBC) strike a balance between spatial multiplexing and transmit diversity. In this paper, we analyze the block error rate performance of MLSTBC. In addition, we propose adaptive MLSTBC schemes that are capable of accommodating the channel signal-to-noise ratio variation of wireless systems by near-instantaneously adapting the uplink transmission configuration. The main results demonstrate that significant effective throughput improvements can be achieved while maintaining a certain target bit error rate.

  16. Multi-level trellis coded modulation and multi-stage decoding

    Science.gov (United States)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  17. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-09-01

    Full Text Available The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce the FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time adaptive manner exploiting the information from the FPA compressive measurements. Extensive simulations show the attained improvement in the quality of the reconstructed images when GCA are employed. In addition, a comparison between traditional coded apertures and GCA is carried out with respect to noise tolerance.

  18. New adaptive differencing strategy in the PENTRAN 3-d parallel Sn code

    International Nuclear Information System (INIS)

    Sjoden, G.E.; Haghighat, A.

    1996-01-01

    It is known that three-dimensional (3-D) discrete ordinates (Sn) transport problems require an immense amount of storage and computational effort to solve. For this reason, parallel codes that offer a capability to completely decompose the angular, energy, and spatial domains among a distributed network of processors are required. One such code recently developed is PENTRAN, which iteratively solves 3-D multi-group, anisotropic Sn problems on distributed-memory platforms, such as the IBM-SP2. Because large problems typically contain several different material zones with various properties, available differencing schemes should automatically adapt to the transport physics in each material zone. To minimize the memory and message-passing overhead required for massively parallel Sn applications, available differencing schemes in an adaptive strategy should also offer reasonable accuracy and positivity, yet require only the zeroth spatial moment of the transport equation; differencing schemes based on higher spatial moments, in spite of their greater accuracy, require at least twice the amount of storage and communication cost for implementation in a massively parallel transport code. This paper discusses a new adaptive differencing strategy that uses increasingly accurate schemes with low parallel memory and communication overhead. This strategy, implemented in PENTRAN, includes a new scheme, exponential directional averaged (EDA) differencing.

  19. Block-based wavelet transform coding of mammograms with region-adaptive quantization

    Science.gov (United States)

    Moon, Nam Su; Song, Jun S.; Kwon, Musik; Kim, JongHyo; Lee, ChoongWoong

    1998-06-01

    To achieve both a high compression ratio and information preservation, combining segmentation with a lossy compression scheme is an efficient approach. Microcalcification in mammograms is one of the most significant signs of early-stage breast cancer. Therefore, in coding, detection and segmentation of microcalcifications enable us to preserve them well by allocating more bits to them than to other regions. Segmentation of microcalcifications is performed both in the spatial domain and in the wavelet transform domain. A peak-error-controllable quantization step, which is designed off-line, is suitable for medical image compression. For region-adaptive quantization, block-based wavelet transform coding is adopted and different peak-error-constrained quantizers are applied to blocks according to the segmentation result. In view of the preservation of microcalcifications, the proposed coding scheme shows better performance than JPEG.
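
    A minimal sketch of the region-adaptive quantization idea, assuming a simple uniform quantizer whose step size depends on whether a block intersects the segmented region of interest; the block size and the step values are illustrative, not those of the paper.

    import numpy as np

    def quantize_blocks(coeffs, roi_mask, block=8, roi_step=2.0, bg_step=16.0):
        # Uniform quantization of transform coefficients with a finer step inside blocks
        # flagged as region of interest (e.g. microcalcifications); a uniform step q
        # bounds the peak reconstruction error by q/2.
        out = np.empty_like(coeffs, dtype=float)
        h, w = coeffs.shape
        for y in range(0, h, block):
            for x in range(0, w, block):
                tile = coeffs[y:y+block, x:x+block]
                # A block is treated as ROI if any segmented pixel falls inside it.
                step = roi_step if roi_mask[y:y+block, x:x+block].any() else bg_step
                out[y:y+block, x:x+block] = np.round(tile / step) * step
        return out

    # Toy usage with a synthetic 32x32 coefficient array and a small ROI.
    rng = np.random.default_rng(1)
    coeffs = rng.normal(0, 30, size=(32, 32))
    roi = np.zeros((32, 32), dtype=bool)
    roi[4:8, 20:26] = True
    rec = quantize_blocks(coeffs, roi)
    print("max error in ROI blocks:", np.abs(coeffs - rec)[0:8, 16:32].max())
    print("max error elsewhere   :", np.abs(coeffs - rec)[16:, :16].max())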

  20. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    Science.gov (United States)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable block-size transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is developed further than the algorithms currently used in the literature and can be used to achieve more accurate bit assignments. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
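
    The threshold-driven variable block-size selection can be illustrated with a small quadtree-style sketch: a block is kept whole if a crude approximation (here its mean, standing in for a low-rate transform code) meets the peak-distortion threshold, and is split into four sub-blocks otherwise. All names, sizes and thresholds are illustrative assumptions, not the dissertation's coders.

    import numpy as np

    def partition(img, y, x, size, max_err, min_block=4, blocks=None):
        # Recursively choose block sizes: keep a large block if coding it crudely stays
        # below the peak-distortion threshold, otherwise split it into four sub-blocks.
        if blocks is None:
            blocks = []
        tile = img[y:y+size, x:x+size]
        peak_err = np.abs(tile - tile.mean()).max()
        if peak_err <= max_err or size <= min_block:
            blocks.append((y, x, size))
            return blocks
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                partition(img, y + dy, x + dx, half, max_err, min_block, blocks)
        return blocks

    # Toy usage: a smooth ramp with one "busy" corner gets small blocks only where needed.
    img = np.tile(np.linspace(0, 10, 64), (64, 1))
    img[:16, :16] += np.random.default_rng(2).normal(0, 20, (16, 16))
    blocks = partition(img, 0, 0, 64, max_err=5.0)
    print(len(blocks), "blocks; sizes used:", sorted({s for _, _, s in blocks}))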

  1. Adaptation of Zerotrees Using Signed Binary Digit Representations for 3D Image Coding

    Directory of Open Access Journals (Sweden)

    Mailhes Corinne

    2007-01-01

    Full Text Available Zerotrees of wavelet coefficients have shown good adaptability for the compression of three-dimensional images. EZW, the original algorithm using zerotrees, shows good performance and was successfully adapted to 3D image compression. This paper focuses on the adaptation of EZW to the compression of hyperspectral images. The subordinate pass is suppressed to remove the need to keep the significant pixels in memory. To compensate for the loss due to this removal, signed binary digit representations are used to increase the efficiency of the zerotrees. Contextual arithmetic coding with very limited contexts is also used. Finally, we show that this simplified version of 3D-EZW performs almost as well as the original one.

  2. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    International Nuclear Information System (INIS)

    Lee, Jun; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-01-01

    We propose a signal processing technique using an LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed by introducing a precoder that admits level detection at the receiver end and by modifying the likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can help improve the performance of conventional turbo-like systems.

  3. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jun E-mail: leejun28@sait.samsung.co.kr; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-05-01

    We propose a signal processing technique using an LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed by introducing a precoder that admits level detection at the receiver end and by modifying the likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can help improve the performance of conventional turbo-like systems.

  4. Computer codes for level 1 probabilistic safety assessment

    International Nuclear Information System (INIS)

    1990-06-01

    Probabilistic Safety Assessment (PSA) entails several laborious tasks suited to computer code assistance. This guide identifies these tasks, presents guidelines for selecting and utilizing computer codes in the conduct of the PSA tasks and for the use of PSA results in safety management, and provides information on available codes suggested or applied in performing PSA for nuclear power plants. The guidance is intended for use by nuclear power plant system engineers, safety and operating personnel, and regulators. Large efforts are being made today to provide PC-based software systems and processed PSA information in a way that enables their use as a safety management tool by overall nuclear power plant management. Guidelines on the characteristics of software needed by management, so that software meeting their specific needs can be prepared, are also provided. Most of these computer codes are also applicable to PSA of other industrial facilities. The scope of this document is limited to computer codes used for the treatment of internal events. It does not address codes available mainly for the analysis of external events (e.g. seismic analysis) or flood and fire analysis. Codes discussed in the document are those used for probabilistic rather than phenomenological modelling. It should also be appreciated that these guidelines are not intended to lead the user to the selection of one specific code; they simply provide criteria for the selection. Refs and tabs

  5. Evidence of translation efficiency adaptation of the coding regions of the bacteriophage lambda.

    Science.gov (United States)

    Goz, Eli; Mioduser, Oriah; Diament, Alon; Tuller, Tamir

    2017-08-01

    Deciphering the way gene expression regulatory aspects are encoded in viral genomes is a challenging mission with ramifications for all biomedical disciplines. Here, we aimed to understand how evolution shapes the bacteriophage lambda genes by performing a high-resolution analysis of ribosomal profiling data and gene-expression-related synonymous/silent information encoded in bacteriophage coding regions. We demonstrated evidence of selection for distinct compositions of synonymous codons in early and late viral genes related to the adaptation of translation efficiency to different bacteriophage developmental stages. Specifically, we showed that evolution of viral coding regions is driven, among others, by selection for codons with higher decoding rates; during the initial/progressive stages of infection the decoding rates in early/late genes were found to be superior to those in late/early genes, respectively. Moreover, we argued that selection for translation efficiency could be partially explained by adaptation to the Escherichia coli tRNA pool and the fact that it can change during the bacteriophage life cycle. An analysis of additional aspects related to the expression of viral genes, such as mRNA folding and more complex/longer regulatory signals in the coding regions, is also reported. The reported conclusions are likely to be relevant to additional viruses as well. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  6. Design and Analysis of Adaptive Message Coding on LDPC Decoder with Faulty Storage

    Directory of Open Access Journals (Sweden)

    Guangjun Ge

    2018-01-01

    Full Text Available Unreliable message storage severely degrades the performance of LDPC decoders. This paper discusses the impact of message errors on LDPC decoders and schemes for improving their robustness. Firstly, we develop a discrete density evolution analysis for faulty LDPC decoders, which indicates that protecting the sign bits of messages is sufficient for finite-precision LDPC decoders. Secondly, we analyze the effects of the quantization precision loss incurred by static sign-bit protection and propose an embedded dynamic coding scheme that adaptively employs the least significant bits (LSBs) to protect the sign bits. Thirdly, we give a Hamming product code construction for the adaptive coding and present low-complexity decoding algorithms. Theoretical analysis indicates that the proposed scheme outperforms the traditional triple modular redundancy (TMR) scheme in both decoding threshold and residual errors, while Monte Carlo simulations show that the performance loss is less than 0.2 dB when the storage error probability varies from 10^-3 to 10^-4.
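
    A simplified sketch of the underlying idea of protecting message sign bits inside the least significant bits: here a plain repetition code with majority voting stands in for the paper's adaptive Hamming product code, and the storage error model and probabilities are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    def majority(a, b, c):
        # Bitwise majority vote over three stored copies of the sign bit.
        return ((a.astype(int) + b + c) >= 2).astype(np.uint8)

    def simulate(n=100_000, p=1e-2):
        # Faulty-storage model: every stored bit flips independently with probability p.
        sign = rng.integers(0, 2, size=n, dtype=np.uint8)
        copy1, copy2 = sign.copy(), sign.copy()        # copies written into the magnitude LSBs
        flip = lambda bits: bits ^ (rng.random(n) < p).astype(np.uint8)
        s, c1, c2 = flip(sign), flip(copy1), flip(copy2)
        unprotected = np.mean(s != sign)
        protected = np.mean(majority(s, c1, c2) != sign)
        return unprotected, protected

    raw_err, prot_err = simulate()
    print(f"sign-bit error rate: unprotected {raw_err:.4f}  vs  LSB-protected {prot_err:.5f}")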

  7. An adaptation model for trabecular bone at different mechanical levels

    Directory of Open Access Journals (Sweden)

    Lv Linwei

    2010-07-01

    Full Text Available Abstract Background Bone has the ability to adapt to mechanical usage or other biophysical stimuli in terms of its mass and architecture, indicating that a certain mechanism exists for monitoring mechanical usage and controlling the bone's adaptation behaviors. There are four zones describing different bone adaptation behaviors: the disuse, adaptation, overload, and pathologic overload zones. In different zones, the changes of bone mass, as calculated by the difference between the amount of bone formed and the amount resorbed, should be different. Methods An adaptation model for trabecular bone at different mechanical levels was presented in this study based on a number of experimental observations and numerical algorithms in the literature. In the proposed model, the amount of bone formation and the probability of bone remodeling activation were proposed in accordance with the mechanical levels. Seven numerical simulation cases under different mechanical conditions were analyzed as examples by incorporating the adaptation model presented in this paper with the finite element method. Results The proposed bone adaptation model describes the well-known bone adaptation behaviors in different zones. The bone mass and architecture of the bone tissue within the adaptation zone remained almost unchanged. Although the probability of osteoclastic activation is enhanced in the overload zone, the potential of osteoblasts to form bone compensates for the osteoclastic resorption, eventually strengthening the bones. In the disuse zone, disuse-mode remodeling removes bone tissue. Conclusions The study seeks to provide a better understanding of the relationships between bone morphology and the mechanical as well as biological environments. Furthermore, this paper provides a computational model and methodology for the numerical simulation of changes of bone structural morphology that are caused by changes of mechanical and biological

  8. CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    Science.gov (United States)

    Anninos, Peter; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Lau, Cheuk; Nemergut, Daniel

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge-Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  9. CosmosDG: An hp -adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    Energy Technology Data Exchange (ETDEWEB)

    Anninos, Peter; Lau, Cheuk [Lawrence Livermore National Laboratory, P.O. Box 808, Livermore, CA 94550 (United States); Bryant, Colton [Department of Engineering Sciences and Applied Mathematics, Northwestern University, 2145 Sheridan Road, Evanston, Illinois, 60208 (United States); Fragile, P. Chris [Department of Physics and Astronomy, College of Charleston, 66 George Street, Charleston, SC 29424 (United States); Holgado, A. Miguel [Department of Astronomy and National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, Illinois, 61801 (United States); Nemergut, Daniel [Operations and Engineering Division, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  10. CosmosDG: An hp -adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    International Nuclear Information System (INIS)

    Anninos, Peter; Lau, Cheuk; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Nemergut, Daniel

    2017-01-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  11. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    International Nuclear Information System (INIS)

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  12. Partial Safety Factors and Target Reliability Level in Danish Structural Codes

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hansen, J. O.; Nielsen, T. A.

    2001-01-01

    The partial safety factors in the newly revised Danish structural codes have been derived using a reliability-based calibration. The calibrated partial safety factors result in the same average reliability level as in the previous codes, but a much more uniform reliability level has been obtained. The paper describes the code format, the stochastic models and the resulting optimised partial safety factors.

  13. Generating Safety-Critical PLC Code From a High-Level Application Software Specification

    Science.gov (United States)

    2008-01-01

    The benefits of automatic application code generation are widely accepted within the software engineering community. These benefits include a raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off-the-shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is
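
    As a hedged illustration of translating a tabular specification into PLC code, the sketch below turns hypothetical (step, condition, action) rows into IEC 61131-3 style structured text; the DSL keywords, signal names and output format are invented for illustration and are not the actual LCS tool or its DSL.

    # Hypothetical tabular spec rows (step, condition, action); keywords and signal
    # names are made up for illustration only.
    spec = [
        {"step": 10, "when": "LOX_FILL_CMD AND NOT TANK_FULL", "do": "OPEN Valve_LOX_Main"},
        {"step": 20, "when": "TANK_FULL", "do": "CLOSE Valve_LOX_Main"},
        {"step": 30, "when": "ABORT_CMD", "do": "CLOSE Valve_LOX_Main"},
    ]

    def emit_structured_text(rows):
        # Translate tabular rows into IEC 61131-3 style structured text, a textual
        # stand-in for the ladder logic a real generator would emit.
        lines = []
        for row in rows:
            verb, coil = row["do"].split()
            value = "TRUE" if verb == "OPEN" else "FALSE"
            lines.append(f"(* step {row['step']} *)")
            lines.append(f"IF {row['when']} THEN")
            lines.append(f"    {coil} := {value};")
            lines.append("END_IF;")
        return "\n".join(lines)

    print(emit_structured_text(spec))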

  14. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    Science.gov (United States)

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol- instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  15. Mercure IV code application to the external dose computation from low and medium level wastes

    International Nuclear Information System (INIS)

    Tomassini, T.

    1985-01-01

    In the present work the external dose from low and medium level wastes is calculated using the MERCURE IV code. The code uses the Monte Carlo method for integrating multigroup line-of-sight attenuation kernels.

  16. Final technical position on documentation of computer codes for high-level waste management

    International Nuclear Information System (INIS)

    Silling, S.A.

    1983-06-01

    Guidance is given for the content of documentation of computer codes which are used in support of a license application for high-level waste disposal. The guidelines cover theoretical basis, programming, and instructions for use of the code

  17. Development of computing code system for level 3 PSA

    International Nuclear Information System (INIS)

    Jeong, Jong Tae; Yu, Dong Han; Kim, Seung Hwan.

    1997-07-01

    Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated through wind tunnel experiments. These results will give physical insight for the development of a new dispersion model. Because there are some discrepancies between the results from the Gaussian plume model and those from field tests, the effect of terrain on atmospheric dispersion was investigated by using the CTDMPLUS code. Through this study we find that a model which can treat terrain effects is essential for the atmospheric dispersion of radioactive materials and that the CTDMPLUS model can be used as a useful tool. It is also suggested that modification of the model and experimental studies should be pursued through continuous effort. The health effect assessment near the Yonggwang site using IPE (Individual Plant Examination) results and its site data was performed. The health effect assessment is an important part of the consequence analysis of a nuclear power plant site. MACCS was used in the assessment. Based on the calculation of the CCDF for each risk measure, it is shown that the CCDF has a gentle slope, and thus a wide probability distribution, in the cases of early fatality, early injury, total early fatality risk, and total weighted early fatality risk. In the cases of cancer fatality and population dose within 48 km and 80 km, the CCDF curves have a steep slope and thus a narrow probability distribution. Methodologies for the models necessary for consequence analysis of a severe accident in a nuclear power plant were established, and a program for consequence analysis was developed. The models include atmospheric transport and diffusion, calculation of exposure doses for various pathways, and assessment of health effects and associated risks. Finally, the economic impact resulting from an accident in a nuclear power plant was investigated. In this study, estimation models for each cost term considered in economic

  18. Development of computing code system for level 3 PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jong Tae; Yu, Dong Han; Kim, Seung Hwan

    1997-07-01

    Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated through wind tunnel experiments. These results will give physical insight for the development of a new dispersion model. Because there are some discrepancies between the results from the Gaussian plume model and those from field tests, the effect of terrain on atmospheric dispersion was investigated by using the CTDMPLUS code. Through this study we find that a model which can treat terrain effects is essential for the atmospheric dispersion of radioactive materials and that the CTDMPLUS model can be used as a useful tool. It is also suggested that modification of the model and experimental studies should be pursued through continuous effort. The health effect assessment near the Yonggwang site using IPE (Individual Plant Examination) results and its site data was performed. The health effect assessment is an important part of the consequence analysis of a nuclear power plant site. MACCS was used in the assessment. Based on the calculation of the CCDF for each risk measure, it is shown that the CCDF has a gentle slope, and thus a wide probability distribution, in the cases of early fatality, early injury, total early fatality risk, and total weighted early fatality risk. In the cases of cancer fatality and population dose within 48 km and 80 km, the CCDF curves have a steep slope and thus a narrow probability distribution. Methodologies for the models necessary for consequence analysis of a severe accident in a nuclear power plant were established, and a program for consequence analysis was developed. The models include atmospheric transport and diffusion, calculation of exposure doses for various pathways, and assessment of health effects and associated risks. Finally, the economic impact resulting from an accident in a nuclear power plant was investigated. In this study, estimation models for each cost term considered in economic

  19. Multi-level governance and adaptive capacity in West Africa

    Directory of Open Access Journals (Sweden)

    Maria Brockhaus

    2012-08-01

    Full Text Available In most regions in West Africa, livelihoods depend heavily on forest ecosystem goods and services, often in interplay with agricultural and livestock production systems. Numerous drivers of change are creating a range of fundamental economic, ecological, social and political challenges for the governance of forest commons. Climate change and its impacts on countries’ and regions’ development add a new dimension to an already challenging situation. Governance systems are challenged to set a frame for formulating, financing and implementing adaptation strategies at multiple layers, often in a context of ongoing institutional changes such as decentralisation. A deeper understanding of actors, institutions and networks is needed to overcome barriers in socio-ecological systems to adaptation and enable or enhance adaptive capacity. In this paper, we explore the relationship between governance and adaptive capacity, and characterise and assess the effects of a set of variables and indicators related to two core variables: institutional flexibility, and individual and organisational understandings and perceptions. We present a comparative analysis with multiple methods based on a number of case studies undertaken at different levels in Burkina Faso and Mali. One of the key findings indicates the importance and influence of discourses and narratives, and how they affect adaptive capacity at different levels. Revealing the ideological character of discourses can help to enable adaptive capacity, as it would break the influence of the actors that employ these narratives to pursue their own interests.

  20. Adapting Canada's northern infrastructure to climate change: the role of codes and standards

    International Nuclear Information System (INIS)

    Steenhof, P.

    2009-01-01

    This report provides the results of a research project that investigated the use of codes and standards in terms of their potential for fostering adaptation to the future impacts of climate change on built infrastructure in Canada's north. This involved a literature review, key informant interviews, and a workshop where key stakeholders came together to dialogue on the challenges facing built infrastructure in the north as a result of climate change and the role of codes and standards in helping to mitigate climate change risk. In this article, attention is given to the topic area of climate data and information requirements related to climate and climate change. This was an important focal area identified through this broader research effort, since adequate data is essential in allowing codes and standards to meet their ultimate policy objective. A number of priorities have been identified specific to data and information needs in the context of the research topic investigated: There is a need to include northerners in developing the climate and permafrost data required for codes and standards so that these reflect the unique geographical, economic, and cultural realities and variability of the north; Efforts should be undertaken to realign climate design values so that they reflect both present and future risks; There is a need for better information on the rate and extent of permafrost degradation in the north; and, There is a need to improve monitoring of the rate of climate change in the Arctic. (author)

  1. Cooperative and Adaptive Network Coding for Gradient Based Routing in Wireless Sensor Networks with Multiple Sinks

    Directory of Open Access Journals (Sweden)

    M. E. Migabo

    2017-01-01

    Full Text Available Despite its low computational cost, the Gradient Based Routing (GBR) broadcast of interest messages in Wireless Sensor Networks (WSNs) causes significant packet duplication and unnecessary packet transmissions. This results in energy wastage, traffic load imbalance, high network traffic, and low throughput. Thanks to the emergence of fast and powerful processors, the development of efficient network coding strategies is expected to enable efficient packet aggregation and reduce packet retransmissions. For multiple-sink WSNs, the challenge consists of efficiently selecting a suitable network coding scheme. This article proposes a Cooperative and Adaptive Network Coding for GBR (CoAdNC-GBR) technique which considers the network density, as dynamically defined by the average number of neighbouring nodes, to efficiently aggregate interest messages. The aggregation is performed by means of linear combinations with random coefficients over a finite Galois field of variable size GF(2^S) at each node, and the decoding is performed by means of Gaussian elimination. The obtained results reveal that, by exploiting the cooperation of the multiple sinks, CoAdNC-GBR not only improves the transmission reliability of links and lowers the number of transmissions and the propagation latency, but also enhances the energy efficiency of the network when compared to the GBR network coding (GBR-NC) techniques.
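
    The aggregation and decoding steps can be sketched with random linear network coding over GF(2) (the paper uses a variable field size GF(2^S)): coded packets are random XOR combinations of the source packets, and the sink decodes by Gaussian elimination once enough linearly independent combinations arrive. Packet sizes and counts are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    def encode(packets, n_coded):
        # Random linear network coding over GF(2): each coded packet is a random XOR
        # combination of the source packets; the coefficient vector travels with it.
        k = len(packets)
        coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
        coded = (coeffs @ packets) % 2
        return coeffs, coded

    def decode(coeffs, coded, k):
        # Gaussian elimination over GF(2); returns the recovered packets, or None if
        # fewer than k linearly independent combinations were received.
        A = np.concatenate([coeffs, coded], axis=1)
        row = 0
        for col in range(k):
            pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
            if pivot is None:
                continue
            A[[row, pivot]] = A[[pivot, row]]
            for r in range(len(A)):
                if r != row and A[r, col]:
                    A[r] ^= A[row]
            row += 1
        return A[:k, k:] if row == k else None

    # Toy usage: 4 source packets of 8 bits; the sink keeps collecting coded packets
    # until it holds k independent combinations.
    packets = rng.integers(0, 2, size=(4, 8), dtype=np.uint8)
    recovered = None
    while recovered is None:
        coeffs, coded = encode(packets, n_coded=6)
        recovered = decode(coeffs, coded, k=4)
    print("decoded correctly:", np.array_equal(recovered, packets))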

  2. Adaptation of OCA-P, a probabilistic fracture-mechanics code, to a personal computer

    International Nuclear Information System (INIS)

    Ball, D.G.; Cheverton, R.D.

    1985-01-01

    The OCA-P probabilistic fracture-mechanics code can now be executed on a personal computer with 512 kilobytes of memory, a math coprocessor, and a hard disk. A user's guide for this particular adaptation has been prepared, and additional importance sampling techniques for OCA-P have been developed that allow the sampling of only the tails of selected distributions. Features have also been added to OCA-P that permit RTNDT to be used as an "independent" variable in the calculation of P

  3. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    Science.gov (United States)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≃ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humbold University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point t_n a new step size h_{n+1} = h(t_n; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property under a discontinuous step-size policy that does not allow a change in the size of the step if the step-size ratio between two consecutive steps is close to unity is carried out. This theory is applied to obtain global error estimations in a few problems that have been solved with
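
    A minimal sketch of the step-size control being analysed, assuming an embedded Heun/Euler pair: a step is accepted when the local error estimate is below the tolerance and the step size is rescaled accordingly, after which the global error at the endpoint can be inspected against the tolerance. The method, test problem and constants are illustrative assumptions, not those of the paper.

    import numpy as np

    def heun_euler_adaptive(f, t0, y0, t_end, tol):
        # Embedded Heun (order 2) / Euler (order 1) pair with the usual accept/reject
        # step-size control aiming at a local error of about tol per step.
        t, y, h = t0, y0, 1e-3
        while t < t_end:
            h = min(h, t_end - t)
            k1 = f(t, y)
            k2 = f(t + h, y + h * k1)
            y_high = y + h * (k1 + k2) / 2.0      # Heun (order 2)
            y_low = y + h * k1                    # Euler (order 1)
            err = abs(y_high - y_low)             # local error estimate
            if err <= tol:                        # accept the step
                t, y = t + h, y_high
            h *= min(5.0, max(0.2, 0.9 * (tol / (err + 1e-16)) ** 0.5))
        return y

    # Tolerance proportionality check on y' = -y, y(0) = 1, exact solution exp(-t).
    for tol in (1e-3, 1e-4, 1e-5):
        y_num = heun_euler_adaptive(lambda t, y: -y, 0.0, 1.0, 2.0, tol)
        print(f"tol={tol:.0e}  global error={abs(y_num - np.exp(-2.0)):.2e}")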

  4. NRC model simulations in support of the hydrologic code intercomparison study (HYDROCOIN): Level 1-code verification

    International Nuclear Information System (INIS)

    1988-03-01

    HYDROCOIN is an international study for examining ground-water flow modeling strategies and their influence on safety assessments of geologic repositories for nuclear waste. This report summarizes only the combined NRC project teams' simulation efforts on the computer code benchmarking problems. The codes used to simulate these seven problems were SWIFT II, FEMWATER, UNSAT2M, USGS-3D, and TOUGH. In general, linear problems involving scalars such as hydraulic head were accurately simulated by both finite-difference and finite-element solution algorithms. Both types of codes produced accurate results even for complex geometries such as intersecting fractures. Difficulties were encountered in solving problems that involved nonlinear effects such as density-driven flow and unsaturated flow. In order to fully evaluate the accuracy of these codes, post-processing of results using particle tracking algorithms and calculating fluxes were examined. This proved very valuable by uncovering disagreements among code results even though the hydraulic-head solutions had been in agreement. 9 refs., 111 figs., 6 tabs

  5. Quadrature amplitude modulation from basics to adaptive trellis-coded turbo-equalised and space-time coded OFDM CDMA and MC-CDMA systems

    CERN Document Server

    Hanzo, Lajos

    2004-01-01

    "Now fully revised and updated, with more than 300 pages of new material, this new edition presents the wide range of recent developments in the field and places particular emphasis on the family of coded modulation aided OFDM and CDMA schemes. In addition, it also includes a fully revised chapter on adaptive modulation and a new chapter characterizing the design trade-offs of adaptive modulation and space-time coding." "In summary, this volume amalgamates a comprehensive textbook with a deep research monograph on the topic of QAM, ensuring it has a wide-ranging appeal for both senior undergraduate and postgraduate students as well as practicing engineers and researchers."--Jacket.

  6. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

    Siret , Nicolas; Wipliez , Matthieu; Nezan , Jean François; Palumbo , Francesca

    2012-01-01

    High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulties of compiling input imperative languages into efficient hardware code. Moreover, the hardware code generated by the HLS tools is usually target-dependent and at a low level of abstraction (i.e. gate-level...

  7. Genetic adaptability of durum wheat to salinity level at germination ...

    African Journals Online (AJOL)

    Administrator

    2011-05-23

    May 23, 2011 ... Key words: Durum wheat, genetic adaptability, salinity level. ... tolerance of crop proves the first way to overcome the limitation of crops ... Analysis of variance using GLM procedures (SAS, 1990) were used ... Additive, dominance and environmental variance components were ..... Breeding for stability of.

  8. Level-Set Methodology on Adaptive Octree Grids

    Science.gov (United States)

    Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime

    2017-11-01

    Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.

  9. Hyper-heuristics with low level parameter adaptation.

    Science.gov (United States)

    Ren, Zhilei; Jiang, He; Xuan, Jifeng; Luo, Zhongxuan

    2012-01-01

    Recent years have witnessed the great success of hyper-heuristics applied to numerous real-world applications. Hyper-heuristics raise the generality of search methodologies by manipulating a set of low level heuristics (LLHs) to solve problems, and aim to automate the algorithm design process. However, those LLHs are usually parameterized, which may contradict the domain-independent motivation of hyper-heuristics. In this paper, we show how to automatically maintain low level parameters (LLPs) using a hyper-heuristic with LLP adaptation (AD-HH), and exemplify the feasibility of AD-HH by adaptively maintaining the LLPs for two hyper-heuristic models. Furthermore, aiming at tackling the search space expansion due to the LLP adaptation, we apply a heuristic space reduction (SAR) mechanism to improve the AD-HH framework. The integration of the LLP adaptation and the SAR mechanism is able to explore the heuristic space more effectively and efficiently. To evaluate the performance of the proposed algorithms, we choose the p-median problem as a case study. The empirical results show that with the adaptation of the LLPs and the SAR mechanism, the proposed algorithms are able to achieve competitive results over the three heterogeneous classes of benchmark instances.
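
    A toy sketch of a hyper-heuristic that also adapts a low-level parameter: two parameterised low-level heuristics are selected by a simple score, and the step size of the Gaussian move (its LLP) is widened or shrunk according to recent success. The objective, heuristics and update rules are illustrative assumptions, not the AD-HH algorithm itself.

    import numpy as np

    rng = np.random.default_rng(8)

    def sphere(x):
        return float(np.sum(x ** 2))             # toy objective to minimise

    # Two parameterised low-level heuristics (LLHs).
    def llh_gaussian(x, step):
        return x + rng.normal(0.0, step, size=x.shape)

    def llh_reset(x, _):
        y = x.copy()
        y[rng.integers(len(x))] = rng.uniform(-5, 5)
        return y

    def adaptive_hyper_heuristic(n_iter=2000, dim=10):
        # Score-based LLH selection; the Gaussian step size (its LLP) is adapted online.
        x = rng.uniform(-5, 5, size=dim)
        fx = sphere(x)
        llhs = [llh_gaussian, llh_reset]
        scores = [1.0, 1.0]
        step = 1.0                                # LLP of the Gaussian move
        for _ in range(n_iter):
            i = int(np.argmax([s + 0.1 * rng.random() for s in scores]))
            y = llhs[i](x, step)
            fy = sphere(y)
            if fy < fx:                           # improvement: reward and widen the step
                x, fx = y, fy
                scores[i] = 0.9 * scores[i] + 1.0
                if i == 0:
                    step *= 1.1
            else:                                 # failure: penalise and shrink the step
                scores[i] *= 0.9
                if i == 0:
                    step = max(1e-3, step * 0.95)
        return fx, step

    best, final_step = adaptive_hyper_heuristic()
    print(f"best objective {best:.4f}, adapted step size {final_step:.3f}")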

  10. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and the time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
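
    The assignment problem can be illustrated on a tiny instance solved by exhaustive search rather than an ILP solver: choose one MCS per SVC layer, subject to an airtime budget, so that the total number of cumulatively decodable layers received across users is maximised. All rates, layer sizes and user capabilities below are illustrative assumptions, not the paper's formulation.

    from itertools import product

    # Toy instance (all numbers illustrative): MCS index -> rate in bits per slot.
    mcs_rates = {0: 1.0, 1: 2.0, 2: 4.0}
    layer_bits = [8.0, 6.0, 6.0]           # base layer + two enhancement layers, bits per GOP
    user_capability = [0, 0, 1, 1, 2, 2]   # highest MCS index each user's channel can decode
    slot_budget = 16.0                     # airtime available per GOP

    def utility(assignment):
        # A user counts a layer only if it can decode that layer's MCS and all lower layers'.
        total = 0
        for cap in user_capability:
            for mcs in assignment:
                if mcs > cap:
                    break
                total += 1
        return total

    best = None
    for assignment in product(mcs_rates, repeat=len(layer_bits)):
        slots = sum(bits / mcs_rates[m] for bits, m in zip(layer_bits, assignment))
        if slots <= slot_budget:
            score = utility(assignment)
            if best is None or score > best[0]:
                best = (score, assignment, slots)

    print("best MCS per layer:", best[1], "utility:", best[0], "slots used:", round(best[2], 1))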

  11. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dongyul Lee

    2014-01-01

    Full Text Available The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and the time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  12. Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths

    Science.gov (United States)

    Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.

    2018-04-01

    We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.
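
    A toy sketch of the kind of multipole acceptance criterion (MAC) discussed above, using the simple geometric opening-angle rule size/distance < theta in 2D rather than the paper's approximate partial error MAC; the particle set, units (G = 1) and theta are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(6)

    def accel_direct(target, pts, m, eps=1e-9):
        # Brute-force reference: sum over all particles, skipping self-interaction.
        d = pts - target
        r = np.linalg.norm(d, axis=1)
        keep = r > eps
        return (m[keep, None] * d[keep] / r[keep, None] ** 3).sum(axis=0)

    def accel_tree(target, pts, m, center, size, theta=0.5, eps=1e-9):
        # Recursive monopole evaluation: a cell is accepted whole when size/distance < theta,
        # otherwise it is split into its four child quadrants.
        if len(pts) == 0:
            return np.zeros(2)
        m_tot = m.sum()
        com = (pts * m[:, None]).sum(axis=0) / m_tot
        r = np.linalg.norm(com - target)
        if len(pts) == 1 or (r > eps and size / r < theta):
            if r <= eps:
                return np.zeros(2)                 # the cell only contains the target itself
            return m_tot * (com - target) / r ** 3
        half = size / 2.0
        acc = np.zeros(2)
        for qx in (0, 1):
            for qy in (0, 1):
                sel = ((pts[:, 0] >= center[0]) == bool(qx)) & ((pts[:, 1] >= center[1]) == bool(qy))
                child_center = center + (np.array([qx, qy]) - 0.5) * half
                acc += accel_tree(target, pts[sel], m[sel], child_center, half, theta, eps)
        return acc

    pts = rng.random((500, 2))
    m = rng.random(500)
    target = pts[0]
    a_tree = accel_tree(target, pts, m, center=np.array([0.5, 0.5]), size=1.0)
    a_direct = accel_direct(target, pts, m)
    print("relative error of tree walk:", np.linalg.norm(a_tree - a_direct) / np.linalg.norm(a_direct))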

  13. Low-level waste shallow burial assessment code

    International Nuclear Information System (INIS)

    Fields, D.E.; Little, C.A.; Emerson, C.J.

    1981-01-01

    PRESTO (Prediction of Radiation Exposures from Shallow Trench Operations) is a computer code developed under United States Environmental Protection Agency funding to evaluate possible health effects from radionuclide releases from shallow, radioactive-waste disposal trenches and from areas contaminated with operational spillage. The model is intended to predict radionuclide transport and the ensuing exposure and health impact to a stable, local population for a 1000-year period following closure of the burial grounds. Several classes of submodels are used in PRESTO to represent scheduled events, unit system responses, and risk evaluation processes. The code is modular to permit future expansion and refinement. Near-surface transport mechanisms considered in the PRESTO code are cap failure, cap erosion, farming or reclamation practices, human intrusion, chemical exchange within an active surface soil layer, contamination from trench overflow, and dilution by surface streams. Subsurface processes include infiltration and drainage into the trench, the ensuing solubilization of radionuclides, and chemical exchange between trench water and buried solids. Mechanisms leading to contaminated outflow include trench overflow and downward vertical percolation. If the latter outflow reaches an aquifer, radiological exposure from irrigation or domestic consumption is considered. Airborne exposure terms are evaluated using the Gaussian plume atmospheric transport formulation as implemented by Fields and Miller

  14. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    Directory of Open Access Journals (Sweden)

    Laureline Logiaco

    2015-08-01

    Full Text Available The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.
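
    A small synthetic sketch of the spike-count versus spike-timing comparison: two response classes with identical mean spike counts but different temporal profiles are classified by a nearest-template decoder with and without timing information. The rates, window and decoder are illustrative assumptions, not the decoding circuits used in the study.

    import numpy as np

    rng = np.random.default_rng(7)

    def spike_train(kind, T=0.4, dt=0.005, rng=rng):
        # Both classes fire at the same mean rate (50 Hz) but with opposite temporal
        # profiles: 'early' fires mostly in the first half of the window, 'late' in the second.
        n_bins = int(T / dt)
        half = n_bins // 2
        rate = np.empty(n_bins)
        rate[:half], rate[half:] = (90.0, 10.0) if kind == "early" else (10.0, 90.0)
        return rng.poisson(rate * dt)

    def decode(train, templates, timing=True):
        # Nearest-template decoder; with timing=False only the total spike count is used.
        feats = train if timing else np.array([train.sum()])
        best, label = None, None
        for name, tmpl in templates.items():
            t = tmpl if timing else np.array([tmpl.sum()])
            d = np.linalg.norm(feats - t)
            if best is None or d < best:
                best, label = d, name
        return label

    # Build mean templates from training trials, then test both decoders on fresh trials.
    templates = {k: np.mean([spike_train(k) for _ in range(200)], axis=0) for k in ("early", "late")}
    for timing in (False, True):
        correct = 0
        for k in ("early", "late"):
            correct += sum(decode(spike_train(k), templates, timing) == k for _ in range(200))
        print(f"{'temporal' if timing else 'count-only'} decoder accuracy: {correct / 400:.2f}")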

  15. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    Science.gov (United States)

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-08-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.

  16. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes.

    Science.gov (United States)

    Chien, Tsair-Wei; Lin, Weir-Sen

    2016-03-02

    The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients' true scores following a standard normal distribution. The CAT was compared with two other scenarios, answering all items (AAI) and the randomized selection method (RSM), in terms of item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. We found that the CAT can be more efficient for patients answering questions (i.e., fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access.
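
    A hedged sketch of the core CAT loop for a dichotomous Rasch model (the study uses the partial credit model): after each response the trait estimate is refreshed and the next item is the one with maximum Fisher information at that estimate. The item bank, simulated responses and grid-based estimator are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    item_difficulty = np.sort(rng.normal(0, 1, size=70))    # an illustrative 70-item bank

    def prob(theta, b):
        return 1.0 / (1.0 + np.exp(-(theta - b)))            # dichotomous Rasch model

    def estimate_theta(responses, administered):
        # Crude maximum-likelihood estimate of the latent trait on a grid.
        grid = np.linspace(-4, 4, 161)
        b = item_difficulty[administered]
        r = np.array(responses)
        loglik = np.array([np.sum(r * np.log(prob(t, b)) + (1 - r) * np.log(1 - prob(t, b)))
                           for t in grid])
        return grid[np.argmax(loglik)]

    def run_cat(true_theta, max_items=15):
        # Adaptive loop: next item = maximum Fisher information at the current estimate.
        administered, responses, theta = [], [], 0.0
        for _ in range(max_items):
            info = prob(theta, item_difficulty) * (1 - prob(theta, item_difficulty))
            info[administered] = -1.0                         # never reuse an item
            nxt = int(np.argmax(info))
            administered.append(nxt)
            responses.append(int(rng.random() < prob(true_theta, item_difficulty[nxt])))
            theta = estimate_theta(responses, administered)
        return theta, administered

    theta_hat, used = run_cat(true_theta=0.8)
    print(f"estimated theta after {len(used)} items: {theta_hat:.2f}")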

  17. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks.

    Science.gov (United States)

    Yu, Shidi; Liu, Xiao; Liu, Anfeng; Xiong, Naixue; Cai, Zhiping; Wang, Tian

    2018-05-10

    Due to the Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are getting wider application prospects for sensor nodes that can get new functions after updating program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed in the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, generating more optimized performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and at the same time, delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption of some transmitting nodes, but radius enlarging is only conducted in areas with an energy surplus, and energy consumption in the hot-spots can be reduced instead due to some nodes transmitting data directly to the sink without forwarding by nodes in the original hot-spot, thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn’t affect the network lifetime, to nodes at different distances to the code source, then provides an algorithm to construct a broadcast backbone. In the end, a comprehensive performance analysis and simulation result shows that the proposed

  18. An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shidi Yu

    2018-05-01

    Full Text Available Due to the Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are getting wider application prospects for sensor nodes that can get new functions after updating program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed in the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, generating more optimized performance than previous schemes. Thus: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and at the same time, delay. (2) As the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption of some transmitting nodes, but radius enlarging is only conducted in areas with an energy surplus, and energy consumption in the hot-spots can be reduced instead due to some nodes transmitting data directly to the sink without forwarding by nodes in the original hot-spot, thus energy consumption can almost reach a balance and network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which doesn’t affect the network lifetime, to nodes at different distances to the code source, then provides an algorithm to construct a broadcast backbone. In the end, a comprehensive performance analysis and simulation result shows that
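
    As a rough illustration of the radius-enlarging idea (not the paper's actual assignment algorithm, which also builds a broadcast backbone), the toy Python function below keeps the default radius for nodes in the energy-critical hot-spot and lets other nodes enlarge their broadcast radius in proportion to their energy surplus; all names and the scaling rule are assumptions.

        def assign_broadcast_radii(nodes, r_default, r_max, hotspot_dist):
            """nodes: {node_id: {"dist_to_sink": float, "energy": float}}
            Hot-spot nodes (close to the sink, lowest residual energy) keep r_default;
            other nodes enlarge their radius in proportion to their energy surplus."""
            hotspot_energy = min(
                n["energy"] for n in nodes.values() if n["dist_to_sink"] <= hotspot_dist
            )
            radii = {}
            for node_id, n in nodes.items():
                if n["dist_to_sink"] <= hotspot_dist or n["energy"] <= hotspot_energy:
                    radii[node_id] = r_default          # never spend the lifetime-limiting energy
                else:
                    surplus = (n["energy"] - hotspot_energy) / hotspot_energy
                    radii[node_id] = min(r_max, r_default * (1.0 + surplus))
            return radii

        # Example: three nodes, the one nearest the sink defines the energy bottleneck
        example = {
            "a": {"dist_to_sink": 10.0, "energy": 2.0},   # hot-spot node
            "b": {"dist_to_sink": 40.0, "energy": 3.0},
            "c": {"dist_to_sink": 80.0, "energy": 5.0},
        }
        print(assign_broadcast_radii(example, r_default=20.0, r_max=60.0, hotspot_dist=15.0))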

  19. A study on climatic adaptation of dipteran mitochondrial protein coding genes

    Directory of Open Access Journals (Sweden)

    Debajyoti Kabiraj

    2017-10-01

    Full Text Available Diptera, the true flies, are frequently found in nature and their habitats are found all over the world, including Antarctica and the Polar Regions. The number of documented species in the order Diptera is quite high and is thought to represent 14% of all animal species on Earth [1]. Most of the study in Diptera has focused on taxa of economic and medical importance, such as the fruit flies Ceratitis capitata and Bactrocera spp. (Tephritidae), which are serious agricultural pests; the blowflies (Calliphoridae) and oestrid flies (Oestridae), which can cause myiasis; the anopheles mosquitoes (Culicidae), which are the vectors of malaria; and leaf-miners (Agromyzidae), vegetable and horticultural pests [2]. The insect mitochondrion, the remnant of an alpha-proteobacterium, consists of 13 protein coding genes, 22 tRNAs and 2 rRNAs, and is responsible for the simultaneous functions of energy production and thermoregulation of the cell through a bi-genomic system; thus, different adaptability in different climatic conditions might have been compensated by complementary changes in both genomes [3,4]. In this study we have collected complete mitochondrial genome and occurrence data of one hundred thirteen such dipteran insects from different databases and a literature survey. Our understanding of the genetic basis of climatic adaptation in Diptera is limited to the basic information on the occurrence locations of those species and the mitogenetic factors underlying changes in conspicuous phenotypes. To examine this hypothesis, we have taken a nucleotide substitution analysis approach for the 13 protein coding genes of mitochondrial DNA, individually and combined, using different software, for monophyletic as well as paraphyletic groups of dipteran species. Moreover, we have also calculated the codon adaptation index for all dipteran mitochondrial protein coding genes. Following this work, we have classified our sample organisms according to their location data from GBIF (https

  20. Uplink capacity of multi-class IEEE 802.16j relay networks with adaptive modulation and coding

    DEFF Research Database (Denmark)

    Wang, Hua; Xiong, C; Iversen, Villy Bæk

    2009-01-01

    The emerging IEEE 802.16j mobile multi-hop relay (MMR) network is currently being developed to increase the user throughput and extend the service coverage as an enhancement of existing 802.16e standard. In 802.16j, the intermediate relay stations (RSs) help the base station (BS) communicate...... with those mobile stations (MSs) that are either too far away from the BS or placed in an area where direct communication with BS experiences unsatisfactory level of service. In this paper, we investigate the uplink Erlang capacity of a two-hop 802.16j relay system supporting both voice and data traffics...... with adaptive modulation and coding (AMC) scheme applied in the physical layer. We first develop analytical models to calculate the blocking probability in the access zone and the outage probability in the relay zone, respectively. Then a joint algorithm is proposed to determine the bandwidth distribution...

  1. Understanding extreme sea levels for coastal impact and adaptation analysis

    Science.gov (United States)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Hinkel, J.; Dangendorf, S.; Slangen, A.

    2016-12-01

    Coastal impact and adaptation assessments require detailed knowledge on extreme sea levels, because increasing damage due to extreme events, such as storm surges and tropical cyclones, is one of the major consequences of sea level rise and climate change. In fact, the IPCC has highlighted in its AR4 report that "societal impacts of sea level change primarily occur via the extreme levels rather than as a direct consequence of mean sea level changes". Over the last few decades, substantial research efforts have been directed towards improved understanding of past and future mean sea level; different scenarios were developed with process-based or semi-empirical models and used for coastal impact assessments at various spatial scales to guide coastal management and adaptation efforts. The uncertainties in future sea level rise are typically accounted for by analyzing the impacts associated with a range of scenarios leading to a vertical displacement of the distribution of extreme sea-levels. And indeed most regional and global studies find little or no evidence for changes in storminess with climate change, although there is still low confidence in the results. However, and much more importantly, there is still a limited understanding of present-day extreme sea-levels which is largely ignored in most impact and adaptation analyses. The two key uncertainties stem from: (1) numerical models that are used to generate long time series of extreme sea-levels. The bias of these models varies spatially and can reach values much larger than the expected sea level rise; but it can be accounted for in most regions making use of in-situ measurements; (2) Statistical models used for determining present-day extreme sea-level exceedance probabilities. There is no universally accepted approach to obtain such values for flood risk assessments and while substantial research has explored inter-model uncertainties for mean sea level, we explore here, for the first time, inter

  2. The management-retrieval code of nuclear level density sub-library (CENPL-NLD)

    International Nuclear Information System (INIS)

    Ge Zhigang; Su Zongdi; Huang Zhongfu; Dong Liaoyuan

    1995-01-01

    The management-retrieval code of the Nuclear Level Density (NLD) sub-library is presented. It offers two retrieval modes: single nucleus (SN) and neutron reaction (NR). The latter contains four kinds of retrieval types. This code can not only retrieve level density parameters and the data related to the level density, but can also calculate the relevant data using different level density parameters and compare the calculated results with related data, in order to help the user select level density parameters

  3. Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes

    Science.gov (United States)

    Su, Hualing; He, Yucheng; Zhou, Lin

    2017-08-01

    In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperative system combining rate-compatible low density parity check (RC-LDPC) codes with a multi-relay selection protocol is proposed. In traditional relay selection protocols, only the channel state information (CSI) of the source-relay and relay-destination links has been considered. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. Additionally, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.
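
    The abstract states only that the protocol additionally uses the CSI between relays; the scoring rule in the Python sketch below is therefore a hypothetical illustration of how inter-relay link quality could be folded into relay selection, not the paper's algorithm.

        def select_relays(csi_sr, csi_rd, csi_rr, k):
            """Hypothetical multi-relay selection: score each relay by its weakest
            end-to-end hop (source-relay vs relay-destination) plus a bonus for good
            links to the other relays (which enable collaboration), then keep the
            k best relays.
            csi_sr, csi_rd: per-relay link qualities; csi_rr[i][j]: relay i <-> relay j."""
            n = len(csi_sr)
            scores = []
            for i in range(n):
                end_to_end = min(csi_sr[i], csi_rd[i])
                inter_relay = sum(csi_rr[i][j] for j in range(n) if j != i) / max(n - 1, 1)
                scores.append((end_to_end + 0.5 * inter_relay, i))
            return [i for _, i in sorted(scores, reverse=True)[:k]]

        # Example with three candidate relays, selecting the best two
        sr = [0.9, 0.6, 0.8]
        rd = [0.7, 0.9, 0.5]
        rr = [[0.0, 0.4, 0.8],
              [0.4, 0.0, 0.6],
              [0.8, 0.6, 0.0]]
        print(select_relays(sr, rd, rr, k=2))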

  4. Image sensor system with bio-inspired efficient coding and adaptation.

    Science.gov (United States)

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
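
    The three strategies named above can be mimicked in software. The Python sketch below is a software stand-in for the FPGA and resistive-network implementation, with assumed parameter values: it applies a logarithmic transform, subtracts a box-filtered local average, and adjusts a global gain by feedback toward a target output contrast.

        import numpy as np

        def encode_frame(frame, gain, target_rms=0.2, kernel=5, feedback=0.1):
            """One frame of the three-stage pipeline: log transform, local average
            subtraction, feedback gain control. Returns the output and updated gain."""
            log_img = np.log1p(frame.astype(float))                 # logarithmic compression
            pad = kernel // 2
            padded = np.pad(log_img, pad, mode="edge")
            h, w = log_img.shape
            local_avg = np.zeros_like(log_img)
            for dy in range(kernel):                                # box filter stands in for
                for dx in range(kernel):                            # the chip's resistive network
                    local_avg += padded[dy:dy + h, dx:dx + w]
            local_avg /= kernel * kernel
            contrast = log_img - local_avg                          # local average subtraction
            rms = float(np.sqrt(np.mean((gain * contrast) ** 2))) + 1e-9
            gain += feedback * (target_rms - rms)                   # feedback gain control
            return gain * contrast, gain

        # Example: a synthetic frame with a strong illumination gradient
        frame = np.outer(np.linspace(1, 1000, 64), np.ones(64))
        out, g = encode_frame(frame, gain=1.0)
        print(out.shape, round(g, 3))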

  5. Confidentiality of 2D Code using Infrared with Cell-level Error Correction

    Directory of Open Access Journals (Sweden)

    Nobuyuki Teraura

    2013-03-01

    Full Text Available Optical information media printed on paper use printing materials to absorb visible light. There is a 2D code, which may be encrypted but also can possibly be copied. Hence, we envisage an information medium that cannot possibly be copied and thereby offers high security. At the surface, the normal 2D code is printed. The inner layers consist of 2D codes printed using a variety of materials, which absorb certain distinct wavelengths, to form a multilayered 2D code. Information can be distributed among the 2D codes forming the inner layers of the multiplex. Additionally, error correction at cell level can be introduced.

  6. The determinants of vulnerability and adaptive capacity at the national level and the implications for adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, N.; Adger, W.N.; Kelly, P.M. [University of East Anglia, Norwich (United Kingdom). School of Environmental Sciences

    2005-07-01

    We present a set of indicators of vulnerability and capacity to adapt to climate variability, and by extension climate change, derived using a novel empirical analysis of data aggregated at the national level on a decadal timescale. The analysis is based on a conceptual framework in which risk is viewed in terms of outcome, and is a function of physically defined climate hazards and socially constructed vulnerability. Climate outcomes are represented by mortality from climate-related disasters, using the emergency events database data set. Statistical relationships between mortality and a shortlist of potential proxies for vulnerability are used to identify key vulnerability indicators. We find that 11 key indicators exhibit a strong relationship with decadally aggregated mortality associated with climate-related disasters. Validation of indicators, relationships between vulnerability and adaptive capacity, and the sensitivity of subsequent vulnerability assessments to different sets of weightings are explored using expert judgement data, collected through a focus group exercise. The data are used to provide a robust assessment of vulnerability to climate-related mortality at the national level, and represent an entry point to more detailed explorations of vulnerability and adaptive capacity. They indicate that the most vulnerable nations are those situated in sub-Saharan Africa and those that have recently experienced conflict. Adaptive capacity - one element of vulnerability - is associated predominantly with governance, civil and political rights, and literacy. (author)

  7. The determinants of vulnerability and adaptive capacity at the national level and the implications for adaptation

    International Nuclear Information System (INIS)

    Brooks, N.; Adger, W.N.; Kelly, P.M.

    2005-01-01

    We present a set of indicators of vulnerability and capacity to adapt to climate variability, and by extension climate change, derived using a novel empirical analysis of data aggregated at the national level on a decadal timescale. The analysis is based on a conceptual framework in which risk is viewed in terms of outcome, and is a function of physically defined climate hazards and socially constructed vulnerability. Climate outcomes are represented by mortality from climate-related disasters, using the emergency events database data set. Statistical relationships between mortality and a shortlist of potential proxies for vulnerability are used to identify key vulnerability indicators. We find that 11 key indicators exhibit a strong relationship with decadally aggregated mortality associated with climate-related disasters. Validation of indicators, relationships between vulnerability and adaptive capacity, and the sensitivity of subsequent vulnerability assessments to different sets of weightings are explored using expert judgement data, collected through a focus group exercise. The data are used to provide a robust assessment of vulnerability to climate-related mortality at the national level, and represent an entry point to more detailed explorations of vulnerability and adaptive capacity. They indicate that the most vulnerable nations are those situated in sub-Saharan Africa and those that have recently experienced conflict. Adaptive capacity - one element of vulnerability - is associated predominantly with governance, civil and political rights, and literacy. (author)

  8. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the high-efficiency video coding standard is presented. First, an implementation-friendly and simplified bitrate estimation method of rate-distortion cost calculation is proposed to reduce the computational complexity in the mode decision of SAO. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filters architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filters architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meet the real-time requirement of the in-loop filters for the 8 K × 4 K video format at 132 fps.

  9. WHITE DWARF MERGERS ON ADAPTIVE MESHES. I. METHODOLOGY AND CODE VERIFICATION

    Energy Technology Data Exchange (ETDEWEB)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY, 11794-3800 (United States); Almgren, Ann S.; Zhang, Weiqun [Center for Computational Sciences and Engineering, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2016-03-10

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  10. Adaptive Coding and Modulation Experiment With NASA's Space Communication and Navigation Testbed

    Science.gov (United States)

    Downey, Joseph; Mortensen, Dale; Evans, Michael; Briones, Janette; Tollis, Nicholas

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed is an advanced integrated communication payload on the International Space Station. This paper presents results from an adaptive coding and modulation (ACM) experiment over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options, and uses the Space Data Link Protocol (Consultative Committee for Space Data Systems (CCSDS) standard) for the uplink and downlink data framing. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Several approaches for improving the ACM system are presented, including predictive and learning techniques to accommodate signal fades. Performance of the system is evaluated as a function of end-to-end system latency (round-trip delay), and compared to the capacity of the link. Finally, improvements over standard NASA waveforms are presented.

  11. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    Science.gov (United States)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-08-12

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants, no less than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
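
    The classical backbone of such a scheme is threshold secret sharing via Lagrange interpolation. The Python sketch below shows only that standard Shamir-style layer; the m-bonacci OAM entanglement, the reverse Huffman-Fibonacci-tree coding, and the other quantum components of the paper are not represented, and the prime modulus and share counts are arbitrary illustrative choices.

        import random

        PRIME = 2**61 - 1  # arbitrary large prime field for the illustration

        def make_shares(secret, threshold, n_shares):
            """Split `secret` into n_shares points on a random degree-(threshold-1) polynomial."""
            coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
            evaluate = lambda x: sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
            return [(x, evaluate(x)) for x in range(1, n_shares + 1)]

        def recover_secret(shares):
            """Lagrange interpolation at x = 0 recovers the secret from any `threshold` shares
            (modular inverse via pow(..., -1, PRIME) needs Python 3.8 or newer)."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = (num * -xj) % PRIME
                        den = (den * (xi - xj)) % PRIME
                secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
            return secret

        shares = make_shares(secret=123456789, threshold=3, n_shares=5)
        print(recover_secret(shares[:3]))   # any 3 of the 5 shares suffice -> 123456789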

  12. Enhancement of combined heat and power economic dispatch using self adaptive real-coded genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Subbaraj, P. [Kalasalingam University, Srivilliputhur, Tamilnadu 626 190 (India); Rengaraj, R. [Electrical and Electronics Engineering, S.S.N. College of Engineering, Old Mahabalipuram Road, Thirupporur (T.K), Kalavakkam, Kancheepuram (Dist.) 603 110, Tamilnadu (India); Salivahanan, S. [S.S.N. College of Engineering, Old Mahabalipuram Road, Thirupporur (T.K), Kalavakkam, Kancheepuram (Dist.) 603 110, Tamilnadu (India)

    2009-06-15

    In this paper, a self adaptive real-coded genetic algorithm (SARGA) is implemented to solve the combined heat and power economic dispatch (CHPED) problem. The self adaptation is achieved by means of tournament selection along with simulated binary crossover (SBX). The selection process has a powerful exploration capability by creating tournaments between two solutions. The better solution is chosen and placed in the mating pool leading to better convergence and reduced computational burden. The SARGA integrates penalty parameterless constraint handling strategy and simultaneously handles equality and inequality constraints. The population diversity is introduced by making use of distribution index in SBX operator to create a better offspring. This leads to a high diversity in population which can increase the probability towards the global optimum and prevent premature convergence. The SARGA is applied to solve CHPED problem with bounded feasible operating region which has large number of local minima. The numerical results demonstrate that the proposed method can find a solution towards the global optimum and compares favourably with other recent methods in terms of solution quality, handling constraints and computation time. (author)
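
    For reference, the simulated binary crossover operator mentioned above can be written compactly. The Python sketch below follows the standard SBX formulation, with the distribution index eta controlling how far offspring spread from their parents; the tournament selection and penalty parameterless constraint handling of SARGA are not shown.

        import numpy as np

        def sbx_crossover(parent1, parent2, eta=2.0, rng=None):
            """Standard simulated binary crossover on real-coded parents.
            A larger eta keeps children closer to their parents (less diversity)."""
            rng = np.random.default_rng() if rng is None else rng
            p1, p2 = np.asarray(parent1, float), np.asarray(parent2, float)
            u = rng.random(p1.shape)
            beta = np.where(u <= 0.5,
                            (2.0 * u) ** (1.0 / (eta + 1.0)),
                            (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
            child1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
            child2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
            return child1, child2

        # Example: crossing two candidate dispatch vectors (power outputs of two units)
        c1, c2 = sbx_crossover([100.0, 40.0], [120.0, 55.0], eta=2.0,
                               rng=np.random.default_rng(0))
        print(c1, c2)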

  13. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    Science.gov (United States)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  14. Approach to evaluating health level and adaptation possibilities in schoolchildren

    Directory of Open Access Journals (Sweden)

    O.V. Andrieieva

    2014-02-01

    Full Text Available Purpose: substantiate the results of theoretical and practical investigations aimed at improving the health of students. Material: the study involved 187 children including 103 boys and 84 girls aged 7-10 years. Results: through a rapid assessment of physical health it was found that pupils of primary school age have an average level of the functional state of the organism, with a minimum resistance to risk factors (chronic non-infective diseases, etc.). For the first time, a technique for determining the level of adaptation and reserve capacity of school students proposed by Ukrainian hygienists was used in physical culture and sports practice. Conclusions: the technique reveals strain in adaptation mechanisms that corresponds to donozological condition. An idea is proposed that Nordic walking, through the positive impact on the body of aerobic mode of energy supply, is able to increase the reserve-adaptive capabilities of primary school students by improvement of their health as well as to solve the problems of health formation and health care in the physical education of youth.

  15. Radiation transport code with adaptive Mesh Refinement: acceleration techniques and applications

    International Nuclear Information System (INIS)

    Velarde, Pedro; Garcia-Fernaandez, Carlos; Portillo, David; Barbas, Alfonso

    2011-01-01

    We present a study of acceleration techniques for solving Sn radiation transport equations with Adaptive Mesh Refinement (AMR). Both DSA and TSA are considered, taking into account the influence of the interaction between different levels of the mesh structure and the order of approximation in angle. A hybrid method is proposed in order to obtain a better convergence rate and lower computing times. Some examples relevant to ICF and X-ray secondary sources are presented. (author)

  16. Multi-stage decoding for multi-level block modulation codes

    Science.gov (United States)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
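
    A toy example makes the staged structure concrete. The Python sketch below decodes a two-level, natural-mapped 4-PAM scheme in which each level bit is protected by a length-3 repetition code: the first stage decides the level-1 bit from subset distances, and the second stage decodes the level-2 bit conditioned on that decision. The modulation, component codes, and metrics are illustrative assumptions, far simpler than the block codes analyzed in the paper.

        # Natural-mapped 4-PAM: label (b1, b2) -> channel amplitude
        POINTS = {(0, 0): -3.0, (0, 1): -1.0, (1, 0): +1.0, (1, 1): +3.0}

        def stage_metric(r, fixed_bits):
            """Squared distance from the received value r to the nearest constellation
            point whose earlier-level bits match fixed_bits, for bit = 0 and bit = 1."""
            dists = {0: float("inf"), 1: float("inf")}
            for label, amp in POINTS.items():
                if label[:len(fixed_bits)] == fixed_bits:
                    b = label[len(fixed_bits)]
                    dists[b] = min(dists[b], (r - amp) ** 2)
            return dists

        def repetition_decode(metrics):
            """Component decoder for a length-3 repetition code: sum the per-symbol
            soft metrics and pick the bit with the smaller total distance."""
            return 0 if sum(m[0] for m in metrics) <= sum(m[1] for m in metrics) else 1

        def multistage_decode(received):
            """Decode level 1 first, then level 2 conditioned on the level-1 decision."""
            decisions = []
            for _level in range(2):
                metrics = [stage_metric(r, tuple(decisions)) for r in received]
                decisions.append(repetition_decode(metrics))
            return decisions

        # Example: bits (1, 0) map to amplitude +1, sent three times over a noisy channel
        print(multistage_decode([1.4, 0.3, 0.9]))   # expected [1, 0]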

  17. PRESTO low-level waste transport and risk assessment code

    International Nuclear Information System (INIS)

    Little, C.A.; Fields, D.E.; McDowell-Boyer, L.M.; Emerson, C.J.

    1981-01-01

    PRESTO (Prediction of Radiation Effects from Shallow Trench Operations) is a computer code developed under US Environmental Protection Agency (EPA) funding to evaluate possible health effects from shallow land burial trenches. The model is intended to be generic and to assess radionuclide transport, ensuing exposure, and health impact to a static local population for a 1000-y period following the end of burial operations. Human exposure scenarios considered by the model include normal releases (including leaching and operational spillage), human intrusion, and site farming or reclamation. Pathways and processes of transit from the trench to an individual or population include: groundwater transport, overland flow, erosion, surface water dilution, resuspension, atmospheric transport, deposition, inhalation, and ingestion of contaminated beef, milk, crops, and water. Both population doses and individual doses are calculated as well as doses to the intruder and farmer. Cumulative health effects in terms of deaths from cancer are calculated for the population over the thousand-year period using a life-table approach. Data bases are being developed for three extant shallow land burial sites: Barnwell, South Carolina; Beatty, Nevada; and West Valley, New York

  18. Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

    Directory of Open Access Journals (Sweden)

    Saeed Mian Qaisar

    2009-01-01

    Full Text Available The recent sophistication in the areas of mobile systems and sensor networks demands more and more processing resources. In order to maintain system autonomy, energy saving is becoming one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal are focused on improving the embedded system design and the battery technology, but very few studies aim to exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. It is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. Indeed, the principle is to intelligently exploit the signal's local characteristics—which are usually never considered—to filter only the relevant signal parts, by employing filters of the relevant order. This idea leads towards a drastic gain in computational efficiency and hence in processing power when compared to classical techniques.
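
    The level-crossing principle that drives the adaptive rate can be stated in a few lines. The Python sketch below emits a sample only when the signal crosses one of a fixed set of reference levels, using linear interpolation for the crossing instant; the adaptive filter-order selection described in the paper is not shown.

        import numpy as np

        def level_crossing_sample(t, x, levels):
            """Return (time, level) pairs where the signal crosses a reference level,
            using linear interpolation between the two bracketing samples."""
            out = []
            for i in range(1, len(x)):
                lo, hi = sorted((x[i - 1], x[i]))
                for level in levels:
                    if lo < level <= hi or lo <= level < hi:
                        frac = (level - x[i - 1]) / (x[i] - x[i - 1])
                        out.append((t[i - 1] + frac * (t[i] - t[i - 1]), level))
            return out

        # Example: a bursty signal produces many samples in its active part and few elsewhere
        t = np.linspace(0.0, 1.0, 1000)
        x = np.where(t < 0.5, 0.05 * np.sin(2 * np.pi * 2 * t), np.sin(2 * np.pi * 20 * t))
        levels = np.linspace(-0.9, 0.9, 7)
        print(len(level_crossing_sample(t, x, levels)), "samples for", len(t), "uniform points")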

  19. Coding of level of ambiguity within neural systems mediating choice.

    Science.gov (United States)

    Lopez-Paniagua, Dan; Seger, Carol A

    2013-01-01

    Data from previous neuroimaging studies exploring neural activity associated with uncertainty suggest varying levels of activation associated with changing degrees of uncertainty in neural regions that mediate choice behavior. The present study used a novel task that parametrically controlled the amount of information hidden from the subject; levels of uncertainty ranged from full ambiguity (no information about probability of winning) through multiple levels of partial ambiguity, to a condition of risk only (zero ambiguity with full knowledge of the probability of winning). A parametric analysis compared a linear model in which weighting increased as a function of level of ambiguity, and an inverted-U quadratic model in which partial ambiguity conditions were weighted most heavily. Overall we found that risk and all levels of ambiguity recruited a common "fronto-parietal-striatal" network including regions within the dorsolateral prefrontal cortex, intraparietal sulcus, and dorsal striatum. Activation was greatest across these regions and additional anterior and superior prefrontal regions for the quadratic function, which most heavily weights trials with partial ambiguity. These results suggest that the neural regions involved in decision processes do not merely track the absolute degree of ambiguity or type of uncertainty (risk vs. ambiguity). Instead, recruitment of prefrontal regions may result from a greater degree of difficulty in conditions of partial ambiguity: when information regarding reward probabilities important for decision making is hidden or not easily obtained, the subject must engage in a search for tractable information. Additionally, this study identified regions of activity related to the valuation of potential gains associated with stimuli or options (including the orbitofrontal and medial prefrontal cortices and dorsal striatum) and related to winning (including orbitofrontal cortex and ventral striatum).

  20. Adaptation.

    Science.gov (United States)

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  1. Impact of the Level of State Tax Code Progressivity on Children's Health Outcomes

    Science.gov (United States)

    Granruth, Laura Brierton; Shields, Joseph J.

    2011-01-01

    This research study examines the impact of the level of state tax code progressivity on selected children's health outcomes. Specifically, it examines the degree to which a state's tax code ranking along the progressive-regressive continuum relates to percentage of low birthweight babies, infant and child mortality rates, and percentage of…

  2. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    Directory of Open Access Journals (Sweden)

    Chia-Chang Hu

    2005-04-01

    Full Text Available A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed that is based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, 𝒪(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or the subspace-based eigenstructure analysis is a function of 𝒪((JNS)³). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size of the J-element antenna array, the amount of the L-sample support, and the rank of the M-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.

  3. Adaptive colour contrast coding in the salamander retina efficiently matches natural scene statistics.

    Directory of Open Access Journals (Sweden)

    Genadiy Vasserman

    Full Text Available The visual system continually adjusts its sensitivity to the statistical properties of the environment through an adaptation process that starts in the retina. Colour perception and processing is commonly thought to occur mainly in high visual areas, and indeed most evidence for chromatic colour contrast adaptation comes from cortical studies. We show that colour contrast adaptation starts in the retina where ganglion cells adjust their responses to the spectral properties of the environment. We demonstrate that the ganglion cells match their responses to red-blue stimulus combinations according to the relative contrast of each of the input channels by rotating their functional response properties in colour space. Using measurements of the chromatic statistics of natural environments, we show that the retina balances inputs from the two (red and blue) stimulated colour channels, as would be expected from theoretical optimal behaviour. Our results suggest that colour is encoded in the retina based on the efficient processing of spectral information that matches spectral combinations in natural scenes on the colour processing level.

  4. Adaptive Iterative Soft-Input Soft-Output Parallel Decision-Feedback Detectors for Asynchronous Coded DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Zhang Wei

    2005-01-01

    Full Text Available The optimum and many suboptimum iterative soft-input soft-output (SISO) multiuser detectors require a priori information about the multiuser system, such as the users' transmitted signature waveforms, relative delays, as well as the channel impulse response. In this paper, we employ adaptive algorithms in the SISO multiuser detector in order to avoid the need for this a priori information. First, we derive the optimum SISO parallel decision-feedback detector for asynchronous coded DS-CDMA systems. Then, we propose two adaptive versions of this SISO detector, which are based on the normalized least mean square (NLMS) and recursive least squares (RLS) algorithms. Our SISO adaptive detectors effectively exploit the a priori information of coded symbols, whose soft inputs are obtained from a bank of single-user decoders. Furthermore, we consider how to select practical finite feedforward and feedback filter lengths to obtain a good tradeoff between the performance and computational complexity of the receiver.
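
    As a reminder of what the NLMS variant adapts at each symbol, a minimal real-valued normalized LMS update is sketched below in Python; the actual detector operates on complex chip-rate data and uses soft symbols fed back from the single-user decoders, which is not shown here.

        import numpy as np

        def nlms_step(w, x, d, mu=0.5, eps=1e-6):
            """One normalized LMS update.
            w: filter weights, x: input (regressor) vector, d: desired (soft) symbol."""
            y = np.dot(w, x)                    # filter output
            e = d - y                           # error against the desired symbol
            w = w + mu * e * x / (np.dot(x, x) + eps)
            return w, e

        # Example: identify a 4-tap channel from noisy observations
        rng = np.random.default_rng(0)
        true_w = np.array([0.6, -0.3, 0.1, 0.05])
        w = np.zeros(4)
        for _ in range(2000):
            x = rng.standard_normal(4)
            d = true_w @ x + 0.01 * rng.standard_normal()
            w, _ = nlms_step(w, x, d)
        print(np.round(w, 2))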

  5. Adapting to Rising Sea Level: A Florida Perspective

    Science.gov (United States)

    Parkinson, Randall W.

    2009-07-01

    Global climate change and concomitant rising sea level will have a profound impact on Florida's coastal and marine systems. Sea-level rise will increase erosion of beaches, cause saltwater intrusion into water supplies, inundate coastal marshes and other important habitats, and make coastal property more vulnerable to erosion and flooding. Yet most coastal areas are currently managed under the premise that sea-level rise is not significant and the shorelines are static or can be fixed in place by engineering structures. The new reality of sea-level rise and extreme weather due to climate change requires a new style of planning and management to protect resources and reduce risk to humans. Scientists must: (1) assess existing coastal vulnerability to address short term management issues and (2) model future landscape change and develop sustainable plans to address long term planning and management issues. Furthermore, this information must be effectively transferred to planners, managers, and elected officials to ensure their decisions are based upon the best available information. While there is still some uncertainty regarding the details of rising sea level and climate change, development decisions are being made today which commit public and private investment in real estate and associated infrastructure. With a design life of 30 yrs to 75 yrs or more, many of these investments are on a collision course with rising sea level and the resulting impacts will be significant. In the near term, the utilization of engineering structures may be required, but these are not sustainable and must ultimately yield to "managed withdrawal" programs if higher sea-level elevations or rates of rise are forthcoming. As an initial step towards successful adaptation, coastal management and planning documents (i.e., comprehensive plans) must be revised to include reference to climate change and rising sea-level.

  6. Integrating conservation costs into sea level rise adaptive conservation prioritization

    Directory of Open Access Journals (Sweden)

    Mingjian Zhu

    2015-07-01

    Full Text Available Biodiversity conservation requires strategic investment as resources for conservation are often limited. As sea level rises, it is important and necessary to consider both sea level rise and costs in conservation decision making. In this study, we consider costs of conservation in an integrated modeling process that incorporates a geomorphological model (SLAMM), species habitat models, and conservation prioritization (Zonation) to identify conservation priorities in the face of landscape dynamics due to sea level rise in the Matanzas River basin of northeast Florida. Compared to conservation priorities that do not consider land costs in the analysis process, conservation priorities that consider costs in the planning process change significantly. The comparison demonstrates that some areas with high conservation values might be identified as lower priorities when integrating economic costs in the planning process and some areas with low conservation values might be identified as high priorities when considering costs in the planning process. This research could help coastal resources managers make informed decisions about where and how to allocate conservation resources more wisely to facilitate biodiversity adaptation to sea level rise.

  7. Coupling Legacy and Contemporary Deterministic Codes to Goldsim for Probabilistic Assessments of Potential Low-Level Waste Repository Sites

    Science.gov (United States)

    Mattie, P. D.; Knowlton, R. G.; Arnold, B. W.; Tien, N.; Kuo, M.

    2006-12-01

    Sandia National Laboratories (Sandia), a U.S. Department of Energy National Laboratory, has over 30 years experience in radioactive waste disposal and is providing assistance internationally in a number of areas relevant to the safety assessment of radioactive waste disposal systems. International technology transfer efforts are often hampered by small budgets, time schedule constraints, and a lack of experienced personnel in countries with small radioactive waste disposal programs. In an effort to surmount these difficulties, Sandia has developed a system that utilizes a combination of commercially available codes and existing legacy codes for probabilistic safety assessment modeling that facilitates the technology transfer and maximizes limited available funding. Numerous codes developed and endorsed by the United States Nuclear Regulatory Commission and codes developed and maintained by United States Department of Energy are generally available to foreign countries after addressing import/export control and copyright requirements. From a programmatic view, it is easier to utilize existing codes than to develop new codes. From an economic perspective, it is not possible for most countries with small radioactive waste disposal programs to maintain complex software, which meets the rigors of both domestic regulatory requirements and international peer review. Therefore, re-vitalization of deterministic legacy codes, as well as an adaptation of contemporary deterministic codes, provides a creditable and solid computational platform for constructing probabilistic safety assessment models. External model linkage capabilities in Goldsim and the techniques applied to facilitate this process will be presented using example applications, including Breach, Leach, and Transport-Multiple Species (BLT-MS), a U.S. NRC sponsored code simulating release and transport of contaminants from a subsurface low-level waste disposal facility used in a cooperative technology transfer

  8. System Level Evaluation of Innovative Coded MIMO-OFDM Systems for Broadcasting Digital TV

    Directory of Open Access Journals (Sweden)

    Y. Nasser

    2008-01-01

    Full Text Available Single-frequency networks (SFNs) for broadcasting digital TV is a topic of theoretical and practical interest for future broadcasting systems. Although progress has been made in the characterization of its description, there are still considerable gaps in its deployment with MIMO technique. The contribution of this paper is multifold. First, we investigate the possibility of applying a space-time (ST) encoder between the antennas of two sites in SFN. Then, we introduce a 3D space-time-space block code for future terrestrial digital TV in SFN architecture. The proposed 3D code is based on a double-layer structure designed for intercell and intracell space time-coded transmissions. Eventually, we propose to adapt a technique called effective exponential signal-to-noise ratio (SNR) mapping (EESM) to predict the bit error rate (BER) at the output of the channel decoder in the MIMO systems. The EESM technique as well as the simulations results will be used to doubly check the efficiency of our 3D code. This efficiency is obtained for equal and unequal received powers whatever is the location of the receiver by adequately combining ST codes. The 3D code is then a very promising candidate for SFN architecture with MIMO transmission.

  9. Using machine-coded event data for the micro-level study of political violence

    Directory of Open Access Journals (Sweden)

    Jesse Hammond

    2014-07-01

    Full Text Available Machine-coded datasets likely represent the future of event data analysis. We assess the use of one of these datasets—Global Database of Events, Language and Tone (GDELT)—for the micro-level study of political violence by comparing it to two hand-coded conflict event datasets. Our findings indicate that GDELT should be used with caution for geo-spatial analyses at the subnational level: its overall correlation with hand-coded data is mediocre, and at the local level major issues of geographic bias exist in how events are reported. Overall, our findings suggest that due to these issues, researchers studying local conflict processes may want to wait for a more reliable geocoding method before relying too heavily on this set of machine-coded data.

  10. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    International Nuclear Information System (INIS)

    McGrail, B.P.; Mahoney, L.A.

    1995-10-01

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical process expected to affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the ARES-CT code developed at PNL for the US Department of Energy for evaluation of and land disposal sites

  11. Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Budzien, Joanne Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ferguson, Jim Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Harwell, Megan Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hickmann, Kyle Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Israel, Daniel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Magrogan, William Richard III [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Singleton, Jr., Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Srinivasan, Gowri [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Walter, Jr, John William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Woods, Charles Nathan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-26

    This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.

  12. CESARR V.2 manual: Computer code for the evaluation of surface storage of low and medium level radioactive waste

    International Nuclear Information System (INIS)

    Moya Rivera, J.A.; Bolado Lavin, R.

    1997-01-01

    CESARR (Code for the safety evaluation of low- and medium-level radioactive waste storage). This code was developed for probabilistic safety evaluations of low- and medium-level radioactive waste storage facilities

  13. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1999-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard the conditioning may be specified by a template.For better compression, the more general...... to the specialized soft pattern matching techniques which work better for text. Template based refinement coding is applied for lossy-to-lossless refinement. Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless......, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bi-level images....
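
    To make the template idea concrete, the Python sketch below forms a causal-template context index for every pixel of a bi-level image; a context-adaptive arithmetic coder would then code each pixel with the probability estimate attached to its context. The 10-pixel template layout used here is an assumption for illustration and is not necessarily one of the layouts used by JBIG or JBIG2.

        import numpy as np

        # Causal 10-pixel template: two rows above plus two pixels to the left (assumed layout)
        TEMPLATE = [(-2, -1), (-2, 0), (-2, 1),
                    (-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2),
                    (0, -2), (0, -1)]

        def template_contexts(img):
            """Return, for every pixel, the integer context formed by the template bits.
            A context-adaptive arithmetic coder keeps one probability estimate per context."""
            h, w = img.shape
            padded = np.pad(img, ((2, 0), (2, 2)), mode="constant")   # zeros outside the image
            contexts = np.zeros((h, w), dtype=np.int32)
            for y in range(h):
                for x in range(w):
                    ctx = 0
                    for dy, dx in TEMPLATE:
                        ctx = (ctx << 1) | int(padded[y + 2 + dy, x + 2 + dx])
                    contexts[y, x] = ctx
            return contexts

        # Example: count how many of the possible contexts a structured toy image uses
        img = (np.indices((32, 32)).sum(axis=0) % 7 < 3).astype(np.uint8)
        ctx = template_contexts(img)
        print("distinct contexts used:", len(np.unique(ctx)), "of", 2 ** len(TEMPLATE))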

  14. POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs

    International Nuclear Information System (INIS)

    Hardie, R.W.

    1982-02-01

    POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case
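
    The core levelized life-cycle cost computation can be illustrated with a generic discounting formula. The Python sketch below is a textbook levelized-cost calculation under assumed inputs, not a reproduction of POPCYCLE's equations or input structure.

        def levelized_cost(capital, annual_om, annual_fuel, annual_energy_kwh,
                           discount_rate, lifetime_years):
            """Discounted lifetime costs divided by discounted lifetime generation ($/kWh)."""
            disc_costs = float(capital)          # capital assumed spent at year 0
            disc_energy = 0.0
            for year in range(1, lifetime_years + 1):
                d = (1.0 + discount_rate) ** year
                disc_costs += (annual_om + annual_fuel) / d
                disc_energy += annual_energy_kwh / d
            return disc_costs / disc_energy

        # Example: 1000 MW plant, 80% capacity factor, illustrative cost assumptions
        energy = 1000e3 * 8760 * 0.8             # kWh generated per year
        print(round(levelized_cost(2.5e9, 80e6, 120e6, energy, 0.07, 30), 4), "$/kWh")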

  15. Efficient Spike-Coding with Multiplicative Adaptation in a Spike Response Model

    NARCIS (Netherlands)

    S.M. Bohte (Sander)

    2012-01-01

    Neural adaptation underlies the ability of neurons to maximize encoded information over a wide dynamic range of input stimuli. While adaptation is an intrinsic feature of neuronal models like the Hodgkin-Huxley model, the challenge is to integrate adaptation in models of neural

  16. Fifty years of illumination about the natural levels of adaptation

    DEFF Research Database (Denmark)

    Boomsma, Jacobus Jan

    2016-01-01

    A visionary Darwinian ahead of his time, George C. Williams developed in his 1966 book Adaptation and Natural Selection the essentials of a unifying theory of adaptation that remains robust today and has inspired immense progress in understanding how natural selection works.

  17. Adaptation to the Impacts of Sea Level Rise in Egypt

    International Nuclear Information System (INIS)

    El-Raey, M.; Dewidar, K.R.; El-Hattab, M.

    1999-01-01

    Assessment of the vulnerability and expected socioeconomic losses over the Nile delta coast due to the impact of sea level rise is carried out in detail. Impacts of sea level rise on the Governorates of Alexandria and Port Said, in particular, are evaluated quantitatively. Analysis of the results for Alexandria Governorate indicates that, if no action is taken, an area of about 30% of the city will be lost due to inundation. Almost 2 million people will have to abandon their homeland, 195,000 jobs will be lost, and an economic loss of over $3.5 billion is expected over the next century. For Port Said Governorate, results indicate that beach areas (and hence tourism) are most severely affected, followed by urban areas. The agriculture sector is the least affected. It is estimated that the economic loss is over $2.0 billion for 0.50 m SLR and may exceed $4.4 billion for 1.25 m SLR. Options and costs of adaptation are analyzed and presented. Multi-criteria and decision matrix approaches, based on questionnaire surveys, are carried out to identify priorities for the two cases. Analysis with these techniques shows that two options, the current policy (hard protection measures on some vulnerable areas) and no action (stopping these activities), have the lowest scores. Beach nourishment and integrated coastal zone management (ICZM) have the highest scores, although ICZM involves high-cost measures. The most cost-effective option is land-use change, although it too carries a relatively high cost. It is recommended that an ICZM approach be adopted since it provides a reasonable trade-off between costs and cost effectiveness. 14 refs

  18. Adaptive Reference Levels in a Level-Crossing Analog-to-Digital Converter

    Directory of Open Access Journals (Sweden)

    Andrew C. Singer

    2008-11-01

    Level-crossing analog-to-digital converters (LC ADCs) have been considered in the literature and have been shown to efficiently sample certain classes of signals. One important aspect of their implementation is the placement of reference levels in the converter. The levels need to be appropriately located within the input dynamic range in order to obtain samples efficiently. In this paper, we study optimization of the performance of such an LC ADC by providing several sequential algorithms that adaptively update the ADC reference levels. The accompanying performance analysis and simulation results show that as the signal length grows, the performance of the sequential algorithms asymptotically approaches that of the best choice that could only have been chosen in hindsight within a family of possible schemes.
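    The effect of adapting the reference levels can be illustrated with a small sketch (Python). This is not one of the paper's algorithms: it simply re-places the levels at quantiles of previously observed amplitudes, so that levels concentrate where the signal actually dwells, and counts the level-crossing samples obtained.

        import numpy as np

        def level_crossing_samples(x, levels):
            """(index, level) pairs where the signal crosses a reference level."""
            hits = []
            for i in range(1, len(x)):
                for lv in levels:
                    if (x[i - 1] - lv) * (x[i] - lv) < 0:   # sign change => crossing
                        hits.append((i, lv))
            return hits

        def adapt_levels(amplitude_history, n_levels):
            """Place levels at equally spaced quantiles of the observed amplitudes."""
            return np.quantile(amplitude_history, np.linspace(0.05, 0.95, n_levels))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            t = np.linspace(0.0, 1.0, 2000)
            x = 0.7 * np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)
            uniform = np.linspace(-1.0, 1.0, 8)        # fixed, uniformly spaced levels
            adapted = adapt_levels(x, 8)               # levels adapted to the signal
            print("crossings, uniform levels:", len(level_crossing_samples(x, uniform)))
            print("crossings, adapted levels:", len(level_crossing_samples(x, adapted)))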

  19. Adapt

    Science.gov (United States)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored using the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  20. GRABGAM: A Gamma Analysis Code for Ultra-Low-Level HPGe SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    Winn, W.G.

    1999-07-28

    The GRABGAM code has been developed for the analysis of ultra-low-level HPGe gamma spectra. The code employs three different size filters for the peak search, where the largest filter provides the best sensitivity for identifying low-level peaks and the smallest filter has the best resolution for distinguishing peaks within a multiplet. GRABGAM basically generates an integral probability F-function for each singlet or multiplet peak analysis, bypassing the usual peak-fitting analysis with a differential f-function probability model. Because F is defined by the peak data, statistical limitations of peak fitting are avoided; however, the F-function does provide generic values for peak centroid, full width at half maximum, and tail that are consistent with a Gaussian formalism. GRABGAM has successfully analyzed over 10,000 customer samples, and it interfaces with a variety of supplementary codes for deriving detector efficiencies, backgrounds, and quality checks.
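    The multi-filter peak search strategy can be pictured with a toy spectrum (Python). This is not GRABGAM's algorithm: it simply smooths a synthetic HPGe-like spectrum with boxcar filters of three widths and flags local maxima above a fixed threshold, showing how the filter width trades sensitivity against resolution.

        import numpy as np

        def smooth(counts, width):
            """Boxcar filter of the given width (a stand-in for a search filter)."""
            return np.convolve(counts, np.ones(width) / width, mode="same")

        def find_peaks(counts, width, threshold):
            """Channels that are local maxima of the smoothed spectrum above threshold."""
            s = smooth(counts, width)
            return [i for i in range(1, len(s) - 1)
                    if s[i] > s[i - 1] and s[i] > s[i + 1] and s[i] > threshold]

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            chan = np.arange(1024)
            expected = (5.0                                                  # flat background
                        + 40.0 * np.exp(-0.5 * ((chan - 300) / 3.0) ** 2)    # strong peak
                        + 8.0 * np.exp(-0.5 * ((chan - 700) / 3.0) ** 2))    # weak peak
            spectrum = rng.poisson(expected)                                 # counting statistics
            for width in (3, 7, 15):                                         # small to large filter
                print("filter width", width, "->", find_peaks(spectrum, width, threshold=10.0))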

  1. Psacoin level 1A intercomparison probabilistic system assessment code (PSAC) user group

    International Nuclear Information System (INIS)

    Nies, A.; Laurens, J.M.; Galson, D.A.; Webster, S.

    1990-01-01

    This report describes an international code intercomparison exercise conducted by the NEA Probabilistic System Assessment Code (PSAC) User Group. The PSACOIN Level 1A exercise is the third of a series designed to contribute to the verification of probabilistic codes that may be used in assessing the safety of radioactive waste disposal systems or concepts. Level 1A is based on a more realistic system model than that used in the two previous exercises, and involves deep geological disposal concepts with a relatively complex structure of the repository vault. The report compares results and draws conclusions with regard to the use of different modelling approaches and the possible importance to safety of various processes within and around a deep geological repository. In particular, the relative significance of model uncertainty and data variability is discussed

  2. Anti-voice adaptation suggests prototype-based coding of voice identity

    Directory of Open Access Journals (Sweden)

    Marianne Latinus

    2011-07-01

    We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices and were then tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice, but not for non-anti-voice, adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype.

  3. Adaptation and implementation of the TRACE code for transient analysis in designs of lead-cooled fast reactors

    International Nuclear Information System (INIS)

    Lazaro, A.; Ammirabile, L.; Martorell, S.

    2015-01-01

    The Lead-Cooled Fast Reactor (LFR) has been identified as one of the promising future reactor concepts in the technology roadmap of the Generation IV International Forum (GIF) as well as in the Deployment Strategy of the European Sustainable Nuclear Industrial Initiative (ESNII), both aiming at improved sustainability, enhanced safety, economic competitiveness, and proliferation resistance. This new nuclear reactor concept requires the development of computational tools to be applied in design and safety assessments to confirm the improved inherent and passive safety features of this design. One approach to this issue is to modify current computational codes developed for the simulation of Light Water Reactors towards their applicability to the new designs. This paper reports on the modifications performed on the TRACE system code to make it applicable to LFR safety assessments. The capabilities of the modified code are demonstrated on a series of benchmark exercises performed against other safety analysis codes. (Author)

  4. Adaptive Network Coded Clouds: High Speed Downloads and Cost-Effective Version Control

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Heide, Janus; Roetter, Daniel Enrique Lucani

    2018-01-01

    Although cloud systems provide a reliable and flexible storage solution, the use of a single cloud service constitutes a single point of failure, which can compromise data availability, download speed, and security. To address these challenges, we advocate for the use of multiple cloud storage providers simultaneously, using network coding as the key enabling technology. Our goal is to study two challenges of network coded storage systems. First, the efficient update of the number of coded fragments per cloud in a system aggregating multiple clouds in order to boost the download speed of files. We developed a novel scheme using recoding with limited packets to trade off storage space, reliability, and data retrieval speed. Implementation and measurements with commercial cloud providers show that up to 9x less network use is needed compared to other network coding schemes, while maintaining similar...
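    The storage idea can be sketched with a random-linear-coding toy example (Python). The field size, fragment counts, and cloud names below are invented and the scheme is generic, not the recoding scheme of the paper: a file is split into k source fragments, each cloud stores random linear combinations of them, and any k independent coded fragments recover the file even if a provider is unreachable.

        import numpy as np

        P = 257  # prime field just large enough to hold byte values

        def modp_solve(C, Y, p=P):
            """Solve C X = Y (mod p) for X, given m >= k independent coded fragments."""
            C, Y = C % p, Y % p
            m, k = C.shape
            row = 0
            for col in range(k):
                piv = next((r for r in range(row, m) if C[r, col]), None)
                if piv is None:
                    raise ValueError("not enough independent fragments to decode")
                C[[row, piv]] = C[[piv, row]]
                Y[[row, piv]] = Y[[piv, row]]
                inv = pow(int(C[row, col]), p - 2, p)        # modular inverse of the pivot
                C[row], Y[row] = C[row] * inv % p, Y[row] * inv % p
                for r in range(m):
                    if r != row and C[r, col]:
                        f = C[r, col]
                        C[r] = (C[r] - f * C[row]) % p
                        Y[r] = (Y[r] - f * Y[row]) % p
                row += 1
            return Y[:k]

        if __name__ == "__main__":
            rng = np.random.default_rng(7)
            k, frag_len = 4, 16
            source = rng.integers(0, 256, size=(k, frag_len)).astype(np.int64)

            # Hypothetical allocation: more coded fragments on faster/cheaper clouds.
            clouds = {"cloudA": 4, "cloudB": 3, "cloudC": 3}
            stored = {}
            for name, count in clouds.items():
                coeffs = rng.integers(1, P, size=(count, k)).astype(np.int64)
                stored[name] = (coeffs, coeffs @ source % P)     # random linear combinations

            remaining = [n for n in clouds if n != "cloudA"]     # one provider is offline
            C = np.vstack([stored[n][0] for n in remaining])
            Y = np.vstack([stored[n][1] for n in remaining])
            print("decoded correctly:", bool((modp_solve(C, Y) == source).all()))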

  5. An Adaptation of the HELIOS/MASTER Code System to the Analysis of VHTR Cores

    International Nuclear Information System (INIS)

    Noh, Jae Man; Lee, Hyun Chul; Kim, Kang Seog; Kim, Yong Hee

    2006-01-01

    KAERI is developing a new computer code system for the analysis of VHTR cores based on the existing HELIOS/MASTER code system, which was originally developed for LWR core analysis. In VHTR reactor physics, there are several unique neutronic characteristics that cannot be handled easily by the conventional computer code systems applied to LWR core analysis. Typical examples of such characteristics are the double heterogeneity problem due to the particulate fuels, the effects of a spectrum shift and thermal up-scattering due to the graphite moderator, and a strong fuel/reflector interaction. In order to facilitate an easy treatment of such characteristics, we developed some methodologies for the HELIOS/MASTER code system and tested their applicability to the VHTR core analysis.

  6. Scalable Stream Coding for Adaptive Foveation Enhanced Percept Multimedia Information Communication for Interactive Medical Applications

    National Research Council Canada - National Science Library

    Khan, Javed

    2003-01-01

    ... The demonstrated systems include interactive perceptual transcoding, where real-time eye-tracker data fuses with a passing stream, and the active subnet diffusion coding, where multiple active nodes...

  7. Adaptive Multi-Layered Space-Time Block Coded Systems in Wireless Environments

    KAUST Repository

    Al-Ghadhban, Samir

    2014-01-01

    © 2014, Springer Science+Business Media New York. Multi-layered space-time block coded systems (MLSTBC) strike a balance between spatial multiplexing and transmit diversity. In this paper, we analyze the block error rate performance of MLSTBC

  8. Effect of a care plan based on Roy adaptation model biological dimension on stroke patients' physiologic adaptation level.

    Science.gov (United States)

    Alimohammadi, Nasrollah; Maleki, Bibi; Shahriari, Mohsen; Chitsaz, Ahmad

    2015-01-01

    Stroke is a stressful event with several functional, physical, psychological, social, and economic problems that affect individuals' different living balances. With coping strategies, patients try to control these problems and return to their natural life. The aim of this study is to investigate the effect of a care plan based on the Roy adaptation model biological dimension on stroke patients' physiologic adaptation level. This study is a clinical trial in which 50 patients affected by brain stroke and admitted to the neurology wards of Kashani and Alzahra hospitals in Isfahan in 2013 were randomly assigned to control and study groups. The Roy adaptation model care plan was administered in the biological dimension in the form of four sessions and phone-call follow-ups for 1 month. The forms related to the Roy adaptation model were completed before and after intervention in the two groups. Chi-square test and t-test were used to analyze the data through SPSS 18. There was a significant difference in mean score of adaptation in physiological dimension in the study group after intervention (P adaptation in the patients affected by brain stroke in the study and control groups showed a significant increase in physiological dimension in the study group by 47.30 after intervention (P adaptation model biological dimension care plan can result in an increase in adaptation in patients with stroke in the physiological dimension. Nurses can use this model for increasing patients' adaptation.

  9. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth the current level. Coding scheme paves the way for true electronic mail in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages is improved over messages transmitted by conventional coding. Coding scheme is compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme is automatically translated to word-processor form.
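    As a generic illustration of noiseless coding of small symbol values (not necessarily the coder used in USEEM), the sketch below (Python) builds Golomb-Rice codewords, in which a quotient is sent in unary followed by a k-bit remainder; small prediction residuals then occupy few bits.

        def rice_encode(values, k):
            """Golomb-Rice code: quotient in unary, then k-bit binary remainder."""
            out = []
            for v in values:
                q, r = v >> k, v & ((1 << k) - 1)
                out.append("1" * q + "0" + format(r, f"0{k}b"))
            return "".join(out)

        if __name__ == "__main__":
            residuals = [0, 3, 1, 7, 2, 0, 5]       # small nonnegative residuals
            for k in (1, 2, 3):                     # the parameter k can be adapted to the data
                code = rice_encode(residuals, k)
                print(f"k={k}: {len(code)} bits  {code}")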

  10. A Tough Call : Mitigating Advanced Code-Reuse Attacks at the Binary Level

    NARCIS (Netherlands)

    Veen, Victor Van Der; Goktas, Enes; Contag, Moritz; Pawoloski, Andre; Chen, Xi; Rawat, Sanjay; Bos, Herbert; Holz, Thorsten; Athanasopoulos, Ilias; Giuffrida, Cristiano

    2016-01-01

    Current binary-level Control-Flow Integrity (CFI) techniques are weak in determining the set of valid targets for indirect control flow transfers on the forward edge. In particular, the lack of source code forces existing techniques to resort to a conservative address-taken policy that

  11. Nifty Native Implemented Functions: low-level meets high-level code

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Erlang Native Implemented Functions (NIFs) allow developers to implement functions in C (or C++) rather than Erlang. NIFs are useful for integrating high performance or legacy code in Erlang applications. The talk will cover how to implement NIFs, use cases, and common pitfalls when employing them. Further, we will discuss how and why Erlang applications, such as Riak, use NIFs. About the speaker Ian Plosker is the Technical Lead, International Operations at Basho Technologies, the makers of the open source database Riak. He has been developing software professionally for 10 years and programming since childhood. Prior to working at Basho, he developed everything from CMS to bioinformatics platforms to corporate competitive intelligence management systems. At Basho, he's been helping customers be incredibly successful using Riak.

  12. Reconstruction based finger-knuckle-print verification with score level adaptive binary fusion.

    Science.gov (United States)

    Gao, Guangwei; Zhang, Lei; Yang, Jian; Zhang, Lin; Zhang, David

    2013-12-01

    Recently, a new biometrics identifier, namely the finger knuckle print (FKP), has been proposed for personal authentication with very interesting results. One of the advantages of FKP verification lies in its user friendliness in data collection. However, the user flexibility in positioning fingers also leads to a certain degree of pose variation in the collected query FKP images. The widely used Gabor filtering based competitive coding scheme is sensitive to such variations, resulting in many false rejections. We propose to alleviate this problem by reconstructing the query sample with a dictionary learned from the template samples in the gallery set. The reconstructed FKP image can greatly reduce the enlarged matching distance caused by finger pose variations; however, both the intra-class and inter-class distances will be reduced. We then propose a score level adaptive binary fusion rule to adaptively fuse the matching distances before and after reconstruction, aiming to reduce the false rejections without greatly increasing the false acceptances. Experimental results on the benchmark PolyU FKP database show that the proposed method significantly improves the FKP verification accuracy.
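    The fusion step can be pictured with a toy decision rule (Python). The rule and threshold below are invented stand-ins, not the paper's: the distance computed on the reconstructed query is used only when the original distance is large enough to suggest a pose-induced mismatch, which keeps impostor (inter-class) distances from being pulled down unnecessarily.

        def adaptive_binary_fusion(dist_before, dist_after, switch_threshold=0.45):
            """Illustrative score-level adaptive binary fusion of two matching distances."""
            if dist_before <= switch_threshold:     # already a good match: keep the raw score
                return dist_before
            return min(dist_before, dist_after)     # otherwise trust the reconstructed match

        if __name__ == "__main__":
            # genuine pair distorted by finger pose: large raw distance, small after reconstruction
            print(adaptive_binary_fusion(0.62, 0.31))   # -> 0.31, false rejection avoided
            # impostor pair: reconstruction lowers the distance less
            print(adaptive_binary_fusion(0.70, 0.58))   # -> 0.58, still a non-match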

  13. On the feedback error compensation for adaptive modulation and coding scheme

    KAUST Repository

    Choi, Seyeong

    2011-11-25

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify the performance of two joint AMDC schemes in the presence of feedback error, in terms of the average spectral efficiency, the average number of combined paths, and the average bit error rate. The benefit of feedback error compensation with adaptive combining is also quantified. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. Copyright (c) 2011 John Wiley & Sons, Ltd.

  14. On the feedback error compensation for adaptive modulation and coding scheme

    KAUST Repository

    Choi, Seyeong; Yang, Hong-Chuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we consider the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of perfect feedback channels. We quantify

  15. A multi-level code for metallurgical effects in metal-forming processes

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, P.A.; Silling, S.A. [Sandia National Labs., Albuquerque, NM (United States). Computational Physics and Mechanics Dept.; Hughes, D.A.; Bammann, D.J.; Chiesa, M.L. [Sandia National Labs., Livermore, CA (United States)

    1997-08-01

    The authors present the final report on a Laboratory-Directed Research and Development (LDRD) project, A Multi-level Code for Metallurgical Effects in Metal-Forming Processes, performed during the fiscal years 1995 and 1996. The project focused on the development of new modeling capabilities for simulating forging and extrusion processes that typically display phenomenology occurring on two different length scales. In support of model fitting and code validation, ring compression and extrusion experiments were performed on 304L stainless steel, a material of interest in DOE nuclear weapons applications.

  16. Link adaptation algorithm for distributed coded transmissions in cooperative OFDMA systems

    DEFF Research Database (Denmark)

    Varga, Mihaly; Badiu, Mihai Alin; Bota, Vasile

    2015-01-01

    This paper proposes a link adaptation algorithm for cooperative transmissions in the down-link connection of an OFDMA-based wireless system. The algorithm aims at maximizing the spectral efficiency of a relay-aided communication link, while satisfying the block error rate constraints at both ... adaptation algorithm has linear complexity with the number of available resource blocks, while still providing very good performance, as shown by simulation results.
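    The per-resource-block decision involved can be sketched as follows (Python). The MCS table, BLER model, and target are invented, and the relay link that the paper's algorithm also accounts for is ignored: the highest-rate modulation-and-coding scheme whose predicted block error rate stays below the target is chosen for each resource block.

        import math

        MCS = [  # (name, spectral efficiency in bit/s/Hz, SNR in dB needed for low BLER)
            ("QPSK 1/2", 1.0, 4.0),
            ("16QAM 1/2", 2.0, 10.0),
            ("16QAM 3/4", 3.0, 14.0),
            ("64QAM 3/4", 4.5, 19.0),
        ]

        def bler(snr_db, snr_req_db):
            """Crude logistic block-error-rate model around the required SNR."""
            return 1.0 / (1.0 + math.exp(3.0 * (snr_db - snr_req_db)))

        def select_mcs(snr_db, bler_target=0.1):
            best = MCS[0]                                    # fall back to the most robust MCS
            for mcs in MCS:
                if bler(snr_db, mcs[2]) <= bler_target and mcs[1] > best[1]:
                    best = mcs
            return best

        if __name__ == "__main__":
            for rb_snr in (5.0, 12.0, 21.0):                 # per-resource-block SNR estimates
                name, eff, _ = select_mcs(rb_snr)
                print(f"RB SNR {rb_snr:4.1f} dB -> {name} ({eff} bit/s/Hz)")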

  17. Recommendations for codes and standards to be used for design and fabrication of high level waste canister

    International Nuclear Information System (INIS)

    Bermingham, A.J.; Booker, R.J.; Booth, H.R.; Ruehle, W.G.; Shevekov, S.; Silvester, A.G.; Tagart, S.W.; Thomas, J.A.; West, R.G.

    1978-01-01

    This study identifies codes, standards, and regulatory requirements for developing design criteria for high-level waste (HLW) canisters for commercial operation. It has been determined that the canister should be designed as a pressure vessel without provision for any overpressure protection devices. It is recommended that the HLW canister be designed and fabricated to the requirements of the ASME Section III Code, Division 1 rules, for Code Class 3 components. Identification of other applicable industry and regulatory guides and standards is provided in this report. Requirements for the Design Specification are found in the ASME Section III Code. It is recommended that design verification be conducted principally with prototype testing which will encompass normal and accident service conditions during all phases of the canister life. The adequacy of existing quality assurance and licensing standards for the canister was investigated. One of the recommendations derived from this study is a requirement that the canister be N-stamped. In addition, acceptance standards for the HLW should be established and the waste qualified to those standards before the canister is sealed. A preliminary investigation of the use of an overpack for the canister has been made, and it is concluded that the use of an overpack, as an integral part of the overall canister design, is undesirable from both a design and an economics standpoint. However, the use of shipping cask liners and overpack-type containers at the Federal repository may make the canister and HLW management safer and more cost effective. There are several possible concepts for canister closure design. These concepts can be adapted to the canister with or without an overpack. A remote seal weld closure is considered to be one of the most suitable closure methods; however, mechanical seals should also be investigated.

  18. Improvement of Level-1 PSA computer code package -A study for nuclear safety improvement-

    International Nuclear Information System (INIS)

    Park, Chang Kyu; Kim, Tae Woon; Ha, Jae Joo; Han, Sang Hoon; Cho, Yeong Kyun; Jeong, Won Dae; Jang, Seung Cheol; Choi, Young; Seong, Tae Yong; Kang, Dae Il; Hwang, Mi Jeong; Choi, Seon Yeong; An, Kwang Il

    1994-07-01

    This year is the second year of the Government-sponsored Mid- and Long-Term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The Improvement of Level-1 PSA Computer Codes', is divided into three main activities: (1) methodology development in under-developed fields such as risk assessment technology for plant shutdown and external events, (2) computer code package development for Level-1 PSA, and (3) applications of new technologies to reactor safety assessment. First, in the area of PSA methodology development, foreign PSA reports on shutdown and external events have been reviewed and various PSA methodologies have been compared. The Level-1 PSA code KIRAP and the CCF analysis code COCOA have been converted from KOS to Windows. A human reliability database has also been established this year. In the area of new technology applications, fuzzy set theory and entropy theory are used to estimate component life and to develop a new measure of uncertainty importance. Finally, in the field of applying PSA techniques to reactor regulation, a strategic study to develop a dynamic risk management tool, PEPSI, and a determination of the inspection and test priority of motor-operated valves based on risk importance worths have been carried out. (Author)

  19. Content Adaptive Lagrange Multiplier Selection for Rate-Distortion Optimization in 3-D Wavelet-Based Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2018-03-01

    Rate-distortion optimization (RDO) plays an essential role in substantially enhancing the coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all the possible coding modes, it aims to select the one which has the best trade-off between bitrate and compression distortion. Specifically, this trade-off is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, the underlying formulation is not applicable to 3-D wavelet-based SVC, where the explicit values of the quantization step are not available, and it takes no account of the content features of the input signal. In this paper, an efficient content adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method, which takes account of the mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm enables more satisfactory video quality with negligible additional computational complexity.
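    The role of the Lagrange multiplier in rate-distortion optimized decisions can be shown in a few lines (Python). This is generic RDO, not the paper's subband-weighting derivation; the candidate modes and the content-dependent weighting formula are invented.

        # Rate-distortion optimized decision: pick the candidate minimizing J = D + lambda * R.
        # The content-adaptive weight below is an invented stand-in for a subband weighting model.

        def pick_candidate(candidates, lam):
            """candidates: list of (name, distortion, rate_bits)."""
            return min(candidates, key=lambda c: c[1] + lam * c[2])

        def content_adaptive_lambda(base_lambda, gradient_per_pixel, homogeneity):
            """Scale the multiplier with simple content features (illustrative only)."""
            return base_lambda * (1.0 + gradient_per_pixel) / (1.0 + homogeneity)

        if __name__ == "__main__":
            candidates = [("skip", 180.0, 4), ("intra", 60.0, 96), ("inter", 75.0, 40)]
            for grad, hom in [(0.2, 0.9), (1.5, 0.1)]:   # smooth block vs highly textured block
                lam = content_adaptive_lambda(0.3, grad, hom)
                print(f"lambda = {lam:.2f} -> choose {pick_candidate(candidates, lam)[0]}")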

  20. Code bench-marking for long-term tracking and adaptive algorithms

    OpenAIRE

    Schmidt, Frank; Alexahin, Yuri; Amundson, James; Bartosik, Hannes; Franchetti, Giuliano; Holmes, Jeffrey; Huschauer, Alexander; Kapin, Valery; Oeftiger, Adrian; Stern, Eric; Titze, Malte

    2016-01-01

    At CERN we have ramped up a program to investigate space charge effects in the LHC pre-injectors with high brightness beams and long storage times, in view of the LIU upgrade project for these accelerators. These studies require massive simulation over a large number of turns. To this end we have been looking at all available codes and started collaborations on code development with several laboratories: pyORBIT from SNS, SYNERGIA from Fermilab, MICROMAP from GSI and our in-house MAD-X code...

  1. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    Science.gov (United States)

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections were designed. With higher and higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck with an increase in the number of cores, increasing the latency and energy needed to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic, and multi-band RF interconnects. Although they provide better connectivity, higher speed, and higher bandwidth compared to wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative, which needs no physical interconnection layout as data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, smaller area overhead, and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA-based MAC protocol
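    The basic mechanism that lets several transmitter-receiver pairs share one wireless channel can be shown with orthogonal spreading codes (a textbook Walsh-code illustration in Python, not the specific protocol proposed in the thesis).

        import numpy as np

        def walsh_matrix(n):
            """Walsh-Hadamard codes of length n (n a power of two); rows are mutually orthogonal."""
            H = np.array([[1]])
            while H.shape[0] < n:
                H = np.block([[H, H], [H, -H]])
            return H

        if __name__ == "__main__":
            codes = walsh_matrix(8)                               # 8 orthogonal spreading codes
            bits = {1: +1, 3: -1, 6: +1}                          # three cores each send one bit
            channel = sum(b * codes[c] for c, b in bits.items())  # signals superpose on the channel

            for core, sent in bits.items():
                recovered = int(np.sign(channel @ codes[core]))   # correlate with the core's own code
                print(f"core {core}: sent {sent:+d}, recovered {recovered:+d}")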

  2. Adapting to rates versus amounts of climate change: a case of adaptation to sea-level rise

    Science.gov (United States)

    Shayegh, Soheil; Moreno-Cruz, Juan; Caldeira, Ken

    2016-10-01

    Adaptation is the process of adjusting to climate change in order to moderate harm or exploit beneficial opportunities associated with it. Most adaptation strategies are designed to adjust to a new climate state. However, despite our best efforts to curtail greenhouse gas emissions, climate is likely to continue changing far into the future. Here, we show how considering rates of change affects the projected optimal adaptation strategy. We ground our discussion with an example of optimal investment in the face of continued sea-level rise, presenting a quantitative model that illustrates the interplay among physical and economic factors governing coastal development decisions such as rate of sea-level rise, land slope, discount rate, and depreciation rate. This model shows that the determination of optimal investment strategies depends on taking into account future rates of sea-level rise, as well as social and political constraints. This general approach also applies to the development of improved strategies to adapt to ongoing trends in temperature, precipitation, and other climate variables. Adaptation to some amount of change instead of adaptation to ongoing rates of change may produce inaccurate estimates of damages to the social systems and their ability to respond to external pressures.

  3. Computerized coding system for life narratives to assess students' personality adaption

    NARCIS (Netherlands)

    He, Q.; Veldkamp, B.P.; Westerhof, G.J.; Pechenizkiy, Mykola; Calders, Toon; Conati, Cristina; Ventura, Sebastian; Romero, Cristobal; Stamper, John

    2011-01-01

    The present study is a trial in developing an automatic computerized coding framework with text mining techniques to identify the characteristics of redemption and contamination in life narratives written by undergraduate students. In the initial stage of text classification, the keyword-based

  4. Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission

    Directory of Open Access Journals (Sweden)

    Tarek Chehade

    2015-01-01

    In multiple-input multiple-output (MIMO) transmission systems, the channel state information (CSI) at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and the reliability of the transmission system. This paper investigates how to properly join precoded closed-loop MIMO systems and nonbinary low density parity check (NB-LDPC) codes. The q elements in the Galois field, GF(q), are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to perfectly fit with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to be jointly used with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin) criterion. These results are theoretically supported by extrinsic information transfer (EXIT) analysis and are confirmed by numerical simulations.

  5. Micro-Level Adaptation, Macro-Level Selection, and the Dynamics of Market Partitioning.

    Science.gov (United States)

    García-Díaz, César; van Witteloostuijn, Arjen; Péli, Gábor

    2015-01-01

    This paper provides a micro-foundation for dual market structure formation through partitioning processes in marketplaces by developing a computational model of interacting economic agents. We propose an agent-based modeling approach, where firms are adaptive and profit-seeking agents entering into and exiting from the market according to their (lack of) profitability. Our firms are characterized by large and small sunk costs, respectively. They locate their offerings along a unimodal demand distribution over a one-dimensional product variety, with the distribution peak constituting the center and the tails standing for the peripheries. We found that large firms may first advance toward the most abundant demand spot, the market center, and release peripheral positions as predicted by extant dual market explanations. However, we also observed that large firms may then move back toward the market fringes to reduce competitive niche overlap in the center, triggering nonlinear resource occupation behavior. Novel results indicate that resource release dynamics depend on firm-level adaptive capabilities, and that a minimum scale of production for low sunk cost firms is key to the formation of the dual structure.

  6. Research on the improvement of nuclear safety -Development of computing code system for level 3 PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jong Tae; Kim, Dong Ha; Park, Won Seok; Hwang, Mi Jeong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated. These results will give physical insight for the development of a new dispersion model. A wind tunnel experiment with a bell-shaped hill model was performed in order to develop a new dispersion model, and an improved dispersion model was developed based on the concentration distribution data obtained from the wind tunnel experiment. This model will be added as an option to the atmospheric dispersion code. A stand-alone atmospheric code using the MS Visual Basic programming language, which runs in the Windows environment on a PC, was developed. A user can easily select a necessary data file and type input data by clicking menus, and can select calculation options such as building wake, plume rise, etc., if necessary. A user can also easily understand the meaning of the concentration distribution on the map around the plant site as well as the output files. In addition, the methodologies for the estimation of radiation exposure and for the calculation of risks were established. These methodologies will be used for the development of modules for radiation exposure and risks, respectively. These modules will be developed independently and finally will be combined with the atmospheric dispersion code in order to develop a level 3 PSA code. 30 tabs., 56 figs., refs. (Author).

  7. Research on the improvement of nuclear safety -Development of computing code system for level 3 PSA

    International Nuclear Information System (INIS)

    Jeong, Jong Tae; Kim, Dong Ha; Park, Won Seok; Hwang, Mi Jeong

    1995-07-01

    Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated. These results will give physical insight for the development of a new dispersion model. A wind tunnel experiment with a bell-shaped hill model was performed in order to develop a new dispersion model, and an improved dispersion model was developed based on the concentration distribution data obtained from the wind tunnel experiment. This model will be added as an option to the atmospheric dispersion code. A stand-alone atmospheric code using the MS Visual Basic programming language, which runs in the Windows environment on a PC, was developed. A user can easily select a necessary data file and type input data by clicking menus, and can select calculation options such as building wake, plume rise, etc., if necessary. A user can also easily understand the meaning of the concentration distribution on the map around the plant site as well as the output files. In addition, the methodologies for the estimation of radiation exposure and for the calculation of risks were established. These methodologies will be used for the development of modules for radiation exposure and risks, respectively. These modules will be developed independently and finally will be combined with the atmospheric dispersion code in order to develop a level 3 PSA code. 30 tabs., 56 figs., refs. (Author)
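    The kind of calculation such a stand-alone dispersion code performs can be illustrated with a textbook Gaussian plume formula (Python). The dispersion parameters below are rough assumed values, and the terrain, building-wake, and plume-rise options discussed above are not modeled.

        import math

        def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
            """Gaussian plume concentration (Bq/m^3 for Q in Bq/s) with ground reflection.

            Q: release rate, u: wind speed (m/s), y: crosswind offset (m), z: receptor
            height (m), H: effective release height (m). sigma_y and sigma_z depend on
            downwind distance and stability class and are passed in directly here.
            """
            lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
            vertical = (math.exp(-0.5 * ((z - H) / sigma_z) ** 2) +
                        math.exp(-0.5 * ((z + H) / sigma_z) ** 2))
            return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

        if __name__ == "__main__":
            # Illustrative: 1e9 Bq/s release, 3 m/s wind, 50 m effective height,
            # receptor on the plume axis about 1 km downwind; assumed neutral-stability sigmas.
            c = gaussian_plume(Q=1e9, u=3.0, y=0.0, z=0.0, H=50.0, sigma_y=75.0, sigma_z=32.0)
            print(f"ground-level air concentration ~ {c:.2e} Bq/m^3")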

  8. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model has been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have also been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within the acceptable limits of uncertainty

  9. Reduced-Rank Chip-Level MMSE Equalization for the 3G CDMA Forward Link with Code-Multiplexed Pilot

    Directory of Open Access Journals (Sweden)

    Goldstein J Scott

    2002-01-01

    This paper deals with synchronous direct-sequence code-division multiple access (CDMA) transmission using orthogonal channel codes in frequency selective multipath, motivated by the forward link in 3G CDMA systems. The chip-level minimum mean square error (MMSE) estimate of the (multiuser) synchronous sum signal transmitted by the base, followed by a correlate and sum, has been shown to perform very well in saturated systems compared to a Rake receiver. In this paper, we present reduced-rank, chip-level MMSE estimation based on the multistage nested Wiener filter (MSNWF). We show that, for the case of a known channel, only a small number of stages of the MSNWF is needed to achieve near full-rank MSE performance over a practical signal-to-noise ratio (SNR) range. This holds true even for an edge-of-cell scenario, where two base stations are contributing near equal-power signals, as well as for the single base station case. We then utilize the code-multiplexed pilot channel to train the MSNWF coefficients and show that the adaptive MSNWF operating in a very low rank subspace performs slightly better than full-rank recursive least squares (RLS) and significantly better than least mean squares (LMS). An important advantage of the MSNWF is that it can be implemented in a lattice structure, which involves significantly less computation than RLS. We also present structured MMSE equalizers that exploit the estimate of the multipath arrival times and the underlying channel structure to project the data vector onto a much lower dimensional subspace. Specifically, due to the sparseness of high-speed CDMA multipath channels, the channel vector lies in the subspace spanned by a small number of columns of the pulse shaping filter convolution matrix. We demonstrate that the performance of these structured low-rank equalizers is much superior to that of unstructured equalizers in terms of convergence speed and error rates.
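    What the full-rank chip-level MMSE estimate computes can be written compactly (Python). The channel below is invented, and neither the multistage nested Wiener filter nor the pilot-trained adaptive versions discussed in the paper are reproduced; the MSNWF approximates this Wiener solution in a low-rank subspace.

        import numpy as np

        def channel_matrix(h, length):
            """Convolution-style channel matrix: each received chip sees L transmitted chips."""
            L = len(h)
            H = np.zeros((length, length + L - 1))
            for i in range(length):
                H[i, i:i + L] = h
            return H

        def mmse_equalizer(H, snr_linear, delay):
            """Full-rank chip-level MMSE (Wiener) equalizer: w = (H H^T + I/snr)^(-1) H[:, delay]."""
            R = H @ H.T + np.eye(H.shape[0]) / snr_linear   # received covariance (unit-power chips)
            return np.linalg.solve(R, H[:, delay])

        if __name__ == "__main__":
            h = np.array([0.8, 0.5, 0.3])                   # invented 3-tap multipath channel
            H = channel_matrix(h, length=8)
            w = mmse_equalizer(H, snr_linear=10.0, delay=2)
            print(np.round(w @ H, 2))                       # combined response peaks at index 2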

  10. A probabilistic assessment code system for derivation of clearance levels of radioactive materials. PASCLR user's manual

    International Nuclear Information System (INIS)

    Takahashi, Tomoyuki; Takeda, Seiji; Kimura, Hideo

    2001-01-01

    It is indicated that some types of radioactive material generated in the development and utilization of nuclear energy do not need to be subject to regulatory control because they give rise to only trivial radiation hazards. The process of removing such materials from regulatory control is called 'clearance'. The corresponding levels of the concentration of radionuclides are called 'clearance levels'. In the Nuclear Safety Commission's discussion, a deterministic approach was applied to derive the clearance levels, which are the concentrations of radionuclides in a cleared material equivalent to an individual dose criterion. Basically, realistic parameter values were selected for it. If realistic values could not be defined, reasonably conservative values were selected. Additionally, stochastic approaches were performed to validate the results obtained by the deterministic calculations. We have developed a computer code system, PASCLR (Probabilistic Assessment code System for derivation of Clearance Levels of Radioactive materials), using the Monte Carlo technique for carrying out the stochastic calculations. This report describes the structure and user information for execution of the PASCLR code. (author)
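    The stochastic approach described above can be illustrated schematically (Python). This is not PASCLR itself: the single exposure pathway, the parameter distributions, and the dose coefficient below are invented placeholders, chosen only to show how a distribution of clearance levels follows from sampled parameter values.

        import numpy as np

        def clearance_level_mc(n_trials=100_000, dose_criterion_sv=1e-5, seed=0):
            """Monte Carlo sketch: per-trial concentration (Bq/g) meeting a 10 uSv/y criterion."""
            rng = np.random.default_rng(seed)
            # Invented one-pathway model: annual dose = concentration * intake * dose coefficient
            intake_g_per_y = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=n_trials)
            dose_coeff_sv_per_bq = rng.lognormal(mean=np.log(2e-9), sigma=0.3, size=n_trials)
            dose_per_unit_conc = intake_g_per_y * dose_coeff_sv_per_bq   # Sv/y per (Bq/g)
            return np.percentile(dose_criterion_sv / dose_per_unit_conc, [5, 50, 95])

        if __name__ == "__main__":
            p5, p50, p95 = clearance_level_mc()
            print(f"clearance level (Bq/g): 5th percentile {p5:.0f}, median {p50:.0f}, 95th percentile {p95:.0f}")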

  11. Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-

    International Nuclear Information System (INIS)

    Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo

    1995-07-01

    This year is the third year of the Government-sponsored mid- and long-term nuclear power technology development project. The scope of this sub-project, titled 'The improvement of level-1 PSA computer codes', is divided into three main activities: (1) methodology development in underdeveloped fields such as risk assessment technology for plant shutdown and low power situations, (2) computer code package development for level-1 PSA, and (3) applications of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants were reviewed and plant operating states (POS) were decided. A sample core damage frequency is estimated for the over-draining event during RCS low water inventory, i.e. mid-loop operation. Human reliability analysis and thermal hydraulic support analysis are identified as needed to reduce uncertainty. Two design improvement alternatives are evaluated using the PSA technique for the mid-loop operation situation: one is the use of the containment spray system as a backup of the shutdown cooling system, and the other is the installation of two independent level indication systems. Procedure change is identified as the preferable option over hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC-Windows environment. For the improvement of efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs, 15 tabs, 59 refs. (Author)

  12. Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-

    Energy Technology Data Exchange (ETDEWEB)

    Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    This year is the third year of the Government-sponsored mid- and long-term nuclear power technology development project. The scope of this sub-project, titled 'The improvement of level-1 PSA computer codes', is divided into three main activities: (1) methodology development in underdeveloped fields such as risk assessment technology for plant shutdown and low power situations, (2) computer code package development for level-1 PSA, and (3) applications of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants were reviewed and plant operating states (POS) were decided. A sample core damage frequency is estimated for the over-draining event during RCS low water inventory, i.e. mid-loop operation. Human reliability analysis and thermal hydraulic support analysis are identified as needed to reduce uncertainty. Two design improvement alternatives are evaluated using the PSA technique for the mid-loop operation situation: one is the use of the containment spray system as a backup of the shutdown cooling system, and the other is the installation of two independent level indication systems. Procedure change is identified as the preferable option over hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC-Windows environment. For the improvement of efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs, 15 tabs, 59 refs. (Author).

  13. Farmers’ Perceptions about Adaptation Practices to Climate Change and Barriers to Adaptation: A Micro-Level Study in Ghana

    Directory of Open Access Journals (Sweden)

    Francis Ndamani

    2015-08-01

    This study analyzed the farmer-perceived importance of adaptation practices to climate change and examined the barriers that impede adaptation. Perceptions about causes and effects of long-term changes in climatic variables were also investigated. A total of 100 farmer-households were randomly selected from four communities in the Lawra district of Ghana. Data was collected using semi-structured questionnaires and focus group discussions (FGDs). The results showed that 87% of respondents perceived a decrease in rainfall amount, while 82% perceived an increase in temperature over the past 10 years. The study revealed that adaptation was largely in response to dry spells and droughts (93.2%) rather than floods. About 67% of respondents have adjusted their farming activities in response to climate change. Empirical results of the weighted average index analysis showed that farmers ranked improved crop varieties and irrigation as the most important adaptation measures. It also revealed that farmers lacked the capacity to implement the highly ranked adaptation practices. The problem confrontation index analysis showed that unpredictable weather, high cost of farm inputs, limited access to weather information, and lack of water resources were the most critical barriers to adaptation. This analysis of adaptation practices and constraints at the farmer level will help facilitate government policy formulation and implementation.

  14. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1997-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard the conditioning may be specified by a template. For better compression, the more general ... Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG-2, an emerging international standard for lossless/lossy compression of bi-level images.

  15. Adaptation or Resistance: a classification of responses to sea-level rise

    Science.gov (United States)

    Cooper, J. A.

    2016-02-01

    Societal responses to sea level rise and associated coastal change are apparently diverse in nature and motivation. Most are commonly referred to as 'adaptation'. Based on a review of current practice, however, it is argued that many of these responses do not involve adaptation, but rather resist change. There are several instances where formerly adaptive initiatives involving human adaptability are being replaced by initiatives that resist change. A classification is presented that recognises a continuum of responses ranging from adaptation to resistance, depending upon the willingness to change human activities to accommodate environmental change. In many cases, climate change adaptation resources are being used for projects that are purely resistant and which foreclose future adaptation options. It is argued that a more concise definition of adaptation is needed if coastal management is to move beyond the current position of holding the shoreline, other than in a few showcase examples.

  16. Adaptation of Toodee-2 computer code for reflood analysis in Angra-1 reactor

    International Nuclear Information System (INIS)

    Praes, J.G.L.; Onusic Junior, J.

    1981-01-01

    A method of calculating the heat transfer coefficient used in the Toodee-2 computer code for core reflood analysis in a loss-of-coolant accident is presented. Preliminary results are presented with the use of heat transfer correlations based on FLECHT experiments and adequate for a 16 x 16 geometric arrangement (Angra I). Optional calculations are suggested for the heat transfer coefficients when the cooling of the fuel cladding by steam is used. (Author) [pt

  17. How to Track Adaptation to Climate Change: A Typology of Approaches for National-Level Application

    Directory of Open Access Journals (Sweden)

    James D. Ford

    2013-09-01

    The need to track climate change adaptation progress is being increasingly recognized but our ability to do the tracking is constrained by the complex nature of adaptation and the absence of measurable outcomes or indicators by which to judge if and how adaptation is occurring. We developed a typology of approaches by which climate change adaptation can be tracked globally at a national level. On the one hand, outcome-based approaches directly measure adaptation progress and effectiveness with reference to avoided climate change impacts. However, given that full exposure to climate change impacts will not happen for decades, alternative approaches focus on developing indicators or proxies by which adaptation can be monitored. These include systematic measures of adaptation readiness, processes undertaken to advance adaptation, policies and programs implemented to adapt, and measures of the impacts of these policies and programs on changing vulnerability. While these approaches employ various methods and data sources, and identify different components of adaptation progress to track at the national level, they all seek to characterize the current status of adaptation by which progress over time can be monitored. However, there are significant challenges to operationalizing these approaches, including an absence of systematically collected data on adaptation actions and outcomes, underlying difficulties of defining what constitutes "adaptation", and a disconnect between the timescale over which adaptation plays out and the practical need for evaluation to inform policy. Given the development of new adaptation funding streams, it is imperative that tools for monitoring progress are developed and validated for identifying trends and gaps in adaptation response.

  18. A software reconfigurable optical multiband UWB system utilizing a bit-loading combined with adaptive LDPC code rate scheme

    Science.gov (United States)

    He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin

    2017-07-01

    In this paper, an effective bit-loading combined with adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software reconfigurable multiband UWB over fiber system. To compensate the power fading and chromatic dispersion for the high-frequency multiband OFDM UWB signal transmitted over standard single mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with negative chirp parameter is utilized. In addition, a negative power penalty of -1 dB for the 128 QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10^-3 after 50 km SSMF transmission. The experimental results show that, compared to the fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128 QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
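    The bit-loading half of the scheme can be sketched simply (Python): an SNR-gap style allocation over invented per-subcarrier SNRs. The adaptive LDPC code rate selection of the paper is not modeled.

        import math

        def bit_loading(subcarrier_snr_db, gap_db=6.0, max_bits=7):
            """Bits per subcarrier from the SNR-gap approximation b = floor(log2(1 + SNR/gap)),
            capped at max_bits (128-QAM here)."""
            gap = 10 ** (gap_db / 10.0)
            bits = []
            for snr_db in subcarrier_snr_db:
                snr = 10 ** (snr_db / 10.0)
                bits.append(min(max_bits, int(math.log2(1.0 + snr / gap))))
            return bits

        if __name__ == "__main__":
            # Invented per-subcarrier SNRs showing fading toward the high-frequency subcarriers.
            snrs_db = [28, 27, 25, 22, 18, 14, 10, 6]
            print(bit_loading(snrs_db))   # more bits on strong subcarriers, fewer on faded ones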

  19. Temporal Scalability through Adaptive M-Band Filter Banks for Robust H.264/MPEG-4 AVC Video Coding

    Directory of Open Access Journals (Sweden)

    Pau G

    2006-01-01

    This paper presents different structures that use adaptive M-band hierarchical filter banks for temporal scalability. Open-loop and closed-loop configurations are introduced and illustrated using existing video codecs. In particular, it is shown that the H.264/MPEG-4 AVC codec allows us to introduce scalability by frame shuffling operations, thus keeping backward compatibility with the standard. The large set of shuffling patterns introduced here can be exploited to adapt the encoding process to the video content features, as well as to the user equipment and transmission channel characteristics. Furthermore, simulation results show that this scalability is obtained with no degradation in terms of subjective and objective quality in error-free environments, while in error-prone channels the scalable versions provide increased robustness.

  20. L-type calcium channels refine the neural population code of sound level

    Science.gov (United States)

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  1. Algorithms and data structures for massively parallel generic adaptive finite element codes

    KAUST Repository

    Bangerth, Wolfgang

    2011-12-01

    Today's largest supercomputers have 100,000s of processor cores and offer the potential to solve partial differential equations discretized by billions of unknowns. However, the complexity of scaling to such large machines and problem sizes has so far prevented the emergence of generic software libraries that support such computations, although these would lower the threshold of entry and enable many more applications to benefit from large-scale computing. We are concerned with providing this functionality for mesh-adaptive finite element computations. We assume the existence of an "oracle" that implements the generation and modification of an adaptive mesh distributed across many processors, and that responds to queries about its structure. Based on querying the oracle, we develop scalable algorithms and data structures for generic finite element methods. Specifically, we consider the parallel distribution of mesh data, global enumeration of degrees of freedom, constraints, and postprocessing. Our algorithms remove the bottlenecks that typically limit large-scale adaptive finite element analyses. We demonstrate scalability of complete finite element workflows on up to 16,384 processors. An implementation of the proposed algorithms, based on the open source software p4est as mesh oracle, is provided under an open source license through the widely used deal.II finite element software library. © 2011 ACM 0098-3500/2011/12-ART10 $10.00.
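
    One of the steps named above, the global enumeration of degrees of freedom, amounts on each process to claiming a contiguous index range from an exclusive prefix sum of the locally owned counts. The short Python sketch below is my illustration of that idea only, not code from the paper or from deal.II.

```python
def exclusive_prefix_sum(counts):
    """Offsets such that process p owns global indices [offsets[p], offsets[p] + counts[p])."""
    offsets, running = [], 0
    for count in counts:
        offsets.append(running)
        running += count
    return offsets

def enumerate_dofs(locally_owned_counts):
    """Return one (start, end) global index range per process."""
    offsets = exclusive_prefix_sum(locally_owned_counts)
    return [(start, start + count) for start, count in zip(offsets, locally_owned_counts)]

# Four hypothetical processes owning different numbers of degrees of freedom:
print(enumerate_dofs([120, 95, 110, 130]))
# [(0, 120), (120, 215), (215, 325), (325, 455)]
```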

  2. Achieving 95% probability level using best estimate codes and the Code Scaling, Applicability and Uncertainty (CSAU) methodology

    International Nuclear Information System (INIS)

    Wilson, G.E.; Boyack, B.E.; Duffey, R.B.; Griffith, P.; Katsma, K.R.; Lellouche, G.S.; Rohatgi, U.S.; Wulff, W.; Zuber, N.

    1988-01-01

    The issuance of a revised rule for loss of coolant accident/emergency core cooling system (LOCA/ECCS) analysis of light water reactors will allow the use of best estimate (BE) computer codes in safety analysis, provided an uncertainty analysis is included. This paper describes a systematic methodology, CSAU (Code Scaling, Applicability and Uncertainty), which will provide uncertainty bounds in a cost-effective, auditable, rational and practical manner. 8 figs., 2 tabs

  3. An overview of the geochemical code MINTEQ: Applications to performance assessment for low-level wastes

    International Nuclear Information System (INIS)

    Peterson, S.R.; Opitz, B.E.; Graham, M.J.; Eary, L.E.

    1987-03-01

    The MINTEQ geochemical computer code, developed at the Pacific Northwest Laboratory (PNL), integrates many of the capabilities of its two immediate predecessors, MINEQL and WATEQ3. The MINTEQ code will be used in the Special Waste Form Lysimeters-Arid program to perform the calculations necessary to simulate (model) the contact of low-level waste solutions with heterogeneous sediments or the interaction of ground water with solidified low-level wastes. The code can calculate ion speciation/solubility, adsorption, oxidation-reduction, gas phase equilibria, and precipitation/dissolution of solid phases. Under the Special Waste Form Lysimeters-Arid program, the composition of effluents (leachates) from column and batch experiments, using laboratory-scale waste forms, will be used to develop a geochemical model of the interaction of ground water with commercial, solidified low-level wastes. The wastes being evaluated include power-reactor waste streams that have been solidified in cement, vinyl ester-styrene, and bitumen. The thermodynamic database for the code was upgraded preparatory to performing the geochemical modeling. Thermodynamic data for solid phases and aqueous species containing Sb, Ce, Cs, or Co were added to the MINTEQ database. The need to add these data was identified from the characterization of the waste streams. The geochemical model developed from the laboratory data will then be applied to predict the release from a field-lysimeter facility that contains full-scale waste samples. The contaminant concentrations migrating from the waste forms predicted using MINTEQ will be compared to the long-term lysimeter data. This comparison will constitute a partial field validation of the geochemical model

  4. Coastal Adaptation Planning for Sea Level Rise and Extremes: A Global Model for Adaptation Decision-making at the Local Level Given Uncertain Climate Projections

    Science.gov (United States)

    Turner, D.

    2014-12-01

    Understanding the potential economic and physical impacts of climate change on coastal resources involves evaluating a number of distinct adaptive responses. This paper presents a tool for such analysis, a spatially-disaggregated optimization model for adaptation to sea level rise (SLR) and storm surge, the Coastal Impact and Adaptation Model (CIAM). This decision-making framework fills a gap between very detailed studies of specific locations and overly aggregate global analyses. While CIAM is global in scope, the optimal adaptation strategy is determined at the local level, evaluating over 12,000 coastal segments as described in the DIVA database (Vafeidis et al. 2006). The decision to pursue a given adaptation measure depends on local socioeconomic factors like income, population, and land values and how they develop over time, relative to the magnitude of potential coastal impacts, based on geophysical attributes like inundation zones and storm surge. For example, the model's decision to protect or retreat considers the costs of constructing and maintaining coastal defenses versus those of relocating people and capital to minimize damages from land inundation and coastal storms. Uncertain storm surge events are modeled with a generalized extreme value distribution calibrated to data on local surge extremes. Adaptation is optimized for the near-term outlook, in an "act then learn then act" framework that is repeated over the model time horizon. This framework allows the adaptation strategy to be flexibly updated, reflecting the process of iterative risk management. CIAM provides new estimates of the economic costs of SLR; moreover, these detailed results can be compactly represented in a set of adaptation and damage functions for use in integrated assessment models. Alongside the optimal result, CIAM evaluates suboptimal cases and finds that global costs could increase by an order of magnitude, illustrating the importance of adaptive capacity and coastal policy.
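
    A hedged sketch of the kind of segment-level comparison CIAM performs: expected flood damage under a generalized extreme value (GEV) surge distribution is weighed against protection and retreat costs. The damage curve, costs and GEV parameters below are hypothetical placeholders of mine, not CIAM's calibrated values.

```python
import numpy as np
from scipy.stats import genextreme

def expected_flood_damage(exposed_value, slr_m, gev_shape, gev_loc, gev_scale,
                          dike_height_m=0.0, n_samples=100_000, seed=0):
    """Monte Carlo estimate of expected damage for one coastal segment."""
    surge = genextreme.rvs(gev_shape, loc=gev_loc, scale=gev_scale,
                           size=n_samples, random_state=seed)
    overtopping = np.clip(surge + slr_m - dike_height_m, 0.0, None)
    # Hypothetical damage curve: fraction of exposed value lost grows with flood depth.
    return exposed_value * np.mean(1.0 - np.exp(-overtopping))

def choose_adaptation(exposed_value, relocation_cost, dike_cost, dike_height_m,
                      slr_m, gev_params):
    """Return the least-cost option among no action, protect, and retreat."""
    options = {
        "no action": expected_flood_damage(exposed_value, slr_m, *gev_params),
        "protect": dike_cost + expected_flood_damage(exposed_value, slr_m, *gev_params,
                                                     dike_height_m=dike_height_m),
        "retreat": relocation_cost,
    }
    return min(options, key=options.get), options

# Hypothetical segment: 100 units of exposed value and 0.3 m of sea level rise.
print(choose_adaptation(100.0, relocation_cost=35.0, dike_cost=10.0,
                        dike_height_m=2.0, slr_m=0.3, gev_params=(-0.1, 0.5, 0.3)))
```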

  5. Confidence level in the calculations of HCDA consequences using large codes

    International Nuclear Information System (INIS)

    Nguyen, D.H.; Wilburn, N.P.

    1979-01-01

    The probabilistic approach to nuclear reactor safety is playing an increasingly significant role. For the liquid-metal fast breeder reactor (LMFBR) in particular, the ultimate application of this approach could be to determine the probability of achieving the goal of a specific line-of-assurance (LOA). Meanwhile, a more pressing problem is that of quantifying the uncertainty in a calculated consequence for a hypothetical core disruptive accident (HCDA) using large codes. Such uncertainty arises from imperfect modeling of phenomenology and/or from inaccuracy in input data. A method is presented to determine the confidence level in consequences calculated by a large computer code due to the known uncertainties in input variables. A particular application was made to the initial time of pin failure in a transient overpower HCDA calculated by the code MELT-IIIA in order to demonstrate the method. A probability distribution function (pdf) for the time of failure was first constructed, and then the confidence level for predicting this failure parameter within a desired range was determined
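
    The construction described above, a pdf for the output followed by a confidence level over a desired range, can be illustrated by a simple Monte Carlo propagation through a surrogate model. The surrogate and the input distributions below are placeholders of my own, not the MELT-IIIA models.

```python
import numpy as np

def confidence_level(model, input_samplers, low, high, n=20_000, seed=0):
    """Estimate P(low <= model(inputs) <= high) under the sampled input uncertainties."""
    rng = np.random.default_rng(seed)
    samples = {name: sampler(rng, n) for name, sampler in input_samplers.items()}
    outputs = model(**samples)                   # empirical pdf of the consequence
    return float(np.mean((outputs >= low) & (outputs <= high)))

# Hypothetical surrogate for a failure-time calculation with two uncertain inputs.
failure_time = lambda power, gap: 10.0 + 0.5 * power - 2.0 * gap
samplers = {
    "power": lambda rng, n: rng.normal(1.0, 0.1, n),     # normalized power uncertainty
    "gap":   lambda rng, n: rng.uniform(0.0, 0.2, n),    # fuel-clad gap uncertainty
}
print(confidence_level(failure_time, samplers, low=10.0, high=10.6))
```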

  6. Anthropogenic sea level rise and adaptation in the Yangtze estuary

    Science.gov (United States)

    Cheng, H.; Chen, J.; Chen, Z.; Ruan, R.; Xu, G.; Zeng, G.; Zhu, J.; Dai, Z.; Gu, S.; Zhang, X.; Wang, H.

    2016-02-01

    Sea level rise is a major projected threat of climate change. Sea level changes vary regionally, depending both on natural factors, such as tectonic subsidence, geomorphology and naturally changing river inputs, and on anthropogenic forces, such as water impoundment in artificial reservoirs within the watershed and urban land subsidence driven by groundwater depletion in the river delta. Little is known about regional sea level fall in response to channel erosion caused by the decline in sediment discharge due to reservoir interception in the upstream watershed, or about water level rise in estuaries driven by anthropogenic measures such as land reclamation, deep waterway regulation and freshwater reservoir construction. Coastal cities situated in delta regions are expected to be threatened to varying degrees; Shanghai is among them. Here we show the anthropogenically driven sea level rise in the Yangtze estuary from the point of view of the continuous hydrodynamic system consisting of the river catchment, the estuary and the coastal sea. Land subsidence is cited as 4 mm/a (2011-2030). Scour of the estuarine channel caused by upstream engineering such as the Three Gorges Dam is estimated at 2-10 cm (2011-2030). The rise of water level due to the deep waterway and land reclamation is estimated at 8-10 cm (2011-2030). The relative sea level rise is projected at about 10-16 cm (2011-2030), with these anthropogenic sea level changes superimposed on the absolute sea level rise of 2 mm/a and the tectonic subsidence of 1 mm/a measured in the 1990s. An action guideline for the sea level rise strategy of the city of Shanghai has been proposed to the Shanghai government: (1) recent actions (2012-2015) to upgrade the city's water supply, drainage and protective engineering; (2) interim actions (2016-2020) to improve the sea level monitoring and early warning system, followed by special, city and regional planning that considers sea level rise; (3) long term actions (2021

  7. HYDROCOIN [HYDROlogic COde INtercomparison] Level 1: Benchmarking and verification test results with CFEST [Coupled Fluid, Energy, and Solute Transport] code: Draft report

    International Nuclear Information System (INIS)

    Yabusaki, S.; Cole, C.; Monti, A.M.; Gupta, S.K.

    1987-04-01

    Part of the safety analysis is evaluating groundwater flow through the repository and the host rock to the accessible environment by developing mathematical or analytical models and numerical computer codes describing the flow mechanisms. This need led to the establishment of an international project called HYDROCOIN (HYDROlogic COde INtercomparison) organized by the Swedish Nuclear Power Inspectorate, a forum for discussing techniques and strategies in subsurface hydrologic modeling. The major objective of the present effort, HYDROCOIN Level 1, is determining the numerical accuracy of the computer codes. The definition of each case includes the input parameters, the governing equations, the output specifications, and the format. The Coupled Fluid, Energy, and Solute Transport (CFEST) code was applied to solve cases 1, 2, 4, 5, and 7; the Finite Element Three-Dimensional Groundwater (FE3DGW) Flow Model was used to solve case 6. Case 3 has been ignored because unsaturated flow is not pertinent to SRP. This report presents the Level 1 results furnished by the project teams. The numerical accuracy of the codes is determined by (1) comparing the computational results with analytical solutions for cases that have analytical solutions (namely cases 1 and 4), and (2) intercomparing results from codes for cases which do not have analytical solutions (cases 2, 5, 6, and 7). Cases 1, 2, 6, and 7 relate to flow analyses, whereas cases 4 and 5 require nonlinear solutions. 7 refs., 71 figs., 9 tabs

  8. Economic levels of thermal resistance for house envelopes: Considerations for a national energy code

    International Nuclear Information System (INIS)

    Swinton, M.C.; Sander, D.M.

    1992-01-01

    A code for energy efficiency in new buildings is being developed by the Standing Committee on Energy Conservation in Buildings. The precursor to the new code used national average energy rates and construction costs to determine economic optimum levels of insulation, and it is believed that this resulted in prescription of sub-optimum insulation levels in any region of Canada where energy or construction costs differ significantly from the average. A new approach for determining optimum levels of thermal insulation is proposed. The analytic techniques use month-by-month energy balances of heat loss and gain; use gain load ratio correlation (GLR) for predicting the fraction of useable free heat; increase confidence in the savings predictions for above grade envelopes; can take into account solar effects on windows; and are compatible with below-grade heat loss analysis techniques in use. A sensitivity analysis was performed to determine whether reasonable variations in house characteristics would cause significant differences in savings predicted. The life cycle costing technique developed will allow the selection of thermal resistances that are commonly met by industry. Environmental energy cost multipliers can be used with the proposed methodology, which could have a minor role in encouraging the next higher level of energy efficiency. 11 refs., 6 figs., 2 tabs
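
    The analytic idea described above, a month-by-month balance of envelope losses against free gains with a gain load ratio (GLR) correlation for the usable fraction, feeding a life cycle cost comparison, can be sketched as follows. The correlation form, units, climate data and costs are simplified assumptions of mine, not the committee's method.

```python
def usable_gain_fraction(gain_load_ratio):
    """Hypothetical GLR correlation: all gains are usable at low GLR, fewer at high GLR."""
    return 1.0 if gain_load_ratio <= 0 else min(1.0, 1.0 / gain_load_ratio)

def annual_heating_kwh(ua_w_per_k, monthly_degree_kilohours, monthly_gains_kwh):
    """Month-by-month balance: envelope loss minus the usable part of free gains."""
    total = 0.0
    for degree_kilohours, gains in zip(monthly_degree_kilohours, monthly_gains_kwh):
        loss = ua_w_per_k * degree_kilohours      # W/K x 1000 degree-hours = kWh
        glr = gains / loss if loss > 0 else float("inf")
        total += max(0.0, loss - usable_gain_fraction(glr) * gains)
    return total

def life_cycle_cost(insulation_cost, ua_w_per_k, energy_price_per_kwh, years,
                    monthly_degree_kilohours, monthly_gains_kwh):
    heating = annual_heating_kwh(ua_w_per_k, monthly_degree_kilohours, monthly_gains_kwh)
    return insulation_cost + years * energy_price_per_kwh * heating

# Compare two hypothetical insulation levels over 25 years for a 12-month climate.
degree_kh = [20.8, 18.2, 15.5, 10.1, 5.6, 2.0, 0.8, 1.2, 4.3, 9.0, 14.2, 19.0]
gains = [150, 170, 220, 250, 280, 290, 300, 280, 240, 200, 160, 140]   # kWh/month
for label, cost, ua in [("insulation level A", 2500, 180.0), ("insulation level B", 3600, 150.0)]:
    print(label, round(life_cycle_cost(cost, ua, 0.10, 25, degree_kh, gains)))
```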

  9. Adaptation of fuel code for light water reactor with austenitic steel rod cladding

    International Nuclear Information System (INIS)

    Gomes, Daniel de Souza; Silva, Antonio Teixeira; Giovedi, Claudia

    2015-01-01

    Light water reactors used steel as the nuclear fuel cladding from 1960 to 1980. The good in-pile performance showed that low-carbon alloys could substitute for the current zirconium alloys, so stainless steel is an alternative that can be used as cladding. The zirconium alloys later replaced the steel; however, significant in-pile experience was accumulated in commercial units such as Haddam Neck, Indian Point, and Yankee. Stainless Steel Types 347 and 348 can be used as cladding. An advantage of using stainless steel became evident at Fukushima, when a large amount of hydrogen was produced at high temperatures. The steel cladding does not eliminate the problem of accumulating free hydrogen, which can lead to a risk of explosion. In a boiling water reactor, environments conducive to intergranular corrosion attack readily exist. The stainless steel alloys, Types 321, 347, and 348, are stabilized against such attack by the addition of titanium, niobium, or tantalum: titanium stabilizes Type 321, niobium additions stabilize Type 347, and steel Type 348 is alloyed with niobium, tantalum, and cobalt. In recent years, research on the effects of irradiation by fast neutrons has increased. The impact of radiation includes changes in flow rate limits, deformation, and ductility, and irradiation can convert crystalline lattices into an amorphous structure. New proposals are emerging that suggest using a silicon carbide-based fuel rod cladding or iron-chromium-aluminum alloys; these materials could substitute for the classic zirconium alloys. Once steel Type 348 was chosen, its thermal and mechanical properties were coded in a library of functions so that the fuel performance code contains all the required features. A comparative analysis of the steel and zirconium alloys was made. The results demonstrate that the austenitic steel alloys are viable candidates for replacing the zirconium alloys. (author)

  10. Adaptation of fuel code for light water reactor with austenitic steel rod cladding

    Energy Technology Data Exchange (ETDEWEB)

    Gomes, Daniel de Souza; Silva, Antonio Teixeira, E-mail: dsgomes@ipen.br, E-mail: teixeira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Giovedi, Claudia, E-mail: claudia.giovedi@labrisco.usp.br [Universidade de Sao Paulo (POLI/USP), Sao Paulo, SP (Brazil). Lab. de Analise, Avaliacao e Gerenciamento de Risco

    2015-07-01

    Light water reactors used steel as the nuclear fuel cladding from 1960 to 1980. The good in-pile performance showed that low-carbon alloys could substitute for the current zirconium alloys, so stainless steel is an alternative that can be used as cladding. The zirconium alloys later replaced the steel; however, significant in-pile experience was accumulated in commercial units such as Haddam Neck, Indian Point, and Yankee. Stainless Steel Types 347 and 348 can be used as cladding. An advantage of using stainless steel became evident at Fukushima, when a large amount of hydrogen was produced at high temperatures. The steel cladding does not eliminate the problem of accumulating free hydrogen, which can lead to a risk of explosion. In a boiling water reactor, environments conducive to intergranular corrosion attack readily exist. The stainless steel alloys, Types 321, 347, and 348, are stabilized against such attack by the addition of titanium, niobium, or tantalum: titanium stabilizes Type 321, niobium additions stabilize Type 347, and steel Type 348 is alloyed with niobium, tantalum, and cobalt. In recent years, research on the effects of irradiation by fast neutrons has increased. The impact of radiation includes changes in flow rate limits, deformation, and ductility, and irradiation can convert crystalline lattices into an amorphous structure. New proposals are emerging that suggest using a silicon carbide-based fuel rod cladding or iron-chromium-aluminum alloys; these materials could substitute for the classic zirconium alloys. Once steel Type 348 was chosen, its thermal and mechanical properties were coded in a library of functions so that the fuel performance code contains all the required features. A comparative analysis of the steel and zirconium alloys was made. The results demonstrate that the austenitic steel alloys are viable candidates for replacing the zirconium alloys. (author)

  11. Food Image Recognition via Superpixel Based Low-Level and Mid-Level Distance Coding for Smart Home Applications

    Directory of Open Access Journals (Sweden)

    Jiannan Zheng

    2017-05-01

    Full Text Available Food image recognition is a key enabler for many smart home applications such as smart kitchen and smart personal nutrition log. In order to improve living experience and life quality, smart home systems collect valuable insights into users’ preferences, nutrition intake and health conditions via accurate and robust food image recognition. In addition, efficiency is also a major concern since many smart home applications are deployed on mobile devices where high-end GPUs are not available. In this paper, we investigate compact and efficient food image recognition methods, namely low-level and mid-level approaches. Considering the real application scenario where only limited and noisy data are available, we first propose a superpixel based Linear Distance Coding (LDC) framework where distinctive low-level food image features are extracted to improve performance. On a challenging small food image dataset where only 12 training images are available per category, our framework has shown superior performance in both accuracy and robustness. In addition, to better model deformable food part distribution, we extend LDC’s feature-to-class distance idea and propose a mid-level superpixel food parts-to-class distance mining framework. The proposed framework shows superior performance on benchmark food image datasets compared to other low-level and mid-level approaches in the literature.
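
    A minimal sketch of the feature-to-class distance idea behind LDC as I read it from the abstract (not the authors' code): every local descriptor is re-expressed by its distance to the nearest training descriptor of each food class, and the per-descriptor codes are pooled into an image-level code.

```python
import numpy as np

def distance_code(descriptor, class_descriptor_banks):
    """Vector of nearest-neighbour distances from one descriptor to each class bank."""
    return np.array([
        np.min(np.linalg.norm(bank - descriptor, axis=1))
        for bank in class_descriptor_banks
    ])

def encode_image(descriptors, class_descriptor_banks):
    """Image-level code: min-pool the per-descriptor distance codes over the image."""
    codes = np.stack([distance_code(d, class_descriptor_banks) for d in descriptors])
    return codes.min(axis=0)

# Hypothetical example: 3 food classes, each with a small bank of 64-D training descriptors.
rng = np.random.default_rng(0)
banks = [rng.normal(size=(50, 64)) for _ in range(3)]
image_descriptors = rng.normal(size=(20, 64))
print(encode_image(image_descriptors, banks))    # a lower distance suggests a closer class
```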

  12. Implementation and adaption of the Computer Code ECOSYS/EXCEL for Austria as OECOSYS/EXCEL

    International Nuclear Information System (INIS)

    Hick, H.; Suda, M.; Mueck, K.

    1998-03-01

    During 1989, under contract to the Austrian Chamber of the Federal Chancellor, department VII, the radioecological forecast model OECOSYS was implemented by the Austrian Research Centre Seibersdorf on a VAX computer using VAX Fortran. OECOSYS allows the prediction of the consequences after a large scale contamination event. During 1992, under contract to the Austrian Federal Ministry of Health, Sports and Consumer Protection, department III, OECOSYS - in the 1989 version - was implemented on PCs in Seibersdorf and the Ministry using OS/2 and Microsoft Fortran. In March 1993, the Ministry ordered a necessary update and the evaluation of two exercise scenarios. Since that time, the prognosis model with its auxiliary programs and communication facilities has been kept on stand-by and yearly exercises are performed to maintain its readiness. The current report describes the implementation and adaptation to Austrian conditions of the newly available EXCEL version of the German ECOSYS prognosis model as OECOSYS. (author)

  13. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S

    2018-02-21

    Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from

  14. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    Science.gov (United States)

    Zhao, Gong-Bo; Li, Baojiu; Koyama, Kazuya

    2011-02-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ∼ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  15. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    International Nuclear Information System (INIS)

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-01-01

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al.[Phys. Rev. D 78, 123524 (2008)] and Schmidt et al.[Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k∼20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  16. Overview of the geochemical code MINTEQ: applications to performance assessment for low-level wastes

    International Nuclear Information System (INIS)

    Graham, M.J.; Peterson, S.R.

    1985-09-01

    The MINTEQ geochemical computer code, developed at Pacific Northwest Laboratory, integrates many of the capabilities of its two immediate predecessors, WATEQ3 and MINEQL. MINTEQ can be used to perform the calculations necessary to simulate (model) the contact of low-level waste solutions with heterogeneous sediments or the interaction of ground water with solidified low-level wastes. The code is capable of performing calculations of ion speciation/solubility, adsorption, oxidation-reduction, gas phase equilibria, and precipitation/dissolution of solid phases. Under the Special Waste Form Lysimeters-Arid program, the composition of effluents (leachates) from column and batch experiments, using laboratory-scale waste forms, will be used to develop a geochemical model of the interaction of ground water with commercial solidified low-level wastes. The wastes being evaluated include power reactor waste streams that have been solidified in cement, vinyl ester-styrene, and bitumen. The thermodynamic database for the code is being upgraded before the geochemical modeling is performed. Thermodynamic data for cobalt, antimony, cerium, and cesium solid phases and aqueous species are being added to the database. The need to add these data was identified from the characterization of the waste streams. The geochemical model developed from the laboratory data will then be applied to predict the release from a field-lysimeter facility that contains full-scale waste samples. The contaminant concentrations migrating from the wastes predicted using MINTEQ will be compared to the long-term lysimeter data. This comparison will constitute a partial field validation of the geochemical model. 28 refs

  17. Integrated assessment of adaptation to Climate change in Flevoland at the farm and regional level

    NARCIS (Netherlands)

    Wolf, J.; Mandryk, M.; Kanellopoulos, A.; Oort, van P.A.J.; Schaap, B.F.; Reidsma, P.; Ittersum, van M.K.

    2011-01-01

    A key objective of the AgriAdapt project is to assess climate change impacts on agriculture including adaptation at regional and farm type level in combination with market and technological changes. More specifically, the developed methodologies enable (a) the assessment of impacts, risks and

  18. Ground-based research on vestibular adaptation to g-level transitions

    NARCIS (Netherlands)

    Groen, Eric L.; Nooij, Suzanne A E; Bos, Jelte E.

    2008-01-01

    At TNO research is ongoing on neuro-vestibular adaptation to altered G-levels. It is well-known that during the first days in weightlessness 50-80% of all astronauts suffer from the Space Adaptation Syndrome (SAS), which involves space motion sickness, spatial disorientation and motion illusions.

  19. Flicker Adaptation of Low-Level Cortical Visual Neurons Contributes to Temporal Dilation

    Science.gov (United States)

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Several seconds of adaptation to a flickered stimulus causes a subsequent brief static stimulus to appear longer in duration. Nonsensory factors, such as increased arousal and attention, have been thought to mediate this flicker-based temporal-dilation aftereffect. In this study, we provide evidence that adaptation of low-level cortical visual…

  20. Pictorial AR Tag with Hidden Multi-Level Bar-Code and Its Potential Applications

    Directory of Open Access Journals (Sweden)

    Huy Le

    2017-09-01

    Full Text Available For decades, researchers have been trying to create intuitive virtual environments by blending reality and virtual reality, thus enabling general users to interact with the digital domain as easily as with the real world. The result is “augmented reality” (AR). AR seamlessly superimposes virtual objects on to a real environment in three dimensions (3D) and in real time. One of the most important parts that helps close the gap between virtuality and reality is the marker used in the AR system. While pictorial markers and bar-code markers are the two most commonly used marker types in the market, they have some disadvantages in visual and processing performance. In this paper, we present a novel method that combines the bar-code with the original feature of a colour picture (e.g., photos, trading cards, advertising figures). Our method adds, on top of the original pictorial image, a single stereogram image that optically conceals a multi-level (3D) bar-code. Thus, it has a larger capability of storing data compared to the general 1D barcode. This new type of marker has the potential of addressing the issues that the current types of marker are facing. It not only keeps the original information of the picture but also contains encoded numeric information. In our limited evaluation, this pictorial bar-code shows a relatively robust performance under various conditions and scaling; thus, it provides a promising AR approach to be used in many applications such as trading card games, education, and advertisements.

  1. The PSACOIN level 1B exercise: A probabilistic code intercomparison involving a four compartment biosphere model

    International Nuclear Information System (INIS)

    Klos, R.A.; Sinclair, J.E.; Torres, C.; Mobbs, S.F.; Galson, D.A.

    1991-01-01

    The probabilistic Systems Assessment Code (PSAC) User Group of the OECD Nuclear Energy Agency has organised a series of code intercomparison studies of relevance to the performance assessment of underground repositories for radioactive wastes - known collectively by the name PSACOIN. The latest of these to be undertaken is designated PSACOIN Level 1b, and the case specification provides a complete assessment model of the behaviour of radionuclides following release into the biosphere. PSACOIN Level 1b differs from other biosphere oriented intercomparison exercises in that individual dose is the end point of the calculations as opposed to any other intermediate quantity. The PSACOIN Level 1b case specification describes a simple source term which is used to simulate the release of activity to the biosphere from certain types of near surface waste repository, the transport of radionuclides through the biosphere and their eventual uptake by humankind. The biosphere sub model comprises 4 compartments representing top and deep soil layers, river water and river sediment. The transport of radionuclides between the physical compartments is described by ten transfer coefficients and doses to humankind arise from the simultaneous consumption of water, fish, meat, milk, and grain as well as from dust inhalation and external γ-irradiation. The parameters of the exposure pathway sub model are chosen to be representative of an individual living in a small agrarian community. (13 refs., 3 figs., 2 tabs.)
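
    The four-compartment structure with first-order transfer coefficients described above can be sketched as a small linear ODE system. The transfer coefficients, decay constant and source term below are placeholders of my own, not the Level 1b specification values.

```python
import numpy as np
from scipy.integrate import solve_ivp

COMPARTMENTS = ["top_soil", "deep_soil", "river_water", "sediment"]
# K[i, j]: transfer coefficient (1/yr) from compartment i to compartment j (hypothetical).
K = np.array([
    [0.00, 0.050, 0.010, 0.00],
    [0.01, 0.000, 0.005, 0.00],
    [0.00, 0.000, 0.000, 0.20],
    [0.00, 0.000, 0.050, 0.00],
])
DECAY = 0.02                                   # radioactive decay constant (1/yr), placeholder
SOURCE = np.array([1.0, 0.0, 0.0, 0.0])        # release rate into top soil (Bq/yr), placeholder

def d_inventory_dt(t, inventory):
    outflow = K.sum(axis=1) * inventory
    inflow = K.T @ inventory
    return SOURCE + inflow - outflow - DECAY * inventory

solution = solve_ivp(d_inventory_dt, (0.0, 100.0), y0=np.zeros(4))
print(dict(zip(COMPARTMENTS, solution.y[:, -1])))   # compartment inventories at 100 yr
```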

  2. Conception and development of an adaptive energy mesher for multigroup library generation of the transport codes

    International Nuclear Information System (INIS)

    Mosca, P.

    2009-12-01

    The deterministic transport codes solve the stationary Boltzmann equation in a discretized energy formalism called multigroup. The transformation of continuous data into multigroup form is obtained by averaging the highly variable cross sections of the resonant isotopes with the solution of the self-shielding models, and the remaining ones with the coarse energy spectrum of the reactor type. So far the error of such an approach could only be evaluated retrospectively. To remedy this, we studied in this thesis a set of methods to control a priori the accuracy and the cost of the multigroup transport computation. The energy mesh optimisation is achieved in a two-step process: the creation of a reference mesh and its optimized condensation. In the first step, by refining the energy mesh locally and globally, we seek, on a fine energy mesh with subgroup self-shielding, a solution equivalent to that of a reference solver (Monte Carlo or pointwise deterministic solver). In the second step, once the number of groups is fixed according to the acceptable computational cost and the self-shielding models most appropriate to the reactor type are chosen, we search for the bounds of the reference mesh that minimize reaction rate errors, using the particle swarm optimization algorithm. This new approach allows us to define new meshes for fast reactors that are as accurate as those currently used, but with fewer groups. (author)
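
    For the second step described above, a particle swarm optimization over the inner group boundaries of the energy mesh can be sketched as follows. The error functional is a stand-in for the real reaction-rate error from a transport calculation, and all PSO settings are assumptions of mine.

```python
import numpy as np

def pso_optimize(error, n_bounds, lower, upper, n_particles=30, iters=200, seed=1):
    """Search for the n_bounds inner group boundaries that minimize error(bounds)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lower, upper, size=(n_particles, n_bounds))
    pos.sort(axis=1)                             # boundaries must stay ordered
    vel = np.zeros_like(pos)
    best_pos = pos.copy()
    best_val = np.array([error(p) for p in pos])
    g_best = best_pos[best_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (best_pos - pos) + 1.5 * r2 * (g_best - pos)
        pos = np.clip(pos + vel, lower, upper)
        pos.sort(axis=1)
        vals = np.array([error(p) for p in pos])
        improved = vals < best_val
        best_pos[improved], best_val[improved] = pos[improved], vals[improved]
        g_best = best_pos[best_val.argmin()].copy()
    return g_best, best_val.min()

# Hypothetical error functional with a known optimum at evenly spaced boundaries:
target = np.linspace(1.0, 9.0, 5)[1:-1]
bounds, err = pso_optimize(lambda b: np.sum((b - target) ** 2), 3, 1.0, 9.0)
print(bounds, err)
```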

  3. [Limits of cardiac functional adaptation in "top level" resistance athletes].

    Science.gov (United States)

    Carù, B; Righetti, G; Bossi, M; Gerosa, C; Gazzotti, G; Maranetto, D

    2001-02-01

    Sports activity, particularly when performed at high level, provokes cardiovascular adjustments depending on the type of sport and on the level of the load. We evaluated 15 athletes from the Italian national team during a non-agonistic period of cross country skiing, with non-invasive tests including exercise test, color Doppler echocardiography, Holter monitoring, physical examination and standard rest electrocardiogram. Physical examination, rest electrocardiogram, exercise testing and echocardiography were all within the range of the expected values for this type of subjects. Holter monitoring recorded during the periods of agonistic activity revealed significant hypokinetic arrhythmias such as severe bradycardia, pauses, I and II degree atrioventricular blocks, and complete atrioventricular block in 2 cases; these features were not observed on Holter monitoring recorded during the non-agonistic period. The perfect health status of subjects and their racing results may bring about physiological functional adjustments, but these observations suggest the need for a follow-up to evaluate possible pathologic outcomes.

  4. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    Science.gov (United States)

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-04-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering--CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and the value of the sparsity is known before starting each data gathering epoch, thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme where the sink node adaptively queries those interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes--MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean temperature datasets and a practical network deployment also demonstrate the effectiveness of the proposed feedback CDG scheme.
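
    The feedback idea, in which the sink keeps requesting measurements until the reconstruction stabilizes so that the measurement count adapts to the unknown sparsity, can be sketched as below. The ridge least-squares decoder and the termination tolerance are simplifying stand-ins of mine, not the authors' scheme.

```python
import numpy as np

def reconstruct(Phi, y, lam=0.01):
    """Ridge least-squares stand-in for a proper sparse (CS) decoder."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

def adaptive_gathering(x_true, step=5, tol=1e-3, max_measurements=200, seed=0):
    """Request `step` new coded measurements per round until the estimate stabilizes."""
    rng = np.random.default_rng(seed)
    n = x_true.size
    Phi = np.empty((0, n))
    previous = np.zeros(n)
    while Phi.shape[0] < max_measurements:
        new_rows = rng.standard_normal((step, n)) / np.sqrt(n)
        Phi = np.vstack([Phi, new_rows])                  # one more feedback round
        estimate = reconstruct(Phi, Phi @ x_true)         # y = Phi x gathered in-network
        if np.linalg.norm(estimate - previous) < tol * (np.linalg.norm(previous) + 1e-12):
            break                                         # termination rule: no real change
        previous = estimate
    return Phi.shape[0], estimate

signal = np.zeros(100)
signal[[3, 40, 77]] = [1.0, -2.0, 0.5]                    # sparse "network data"
measurements_used, estimate = adaptive_gathering(signal)
print(measurements_used)
```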

  5. Adaptability: Components of the Adaptive Competency for U.S. Army Direct and Organizational Level Leaders

    National Research Council Canada - National Science Library

    Wyszynski, Joseph L

    2005-01-01

    U.S. Army direct and organizational level leaders faced challenges in Operation ENDURING FREEDOM and Operation IRAQI FREEDOM, which combined to create an environment permeated by ambiguity and replete with uncertainty...

  6. Perceptual Coding of Audio Signals Using Adaptive Time-Frequency Transform

    Directory of Open Access Journals (Sweden)

    Umapathy Karthikeyan

    2007-01-01

    Full Text Available Wide band digital audio signals have a very high data-rate associated with them due to their complex nature and demand for high-quality reproduction. Although recent technological advancements have significantly reduced the cost of bandwidth and miniaturized storage facilities, the rapid increase in the volume of digital audio content constantly compels the need for better compression algorithms. Over the years various perceptually lossless compression techniques have been introduced, and transform-based compression techniques have made a significant impact in recent years. In this paper, we propose one such transform-based compression technique, where the joint time-frequency (TF) properties of the nonstationary nature of the audio signals were exploited in creating a compact energy representation of the signal in fewer coefficients. The decomposition coefficients were processed and perceptually filtered to retain only the relevant coefficients. Perceptual filtering (psychoacoustics) was applied in a novel way by analyzing and performing TF specific psychoacoustics experiments. An added advantage of the proposed technique is that, due to its signal adaptive nature, it does not need predetermined segmentation of audio signals for processing. Eight stereo audio signal samples of different varieties were used in the study. Subjective (mean opinion score—MOS) listening tests were performed and the subjective difference grades (SDG) were used to compare the performance of the proposed coder with MP3, AAC, and HE-AAC encoders. Compression ratios in the range of 8 to 40 were achieved by the proposed technique with subjective difference grades (SDG) ranging from –0.53 to –2.27.

  7. Perceptual Coding of Audio Signals Using Adaptive Time-Frequency Transform

    Directory of Open Access Journals (Sweden)

    Karthikeyan Umapathy

    2007-08-01

    Full Text Available Wide band digital audio signals have a very high data-rate associated with them due to their complex nature and demand for high-quality reproduction. Although recent technological advancements have significantly reduced the cost of bandwidth and miniaturized storage facilities, the rapid increase in the volume of digital audio content constantly compels the need for better compression algorithms. Over the years various perceptually lossless compression techniques have been introduced, and transform-based compression techniques have made a significant impact in recent years. In this paper, we propose one such transform-based compression technique, where the joint time-frequency (TF) properties of the nonstationary nature of the audio signals were exploited in creating a compact energy representation of the signal in fewer coefficients. The decomposition coefficients were processed and perceptually filtered to retain only the relevant coefficients. Perceptual filtering (psychoacoustics) was applied in a novel way by analyzing and performing TF specific psychoacoustics experiments. An added advantage of the proposed technique is that, due to its signal adaptive nature, it does not need predetermined segmentation of audio signals for processing. Eight stereo audio signal samples of different varieties were used in the study. Subjective (mean opinion score—MOS) listening tests were performed and the subjective difference grades (SDG) were used to compare the performance of the proposed coder with MP3, AAC, and HE-AAC encoders. Compression ratios in the range of 8 to 40 were achieved by the proposed technique with subjective difference grades (SDG) ranging from –0.53 to –2.27.

  8. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
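
    A minimal numerical sketch of the opponent-channel read-out supported above (my own illustration with hypothetical tuning curves): azimuth is decoded from the normalized difference between a left-tuned and a right-tuned population response, which leaves the estimate unchanged when the overall response gain varies with sound level.

```python
import numpy as np

def hemifield_response(azimuth_deg, preferred_side, gain=1.0, slope_deg=30.0):
    """Broad sigmoidal tuning favouring one hemifield (parameters hypothetical)."""
    sign = 1.0 if preferred_side == "right" else -1.0
    return gain / (1.0 + np.exp(-sign * azimuth_deg / slope_deg))

def opponent_readout(azimuth_deg, gain=1.0):
    right = hemifield_response(azimuth_deg, "right", gain)
    left = hemifield_response(azimuth_deg, "left", gain)
    return (right - left) / (right + left)          # normalized opponent signal

for level_gain in (0.5, 1.0, 2.0):                  # sound level scales the overall gain...
    print(level_gain, round(opponent_readout(30.0, level_gain), 3))
# ...but the normalized opponent read-out for a +30 degree source stays the same.
```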

  9. Low Level Waste Conceptual Design Adaption to Poor Geological Conditions

    International Nuclear Information System (INIS)

    Bell, J.; Drimmer, D.; Giovannini, A.; Manfroy, P.; Maquet, F.; Schittekat, J.; Van Cotthem, A.; Van Echelpoel, E.

    2002-01-01

    Since the early eighties, several studies have been carried out in Belgium with respect to a repository for the final disposal of low-level radioactive waste (LLW). In 1998, the Belgian Government decided to restrict future investigations to the four existing nuclear sites in Belgium or sites that might show interest. So far, only two existing nuclear sites have been thoroughly investigated from a geological and hydrogeological point of view. These sites are located in the North-East (Mol-Dessel) and in the mid part (Fleurus-Farciennes) of the country. Both sites have the disadvantage of presenting poor geological and hydrogeological conditions, which are rather unfavorable for accommodating a surface disposal facility for LLW. The underground of the Mol-Dessel site consists of neogene sand layers about 180 m thick which cover a 100 m thick clay layer. These neogene sands contain, at 20 m depth, a thin clayey layer. The groundwater level is quite close to the surface (0-2 m) and, finally, the topography is almost totally flat. The upper layer of the Fleurus-Farciennes site consists of 10 m silt with poor geomechanical characteristics, overlying sands (only a few meters thick) and Westphalian shales between 15 and 20 m depth. The Westphalian shales are tectonized and strongly weathered. In the past, coal seams were mined out. This activity induced locally important surface subsidence. For both nuclear sites that were investigated, a conceptual design was made that could allow any unfavorable geological or hydrogeological conditions of the site to be overcome. In Fleurus-Farciennes, for instance, the proposed conceptual design of the repository is quite original. It is composed of a shallow, buried concrete cylinder, surrounded by an accessible concrete ring, which allows permanent inspection and control during the whole lifetime of the repository. Stability and drainage systems should be independent of potential differential settlements and subsidences

  10. Adapting the coping in deliberation (CODE) framework: a multi-method approach in the context of familial ovarian cancer risk management.

    Science.gov (United States)

    Witt, Jana; Elwyn, Glyn; Wood, Fiona; Rogers, Mark T; Menon, Usha; Brain, Kate

    2014-11-01

    To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. We performed a systematic literature search to identify issues important to women during deliberations about RRSO. Three focus groups with patients (most were pre-menopausal and untested for genetic mutations) and 11 interviews with health professionals were conducted to determine which issues mattered in the UK context. Data were used to adapt the generic CODE framework. The literature search yielded 49 relevant studies, which highlighted various issues and coping options important during deliberations, including mutation status, risks of surgery, family obligations, physician recommendation, peer support and reliable information sources. Consultations with UK stakeholders confirmed most of these factors as pertinent influences on deliberations. Questions in the generic framework were adapted to reflect the issues and coping options identified. The generic CODE framework was readily adapted to a specific preference-sensitive medical decision, showing that deliberations and coping are linked during deliberations about RRSO. Adapted versions of the CODE framework may be used to develop tailored decision support methods and materials in order to improve patient-centred care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. The adaptive response of E. coli to low levels of alkylating agent

    International Nuclear Information System (INIS)

    Jeggo, P.; Defais, M.; Samson, L.; Schendel, P.

    1978-01-01

    In an attempt to characterise which gene products may be involved in the repair system induced in E. coli by growth on low levels of alkylating agent (the adaptive response) we have analysed mutants deficient in other known pathways of DNA repair for the ability to adapt to MNNG. Adaptive resistance to the killing effects of MNNG seems to require a functional DNA polymerase I whereas resistance to the mutagenic effects can occur in polymerase I deficient strains; similarly killing adaptation could not be observed in a dam3 mutant, which was nonetheless able to show mutational adaptation. These results suggest that these two parts of the adaptive response must, at least to some extent, be separable. Both adaptive responses can be seen in the absence of uvrD + uvrE + -dependent mismatch repair, DNA polymerase II activity, or recF-mediated recombination and they are not affected by decreased levels of adenyl cyclase. The data presented support our earlier conclusion that adaptive resistance to the killing and mutagenic effect of MNNG is the result of previously uncharacterised repair pathways. (orig.) [de

  12. Farm Level Adaptation to Climate Change: The Case of Farmer's in the Ethiopian Highlands

    Science.gov (United States)

    Gebrehiwot, Tagel; van der Veen, Anne

    2013-07-01

    In Ethiopia, climate change and associated risks are expected to have serious consequences for agriculture and food security. This in turn will seriously affect the welfare of the people, particularly the rural farmers whose main livelihood depends on rain-fed agriculture. The level of impacts will mainly depend on the awareness and the level of adaptation in response to the changing climate. It is thus important to understand the role of the different factors that influence farmers' adaptation to ensure the development of appropriate policy measures and the design of successful development projects. This study examines farmers' perception of change in climatic attributes and the factors that influence farmers' choice of adaptation measures to climate change and variability. The estimated results from the climate change adaptation models indicate that the level of education, age and wealth of the head of the household; access to credit and agricultural services; and information on climate and temperature all influence farmers' choices of adaptation. Moreover, lack of information on adaptation measures and lack of finance are seen as the main factors inhibiting adaptation to climate change. These conclusions were obtained with a multinomial logit model, employing the results from a survey of 400 smallholder farmers in three districts in Tigray, northern Ethiopia.
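
    As an illustration of the modelling approach named above (not the study's data or specification), a multinomial logit of adaptation choice on household-level covariates can be fitted as follows; the covariates, choice set and synthetic data are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 400
X = np.column_stack([
    rng.integers(0, 12, n),          # years of education of the household head
    rng.integers(20, 70, n),         # age of the household head
    rng.integers(0, 2, n),           # access to credit (0/1)
    rng.integers(0, 2, n),           # access to climate information (0/1)
])
# Synthetic choice among: 0 = no adaptation, 1 = change crop variety, 2 = irrigation.
utilities = np.column_stack([
    np.zeros(n),
    0.2 * X[:, 0] + 1.0 * X[:, 3],
    0.1 * X[:, 0] + 0.5 * X[:, 2],
])
probs = np.exp(utilities) / np.exp(utilities).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

# With the default lbfgs solver, LogisticRegression fits a multinomial logit for 3 classes.
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.coef_)                    # one coefficient vector per adaptation option
```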

  13. The Role of Higher Level Adaptive Coding Mechanisms in the Development of Face Recognition

    Science.gov (United States)

    Pimperton, Hannah; Pellicano, Elizabeth; Jeffery, Linda; Rhodes, Gillian

    2009-01-01

    Developmental improvements in face identity recognition ability are widely documented, but the source of children's immaturity in face recognition remains unclear. Differences in the way in which children and adults visually represent faces might underlie immaturities in face recognition. Recent evidence of a face identity aftereffect (FIAE),…

  14. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2012-06-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  15. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2014-06-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  16. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    Austad, S. L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Guillen, L. E. [Idaho National Lab. (INL), Idaho Falls, ID (United States); McKnight, C. W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ferguson, D. S. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-04-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  17. Study on application of adaptive fuzzy control and neural network in the automatic leveling system

    Science.gov (United States)

    Xu, Xiping; Zhao, Zizhao; Lan, Weiyong; Sha, Lei; Qian, Cheng

    2015-04-01

    This paper discusses the application of adaptive fuzzy control and the BP neural network algorithm in a large-platform automatic leveling control system. The purpose is to develop a measurement system with fast platform leveling, so that a measurement installation mounted on the leveling platform can be brought to level quickly during precision measurement work, improving the efficiency of precision measurement. The paper focuses on the analysis of an automatic leveling system based on a fuzzy controller. A method combining the fuzzy controller with a BP neural network is used, in which the BP algorithm refines the empirical rules, to construct an adaptive fuzzy control system. Meanwhile, the learning rate of the BP algorithm is also adjusted at run time to accelerate convergence. The simulation results show that the proposed control method can effectively improve the leveling precision of the automatic leveling system and shorten the leveling time.

  18. Evaluation of a new neutron energy spectrum unfolding code based on an Adaptive Neuro-Fuzzy Inference System (ANFIS).

    Science.gov (United States)

    Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman

    2018-01-17

    The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system. The ANFIS algorithm uses the advantages of both fuzzy inference systems and artificial neural networks to improve the effectiveness of algorithms in various applications such as modeling, control and classification. The neutron pulse height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from simulations performed by the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly. (The value in each bin was generated randomly, and finally a normalization of each generated energy spectrum was performed.) The randomly generated neutron energy spectra were considered as output data of the developed ANFIS computational code in the training step. To calculate the neutron energy spectrum using conventional methods, an inverse problem with an approximately singular response matrix (with the determinant of the matrix close to zero) should be solved. The solution of the inverse problem using the conventional methods unfolds the neutron energy spectrum with low accuracy. Applying iterative algorithms to such a problem, or using intelligent algorithms (which avoid solving the inverse problem altogether), is usually preferred for unfolding the energy spectrum. Therefore, the main reason for development of intelligent algorithms like ANFIS for unfolding of neutron energy spectra is to avoid solving the inverse problem. In the present study, the unfolded neutron energy spectra of 252Cf and 241Am-9Be neutron sources using the developed computational code were
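
    The central idea above, learning a direct mapping from pulse-height distributions to spectra so that the near-singular response matrix never has to be inverted, can be sketched as follows. An MLP regressor is used here as a simple stand-in for the ANFIS model (ANFIS is not in common Python libraries), and the synthetic response matrix and spectra are placeholders of mine, not MCNPX-ESUT output.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n_bins, n_train = 32, 4000

# Random normalized "true" spectra and a smooth synthetic detector response matrix.
spectra = rng.random((n_train, n_bins))
spectra /= spectra.sum(axis=1, keepdims=True)
energy = np.arange(n_bins)
response = np.exp(-0.5 * ((energy[:, None] - energy[None, :]) / 4.0) ** 2)
pulse_heights = spectra @ response.T + 0.01 * rng.standard_normal((n_train, n_bins))

# Train the surrogate unfolding model: pulse-height distribution -> energy spectrum.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(pulse_heights, spectra)

test = spectra[:1]
unfolded = model.predict(test @ response.T)
print(float(np.abs(unfolded - test).max()))      # unfolding error on a known spectrum
```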

  19. QR-codes as a tool to increase physical activity level among school children during class hours

    DEFF Research Database (Denmark)

    Christensen, Jeanette Reffstrup; Kristensen, Allan; Bredahl, Thomas Viskum Gjelstrup

    Introduction: Danish students are no longer fulfilling recommendations for everyday physical activity. Since August 2014, Danish students in public schools have therefore been required to be physically active... The study examined the students' physical activity level during class hours. Methods: A before-after study was used to examine 12 students' physical activity levels, measured with pedometers over six lessons: three lessons of traditional teaching and three lessons in which QR-codes were used for orienteering in the school area... as old fashioned. The students also felt positive about being physically active during teaching. Discussion and conclusion: QR-codes as a teaching tool are usable for making students more physically active during teaching. The students were excited about using QR-codes and experienced good motivation...

  20. Multi-level adaptive simulation of transient two-phase flow in heterogeneous porous media

    KAUST Repository

    Chueh, C.C.

    2010-10-01

    An implicit pressure and explicit saturation (IMPES) finite element method (FEM) incorporating a multi-level shock-type adaptive refinement technique is presented and applied to investigate transient two-phase flow in porous media. Local adaptive mesh refinement is implemented seamlessly with state-of-the-art artificial diffusion stabilization allowing simulations that achieve both high resolution and high accuracy. Two benchmark problems, modelling a single crack and a random porous medium, are used to demonstrate the robustness of the method and illustrate the capabilities of the adaptive refinement technique in resolving the saturation field and the complex interaction (transport phenomena) between two fluids in heterogeneous media. © 2010 Elsevier Ltd.
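
    As an illustration only (not the paper's finite element implementation), a crude gradient-based criterion of the kind used to flag cells for shock-type local refinement might look like this; the threshold value is an assumed parameter.

        import numpy as np

        def flag_cells_for_refinement(saturation, dx, threshold=0.1):
            # Mark cells whose saturation gradient exceeds the threshold,
            # i.e. cells near a sharp front that deserve a finer mesh.
            grad = np.abs(np.gradient(saturation, dx))
            return grad > threshold

        s = np.tanh(np.linspace(-3, 3, 50))            # a smoothed saturation front
        refine = flag_cells_for_refinement(s, dx=0.1)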

  1. Radiological analyses of intermediate and low level supercompacted waste drums by VQAD code

    International Nuclear Information System (INIS)

    Bace, M.; Trontl, K.; Gergeta, K.

    2004-01-01

    In order to increase the possibilities of the QAD-CGGP code, as well as to make the code more user friendly, modifications of the code have been performed. A general multisource option has been introduced into the code and a user friendly environment has been created through a Graphical User Interface. The improved version of the code has been used to calculate gamma dose rates of a single supercompacted waste drum and a pair of supercompacted waste drums. The results of the calculation were compared with the standard QAD-CGGP results. (author)
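
    QAD-CGGP is a point-kernel shielding code, so a multisource option amounts to summing point-kernel contributions over several sources. The sketch below is a heavily simplified, generic illustration of that sum (single attenuation coefficient, crude linear build-up factor, arbitrary units), not the VQAD implementation.

        import numpy as np

        def dose_rate(detector, sources, strengths, mu=0.06, flux_to_dose=1.0):
            # Simplified point-kernel sum: S * B * exp(-mu * r) / (4 * pi * r^2) per source,
            # with an assumed linear build-up factor B = 1 + mu * r.
            detector = np.asarray(detector, dtype=float)
            total = 0.0
            for source, s in zip(np.asarray(sources, dtype=float), strengths):
                r = np.linalg.norm(detector - source)
                buildup = 1.0 + mu * r
                total += s * buildup * np.exp(-mu * r) / (4.0 * np.pi * r ** 2)
            return flux_to_dose * total

        # Two drums treated as point sources, detector roughly 1 m from their midpoint.
        print(dose_rate([1.0, 0.0, 0.0], [[0.0, -0.3, 0.0], [0.0, 0.3, 0.0]], [1e9, 1e9]))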

  2. SPRINT: A Tool to Generate Concurrent Transaction-Level Models from Sequential Code

    Directory of Open Access Journals (Sweden)

    Richard Stahl

    2007-01-01

    Full Text Available A high-level concurrent model such as a SystemC transaction-level model can provide early feedback during the exploration of implementation alternatives for state-of-the-art signal processing applications like video codecs on a multiprocessor platform. However, the creation of such a model starting from sequential code is a time-consuming and error-prone task. It is typically done only once, if at all, for a given design. This lack of exploration of the design space often leads to a suboptimal implementation. To support our systematic C-based design flow, we have developed a tool to generate a concurrent SystemC transaction-level model for user-selected task boundaries. Using this tool, different parallelization alternatives have been evaluated during the design of an MPEG-4 simple profile encoder and an embedded zero-tree coder. Generation plus evaluation of an alternative was possible in less than six minutes. This is fast enough to allow extensive exploration of the design space.

  3. Reduction and resource recycling of high-level radioactive wastes through nuclear transmutation with PHITS code

    International Nuclear Information System (INIS)

    Fujita, Reiko

    2017-01-01

    In the ImPACT program of the Cabinet Office, programs are underway to reduce long-lived fission products (LLFP) contained in high-level radioactive waste through nuclear transmutation, or to recycle/utilize useful nuclear species. This paper outlines this program and describes recent achievements. This program consists of five projects: (1) separation/recovery technology, (2) acquisition of nuclear transmutation data, (3) nuclear reaction theory model and simulation, (4) novel nuclear reaction control and development of elemental technology, and (5) discussions on process concept. The project (1) develops a technology for dissolving vitrified solid, a technology for recovering LLFP from high-level waste liquid, and a technology for separating odd and even lasers. Project (2) acquires the new nuclear reaction data of Pd-107, Zr-93, Se-79, and Cs-135 using RIKEN's RIBF or JAEA's J-PARC. Project (3) improves new nuclear reaction theory and structural model using the nuclear reaction data measured in (2), improves/upgrades nuclear reaction simulation code PHITS, and proposes a promising nuclear transmutation pathway. Project (4) develops an accelerator that realizes the proposed transmutation route and its elemental technology. Project (5) performs the conceptual design of the process to realize (1) to (4), and constructs the scenario of reducing/utilizing high-level radioactive waste to realize this design. (A.O.)

  4. Adaptation

    International Development Research Centre (IDRC) Digital Library (Canada)

    building skills, knowledge or networks on adaptation, ... the African partners leading the AfricaAdapt network, together with the UK-based Institute of Development Studies; and ... UNCCD Secretariat, Regional Coordination Unit for Africa, Tunis, Tunisia .... 26 Rural–urban Cooperation on Water Management in the Context of.

  5. A Climate Change Adaptation Planning Process for Low-Lying, Communities Vulnerable to Sea Level Rise

    Directory of Open Access Journals (Sweden)

    Kristi Tatebe

    2012-09-01

    Full Text Available While the province of British Columbia (BC, Canada, provides guidelines for flood risk management, it is local governments’ responsibility to delineate their own flood vulnerability, assess their risk, and integrate these with planning policies to implement adaptive action. However, barriers such as the lack of locally specific data and public perceptions about adaptation options mean that local governments must address the need for adaptation planning within a context of scientific uncertainty, while building public support for difficult choices on flood-related climate policy and action. This research demonstrates a process to model, visualize and evaluate potential flood impacts and adaptation options for the community of Delta, in Metro Vancouver, across economic, social and environmental perspectives. Visualizations in 2D and 3D, based on hydrological modeling of breach events for existing dike infrastructure, future sea level rise and storm surges, are generated collaboratively, together with future adaptation scenarios assessed against quantitative and qualitative indicators. This ‘visioning package’ is being used with staff and a citizens’ Working Group to assess the performance, policy implications and social acceptability of the adaptation strategies. Recommendations based on the experience of the initiative are provided that can facilitate sustainable future adaptation actions and decision-making in Delta and other jurisdictions.

  6. Radio-adaptation: cellular and molecular features of a response to low levels of ionizing radiation

    International Nuclear Information System (INIS)

    Rigaud, O.

    1998-01-01

    It is well established that sublethal doses of DNA damaging agents induce protective mechanisms against a subsequent high-dose treatment; for instance, the phenomenon of radio-adaptation in the case of ionizing radiations. Since the early observation described in 1984, numerous studies have confirmed the radio-adaptive response in terms of reduction of chromosomal breaks for varied biological models in vitro and in vivo. Evidence for an adaptive response against the induction of gene mutations and the lethal effect is clearly demonstrated. This paper reviews the experimental results describing various aspects of these adaptive responses expressed on these different biological end-points. The molecular mechanism underlying radio-adaptation still remains unclear. The development of this phenomenon requires de novo synthesis of transcripts and proteins during the time interval between the two doses. Some data are consistent with the hypotheses that these gene products would be involved in the activation of DNA repair pathways and antioxidant systems. However, a major question still remains unanswered; indeed, it is not clear whether or not the radio-adaptation could affect the estimation of cancer risk related to low-level exposure to ionizing radiation, a major concern in radioprotection. Until such data are available, it is as yet unwise to invoke the beneficial effects of radio-adaptation. (authors)

  7. Development of standards, codes of practice and guidelines at the national level

    International Nuclear Information System (INIS)

    Swindon, T.N.

    1989-01-01

    Standards, codes of practice and guidelines are defined and their different roles in radiation protection specified. The work of the major bodies that develop such documents in Australia - the National Health and Medical Research Council and the Standards Association of Australia - is discussed. The codes of practice prepared under the Environment Protection (Nuclear Codes) Act, 1978, an act of the Australian Federal Parliament, are described and the guidelines associated with them outlined. 5 refs

  8. Self-adaptive phosphor coating technology for wafer-level scale chip packaging

    International Nuclear Information System (INIS)

    Zhou Linsong; Rao Haibo; Wang Wei; Wan Xianlong; Liao Junyuan; Wang Xuemei; Zhou Da; Lei Qiaolin

    2013-01-01

    A new self-adaptive phosphor coating technology has been successfully developed, adopting a slurry method combined with a self-exposure process. A phosphor suspension in a water-soluble photoresist was applied, exposed to the LED's own blue light, and developed to form a conformal phosphor coating that adapts to the angular distribution of blue-light intensity and gives better spatial color uniformity. The self-adaptive phosphor coating technology has also been applied successfully on the wafer surface to realize a wafer-level-scale conformal phosphor coating. The first-stage experiments show satisfying results and adequately demonstrate the flexibility of the self-adaptive coating technology for WLSCP applications. (semiconductor devices)

  9. Global cost analysis on adaptation to sea level rise based on RCP/SSP scenarios

    Science.gov (United States)

    Kumano, N.; Tamura, M.; Yotsukuri, M.; Kuwahara, Y.; Yokoki, H.

    2017-12-01

    Low-lying areas are the most vulnerable to future sea level rise (SLR) due to climate change. In order to adapt to SLR, it is necessary to decide whether to retreat from vulnerable areas or to install dykes to protect them from inundation. Cost analysis of adaptation using coastal dykes is therefore one of the most essential issues in the context of climate change and its countermeasures. However, few studies have globally evaluated the future costs of adaptation in coastal areas. This study attempts such a global analysis. First, global distributions of projected inundation impacts induced by SLR, including astronomical high tide, were assessed. Economic damage was estimated on the basis of the econometric relationship between past hydrological disasters, affected population, and per capita GDP using CRED's EM-DAT database. Second, the cost of adaptation was determined using a cost database and future scenarios. The authors have built a cost database of installed coastal dykes worldwide and applied it to estimating the future cost of adaptation; the unit cost of dyke construction increases with socio-economic scenario (SSP) variables such as per capita GDP. The length of vulnerable coastline is calculated by identifying inundation areas using ETOPO1, and the future cost is obtained by multiplying the length of vulnerable coastline by the unit cost of dyke construction. Third, the effectiveness of dyke construction was estimated by comparing cases with and without adaptation. As a result, it was found that the incremental adaptation cost is lower than the economic damage in the SSP1 and SSP3 cases under the RCP scenarios, while the cost of adaptation depends on the durability of the coastal dykes.
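
    A toy version (not the authors' model) of the cost bookkeeping sketched above: adaptation cost as vulnerable coastline length times a unit dyke cost, compared against the damage avoided in a with/without comparison; all numbers are placeholders.

        def adaptation_summary(coast_km, unit_cost_per_km, damage_no_adapt, damage_with_adapt):
            # Adaptation cost versus the economic damage it avoids.
            cost = coast_km * unit_cost_per_km
            avoided = damage_no_adapt - damage_with_adapt
            return {"cost": cost, "avoided_damage": avoided, "net_benefit": avoided - cost}

        print(adaptation_summary(coast_km=1200, unit_cost_per_km=2.5e6,
                                 damage_no_adapt=9.0e9, damage_with_adapt=1.5e9))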

  10. Climate change vulnerability, adaptation and risk perceptions at farm level in Punjab, Pakistan.

    Science.gov (United States)

    Abid, Muhammad; Schilling, Janpeter; Scheffran, Jürgen; Zulfiqar, Farhad

    2016-03-15

    Pakistan is among the countries highly exposed and vulnerable to climate change. The country has experienced many severe floods, droughts and storms over the last decades. However, little research has focused on the investigation of vulnerability and adaptation to climate-related risks in Pakistan. Against this backdrop, this article investigates farm level risk perceptions and different aspects of vulnerability to climate change, including sensitivity and adaptive capacity, at farm level in Pakistan. We interviewed a total of 450 farming households through structured questionnaires in three districts of Punjab province of Pakistan. This study identified a number of climate-related risks perceived by farm households, such as extreme temperature events, insect attacks, animal diseases and crop pests. Limited water availability, high levels of poverty and a weak role of local government in providing proper infrastructure were the factors that make farmers more sensitive to climate-related risks. Uncertainty or reductions in crop and livestock yields, changed cropping calendars and water shortages were the major adverse impacts of climate-related risks reported by farmers in the study districts. Better crop production was reported as the only positive effect. Further, this study identified a number of farm level adaptation methods employed by farm households, including changes in crop variety, crop types, planting dates and input mix, depending upon the nature of the climate-related risks. Lack of resources, limited information, lack of finances and institutional support were some constraints that limit the adaptive capacity of farm households. This study also reveals a positive role of cooperation and a negative role of conflict in the adaptation process. The study suggests addressing the constraints to adaptation and improving farm level cooperation through extended outreach and distribution of institutional services, particularly climate-specific farm advisory...

  11. Light and dark adaptation of visually perceived eye level controlled by visual pitch.

    Science.gov (United States)

    Matin, L; Li, W

    1995-01-01

    The pitch of a visual field systematically influences the elevation at which a monocularly viewing subject sets a target so as to appear at visually perceived eye level (VPEL). The deviation of the setting from true eye level averages approximately 0.6 times the angle of pitch while viewing a fully illuminated complexly structured visual field and is only slightly less with one or two pitched-from-vertical lines in a dark field (Matin & Li, 1994a). The deviation of VPEL from baseline following 20 min of dark adaptation reaches its full value less than 1 min after the onset of illumination of the pitched visual field and decays exponentially in darkness following 5 min of exposure to visual pitch, either 30 degrees topbackward or 20 degrees topforward. The magnitude of the VPEL deviation measured with the dark-adapted right eye following left-eye exposure to pitch was 85% of the deviation that followed pitch exposure of the right eye itself. Time constants for VPEL decay to the dark baseline were the same for same-eye and cross-adaptation conditions and averaged about 4 min. The time constants for decay during dark adaptation were somewhat smaller, and the change during dark adaptation extended over a 16% smaller range following the viewing of the dim two-line pitched-from-vertical stimulus than following the viewing of the complex field. The temporal course of light and dark adaptation of VPEL is virtually identical to the course of light and dark adaptation of the scotopic luminance threshold following exposure to the same luminance. We suggest that, following rod stimulation along particular retinal orientations by portions of the pitched visual field, the storage of the adaptation process resides in the retinogeniculate system and is manifested in the focal system as a change in luminance threshold and in the ambient system as a change in VPEL. The linear model previously developed to account for VPEL, which was based on the interaction of influences from the...

  12. Operationalizing analysis of micro-level climate change vulnerability and adaptive capacity

    DEFF Research Database (Denmark)

    Jiao, Xi; Moinuddin, Hasan

    2016-01-01

    This paper explores vulnerability and adaptive capacity of rural communities in Southern Laos, where households are highly dependent on climate-sensitive natural resources and vulnerable to seasonal weather fluctuations. The speed and magnitude of climate-induced changes may seriously challenge their ability to adapt. Participatory group discussions and 271 household surveys in three villages highlight the current level of vulnerability and adaptive capacity towards climatic variability and risks. This paper visualizes three dimensions of the vulnerability framework at two levels using the Community Climate Vulnerability Index and household climate vulnerability cube. Results show that not only poor households are most at risk from climate change challenges, but also those better-off households highly dependent on specialized agricultural production are locally exposed to climate change risks...

  13. Development of SAGE, A computer code for safety assessment analyses for Korean Low-Level Radioactive Waste Disposal

    International Nuclear Information System (INIS)

    Zhou, W.; Kozak, Matthew W.; Park, Joowan; Kim, Changlak; Kang, Chulhyung

    2002-01-01

    This paper describes a computer code, called SAGE (Safety Assessment Groundwater Evaluation), to be used for evaluation of the concept for low-level waste disposal in the Republic of Korea (ROK). The conceptual model in the code is focused on releases from a gradually degrading engineered barrier system to an underlying unsaturated zone, thence to a saturated groundwater zone. Doses can be calculated for several biosphere systems including drinking contaminated groundwater, and subsequent contamination of foods, rivers, lakes, or the ocean by that groundwater. The flexibility of the code will permit both generic analyses in support of design and site development activities, and straightforward modification to permit site-specific and design-specific safety assessments of a real facility as progress is made toward implementation of a disposal site. In addition, the code has been written to easily interface with more detailed codes for specific parts of the safety assessment. In this way, the code's capabilities can be significantly expanded as needed. The code has the capability to treat input parameters either deterministically or probabilistically. Parameter input is achieved through a user-friendly Graphical User Interface.
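
    Not the SAGE code itself, but a minimal compartment sketch of the conceptual model described above: first-order transfer from a degrading engineered barrier through an unsaturated zone to a saturated zone, with radioactive decay, followed by a steady-state pathway dose conversion factor; all rate constants are invented for illustration.

        import numpy as np

        def simulate_release(n0=1.0, k_barrier=0.02, k_unsat=0.05, k_sat=0.1,
                             decay_const=1e-3, dt=1.0, n_steps=500):
            # Inventories in three compartments; each step moves a first-order fraction
            # downstream and decays every compartment.
            barrier, unsat, sat, released = n0, 0.0, 0.0, []
            for _ in range(n_steps):
                f1, f2, f3 = k_barrier * barrier * dt, k_unsat * unsat * dt, k_sat * sat * dt
                barrier -= f1
                unsat += f1 - f2
                sat += f2 - f3
                released.append(f3)                     # flux leaving to the biosphere this step
                decay = np.exp(-decay_const * dt)
                barrier, unsat, sat = barrier * decay, unsat * decay, sat * decay
            return np.array(released)

        release_rate = simulate_release()
        dose = 1e-5 * release_rate   # pathway dose conversion factor (assumed constant)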

  14. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Executive summary

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1985-05-01

    This document was written to provide guidance to managers and site operators on how ground-water transport codes should be selected for assessing burial site performance. There is a need for a formal approach to selecting appropriate codes from the multitude of potentially useful ground-water transport codes that are currently available. Code selection is a problem that requires more than merely considering mathematical equation-solving methods. These guidelines are very general and flexible and are also meant for developing systems simulation models to be used to assess the environmental safety of low-level waste burial facilities. Code selection is only a single aspect of the overall objective of developing a systems simulation model for a burial site. The guidance given here is mainly directed toward applications-oriented users, but managers and site operators need to be familiar with this information to direct the development of scientifically credible and defensible transport assessment models. Some specific advice for managers and site operators on how to direct a modeling exercise is based on the following five steps: identify specific questions and study objectives; establish costs and schedules for achieving answers; enlist the aid of professional model applications group; decide on approach with applications group and guide code selection; and facilitate the availability of site-specific data. These five steps for managers/site operators are discussed in detail following an explanation of the nine systems model development steps, which are presented first to clarify what code selection entails

  15. FPGA-Based Channel Coding Architectures for 5G Wireless Using High-Level Synthesis

    Directory of Open Access Journals (Sweden)

    Swapnil Mhaske

    2017-01-01

    Full Text Available We propose strategies to achieve a high-throughput FPGA architecture for quasi-cyclic low-density parity-check codes based on circulant-1 identity matrix construction. By splitting the node processing operation in the min-sum approximation algorithm, we achieve pipelining in the layered decoding schedule without utilizing additional hardware resources. High-level synthesis compilation is used to design and develop the architecture on the FPGA hardware platform. To validate this architecture, an IEEE 802.11n compliant 608 Mb/s decoder is implemented on the Xilinx Kintex-7 FPGA using the LabVIEW FPGA Compiler in the LabVIEW Communication System Design Suite. Architecture scalability was leveraged to accomplish a 2.48 Gb/s decoder on a single Xilinx Kintex-7 FPGA. Further, we present rapidly prototyped experimentation of an IEEE 802.16 compliant hybrid automatic repeat request system based on the efficient decoder architecture developed. In spite of the mixed nature of data processing—digital signal processing and finite-state machines—LabVIEW FPGA Compiler significantly reduced time to explore the system parameter space and to optimize in terms of error performance and resource utilization. A 4x improvement in the system throughput, relative to a CPU-based implementation, was achieved to measure the error-rate performance of the system over large, realistic data sets using accelerated, in-hardware simulation.
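
    The decoder above is an FPGA design; purely to illustrate the min-sum approximation named in the abstract, here is a small software sketch of one check-node update (sign product and minimum magnitude of the other incoming messages).

        import numpy as np

        def min_sum_check_node(llrs):
            # For each edge i, output sign(product of the other LLRs) times the minimum
            # magnitude of the other LLRs -- the min-sum check-node approximation.
            llrs = np.asarray(llrs, dtype=float)
            out = np.empty_like(llrs)
            for i in range(len(llrs)):
                others = np.delete(llrs, i)
                out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
            return out

        print(min_sum_check_node([1.2, -0.4, 3.0, -2.1]))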

  16. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, B.L. [Oak Ridge National Lab., TN (United States); Sartori, E. [OCDE/OECD NEA Data Bank, Issy-les-Moulineaux (France); Viedma, L.G. de [Consejo de Seguridad Nuclear, Madrid (Spain)

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  17. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    International Nuclear Information System (INIS)

    Kirk, B.L.; Sartori, E.; Viedma, L.G. de

    1997-01-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management

  18. Enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding for four-level holographic data storage systems

    Science.gov (United States)

    Kong, Gyuyeol; Choi, Sooyong

    2017-09-01

    An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, determined through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation results show that the proposed four-ary modulation code provides more than 1 dB of gain over the conventional four-ary modulation code.
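
    As a hedged illustration of the kind of soft-decision metric such a Viterbi decoder would consume (not the proposed code itself), the snippet below computes squared-Euclidean branch metrics of a received sample against four assumed signal levels.

        import numpy as np

        LEVELS = np.array([0.0, 1.0, 2.0, 3.0])   # assumed four-level symbol amplitudes

        def branch_metrics(received_sample):
            # Smaller metric = closer level; a soft-decision Viterbi decoder accumulates
            # these along trellis paths instead of making hard symbol decisions first.
            return (received_sample - LEVELS) ** 2

        print(branch_metrics(1.3))   # -> [1.69, 0.09, 0.49, 2.89]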

  19. Integrated assessment of farm level adaptation to climate change in agriculture

    NARCIS (Netherlands)

    Mandryk, M.

    2016-01-01

    The findings of the thesis allowed assessing plausible futures of agriculture in Flevoland around 2050, with insights into effective adaptation to climate change at different levels. Besides empirical findings, this thesis contributed methodologically to the portfolio of climate change impact and...

  20. Adaptation to the Impacts of Sea Level Rise in the Nile Delta Coastal ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Outputs. Journal articles. Facing the Tide - REVOLVE Magazine: Water Around the Mediterranean. Download the PDF. Reports. Adaptation to the impacts of sea level rise in the Nile Delta coastal zone, Egypt: final project report. Download the PDF ...

  1. Cities and Sea Level Rise: A Roadmap for Flood Hazard Adaptation

    Science.gov (United States)

    Horn, Diane; Cousins, Ann

    2016-04-01

    Coastal cities will face a range of increasingly severe challenges as sea level rises, and adaptation to future flood risk will require more than structural defences. Many cities will not be able to rely solely on engineering structures for protection and will need to develop a suite of policy responses to increase their resilience to impacts of rising sea level. The tools to promote flood risk adaptation are already within the capacity of most cities, with an assortment of policy tools available to address other land-use problems which can be refashioned and used to adapt to sea level rise. This study reviews approaches for urban adaptation through detailed analyses of case studies of cities which have developed flood adaptation strategies that combine structural defences with innovative approaches to living with flood risk. The aim of the overall project is to produce a 'roadmap' to guide practitioners through the process of analysing coastal flood risk in urban areas. Methodologies and tools to estimate vulnerability to coastal flooding, damages suffered, and the assessment of flood defences and adaptation measures are complemented with a discussion on the essential impact that local policy has on the treatment of coastal flooding and the constraints and opportunities that result from the specific country or locality characteristics in relation to economic, political, social and environmental priorities, which are likely to dictate the approach to coastal flooding and the actions proposed. Case studies of adaptation strategies used by Rotterdam, Bristol, Ho Chi Minh City and Norfolk, Virginia, are used to draw out a range of good practice elements that promote effective adaptation to sea level rise. These can be grouped into risk reduction, governance issues, and insurance, and can be used to provide examples of how other cities could adopt and implement flood adaptation strategies from a relatively limited starting position. Most cities will neither be able to

  2. Adaptive EMG noise reduction in ECG signals using noise level approximation

    Science.gov (United States)

    Marouf, Mohamed; Saranovac, Lazar

    2017-12-01

    In this paper the use of noise level approximation for adaptive Electromyogram (EMG) noise reduction in Electrocardiogram (ECG) signals is introduced. To achieve adequate adaptiveness, a translation-invariant noise level approximation is employed. The approximation takes the form of a guiding signal extracted as an estimate of signal quality versus EMG noise. The noise reduction framework is based on a bank of low-pass filters, so adaptive noise reduction is achieved by selecting the appropriate filter with respect to the guiding signal, aiming at the best trade-off between the signal distortion caused by filtering and signal readability. For evaluation purposes, both real EMG and artificial noise are used. The tested ECG signals are from the MIT-BIH Arrhythmia Database Directory, while both real and artificial records of EMG noise are added and used in the evaluation process. First, a comparison with state-of-the-art methods is conducted to verify the performance of the proposed approach in terms of noise cancellation while preserving the QRS complex waves. Additionally, the signal-to-noise ratio improvement after the adaptive noise reduction is computed and presented for the proposed method. Finally, the impact of the adaptive noise reduction method on QRS complex detection was studied: the tested signals are delineated using a state-of-the-art method, and the QRS detection improvement for different SNR levels is presented.
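
    A rough sketch in the spirit of the framework described above (not the authors' implementation): estimate a noise-level guiding value for a signal segment and pick a low-pass cutoff from a small filter bank accordingly. The sampling rate, cutoffs, thresholds and noise estimator are all assumptions.

        import numpy as np
        from scipy.signal import butter, filtfilt

        FS = 360.0                        # assumed ECG sampling rate, Hz
        CUTOFFS = [40.0, 25.0, 15.0]      # filter bank: gentler to stronger smoothing

        def noise_level(segment):
            # Crude guiding signal: power of the high-frequency residual left by a mild filter.
            b, a = butter(4, 40.0 / (FS / 2), btype="low")
            return float(np.mean((segment - filtfilt(b, a, segment)) ** 2))

        def adaptive_lowpass(segment, thresholds=(1e-4, 1e-3)):
            idx = sum(noise_level(segment) > t for t in thresholds)   # noisier -> stronger filter
            b, a = butter(4, CUTOFFS[idx] / (FS / 2), btype="low")
            return filtfilt(b, a, segment)

        ecg = np.sin(2 * np.pi * 1.2 * np.arange(0, 2, 1 / FS))      # synthetic 2 s segment
        clean = adaptive_lowpass(ecg + 0.05 * np.random.default_rng(0).standard_normal(ecg.size))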

  3. Solutions to HYDROCOIN [Hydrologic Code Intercomparison] Level 1 problems using STOKES and PARTICLE (Cases 1,2,4,7)

    International Nuclear Information System (INIS)

    Gureghian, A.B.; Andrews, A.; Steidl, S.B.; Brandstetter, A.

    1987-10-01

    HYDROCOIN (Hydrologic Code Intercomparison) Level 1 benchmark problems are solved using the finite element ground-water flow code STOKES and the pathline generating code PARTICLE developed for the Office of Crystalline Repository Development (OCRD). The objective of the Level 1 benchmark problems is to verify the numerical accuracy of ground-water flow codes by intercomparison of their results with analytical solutions and other numerical computer codes. Seven test cases were proposed for Level 1 to the Swedish Nuclear Power Inspectorate, the managing participant of HYDROCOIN. Cases 1, 2, 4, and 7 were selected by OCRD because of their appropriateness to the nature of crystalline repository hydrologic performance. The background relevance, conceptual model, and assumptions of each case are presented. The governing equations, boundary conditions, input parameters, and the solution schemes applied to each case are discussed. The results are shown in graphic and tabular form with concluding remarks. The results demonstrate the two-dimensional verification of STOKES and PARTICLE. 5 refs., 61 figs., 30 tabs

  4. Cities and Sea Level Rise: A Roadmap for Flood Hazard Adaptation

    Science.gov (United States)

    Horn, D. P.; Cousins, A.

    2015-12-01

    Coastal cities will face a range of increasingly severe challenges as sea level rises, and adaptation to future flood risk will require more than structural defences. Many cities will not be able to rely solely on engineering structures for protection and will need to develop a suite of policy responses to increase their resilience to impacts of rising sea level. Local governments generally maintain day-to-day responsibility and control over the use of the vast majority of property at risk of flooding, and the tools to promote flood risk adaptation are already within the capacity of most cities. Policy tools available to address other land-use problems can be refashioned and used to adapt to sea level rise. This study reviews approaches for urban adaptation through case studies of cities which have developed flood adaptation strategies that combine structural defences with innovative approaches to living with flood risk. The aim of the overall project is to produce a 'roadmap' to guide practitioners through the process of analysing coastal flood risk in urban areas. Technical knowledge of flood risk reduction measures is complemented with a consideration of the essential impact that local policy has on the treatment of coastal flooding and the constraints and opportunities that result from the specific country or locality characteristics in relation to economic, political, social and environmental priorities, which are likely to dictate the approach to coastal flooding and the actions proposed. Detailed analyses of the adaptation strategies used by Rotterdam (Netherlands), Bristol (UK), and Norfolk (Virginia) are used to draw out a range of good practice elements that promote effective adaptation to sea level rise. These can be grouped into risk reduction, governance issues, and insurance, and can be used to provide examples of how other cities could adopt and implement flood adaptation strategies from a relatively limited starting position. Most cities will

  5. [Adaptation of self-image level and defense mechanisms in elderly patients with complicated stoma].

    Science.gov (United States)

    Ortiz-Rivas, Miriam Karina; Moreno-Pérez, Norma Elvira; Vega-Macías, Héctor Daniel; Jiménez-González, María de Jesús; Navarro-Elías, María de Guadalupe

    2014-01-01

    Ostomy patients face a number of problems that impact negatively on their personal welfare. The aim of this research is to determine the nature and intensity of the relationship between the level of the self-concept adaptive mode and the consistent use of coping strategies of older adults with a stoma. Quantitative, correlational and cross-sectional design. VIVEROS 03 and CAPS surveys were applied in 3 hospitals in the City of Durango, México. The study included 90 older adults with an intestinal elimination stoma with complications. Kendall's Tau-b coefficient was the non-parametric test used to measure this association. Most older adults analyzed (61.3 < % < 79.9) are not completely adapted to the condition of living with an intestinal stoma. There is also a moderate positive correlation (0.569) between the level of adaptation of the older adults with a stoma and the conscious use of coping strategies. The presence of an intestinal stoma represents a physical and psychological health problem that is reflected in the level of adaptation of the self-image. Elderly people with a stoma use only a small part of the available defense mechanisms as part of the coping process. This limits their ability to face the adversities related to their condition, potentially causing major health complications. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.

  6. Influence of physical education on the level of adaptation of students to educational activity.

    Directory of Open Access Journals (Sweden)

    Korolinska S.V.

    2012-06-01

    Full Text Available Problems of adaptation of students to educational activity are examined and summarized. 100 students took part in the research. A number of socio-psychological factors that determine the efficiency of students' adaptation to the educational process were identified. Practical recommendations on the organization of the students' educational process are developed. It is recommended that physical culture be used widely as a means of shortening the adaptation period and increasing the level of physical and mental capacity. It is noted that almost 90% of students have health deviations and over 50% have unsatisfactory physical preparedness. It was found that second-year students show more indications of low situational anxiety than first-year students, and that a characteristic feature of the psychological state during an examination session is emotional-volitional instability.

  7. A multi-attribute approach to choosing adaptation strategies: Application to sea-level rise

    International Nuclear Information System (INIS)

    Smith, A.E.; Chu, H.Q.

    1994-01-01

    Selecting good adaptation strategies in anticipation of climate change is gaining increasing attention as it becomes increasingly clear that much of the likely change is already committed, and could not be avoided even with aggressive and immediate emissions reductions. Adaptation decision making will place special requirements on regional and local planners in the US and other countries, especially developing countries. Approaches, tools, and guidance will be useful to assist in an effective response to the challenge. This paper describes the value of using a multi-attribute approach for evaluating adaptation strategies and its implementation as a decision-support software tool to help planners understand and execute this approach. The multi-attribute approach described here explicitly addresses the fact that many aspects of the decision cannot be easily quantified, that future conditions are highly uncertain, and that there are issues of equity, flexibility, and coordination that may be as important to the decision as costs and benefits. The approach suggested also avoids trying to collapse information on all of the attributes to a single metric. Such metrics can obliterate insights about the nature of the trade-offs that must be made in choosing among very dissimilar types of responses to the anticipated threat of climate change. Implementation of such an approach requires management of much information, and an ability to easily manipulate its presentation while seeking acceptable trade-offs. The Adaptation Strategy Evaluator (ASE) was developed under funding from the US Environmental Protection Agency to provide user-friendly, PC-based guidance through the major steps of a multi-attribute evaluation. The initial application of ASE, and the focus of this paper, is adaptation to sea level rise. However, the approach can be easily adapted to any multi-attribute choice problem, including the range of other adaptation planning needs
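
    Purely illustrative (not the ASE tool): a Pareto-dominance check keeps the attribute scores separate rather than collapsing them into a single metric; the attribute names and scores below are invented.

        def dominates(a, b):
            # a dominates b if it is at least as good on every attribute and strictly
            # better on at least one (larger scores assumed better here).
            return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

        # (cost-effectiveness, equity, flexibility) scores for three adaptation strategies
        strategies = {"dike": (0.6, 0.4, 0.3), "retreat": (0.5, 0.7, 0.8), "elevate": (0.4, 0.4, 0.3)}
        nondominated = [n for n, s in strategies.items()
                        if not any(dominates(t, s) for m, t in strategies.items() if m != n)]
        print(nondominated)   # strategies worth keeping on the table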

  8. Plasma GLP-2 levels and intestinal markers in the juvenile pig during intestinal adaptation

    DEFF Research Database (Denmark)

    Paris, Monique C; Fuller, Peter J; Carstensen, Bendix

    2004-01-01

    Adaptation of the residual small bowel following resection is dependent on luminal and humoral factors. We aimed to establish if circulating levels of glucagon-like peptide (GLP-2) change under different dietary regimens following resection and to determine if there is a relationship between plasma GLP-2 levels and markers of intestinal adaptation. Four-week-old piglets underwent a 75% proximal small bowel resection (n = 31) or transection (n = 14). Postoperatively they received either pig chow (n = 14), nonpolymeric (elemental) infant formula (n = 7), or polymeric infant formula alone (n = 8) or supplemented either with fiber (n = 6) or with bovine colostrum protein concentrate (CPC; n = 10) for 8 weeks until sacrifice. Plasma GLP-2 levels were measured at weeks 0, 2, 4, and 8 postoperatively. In addition, end-stage parameters were studied at week 8, including weight gain, ileal villus height, crypt...

  9. Scoping Adaptation Needs for Smallholders in the Brazilian Amazon: A Municipal Level Case Study

    Directory of Open Access Journals (Sweden)

    Osuna Vanesa Rodríguez

    2014-05-01

    Full Text Available Over the past decade, several climate extreme events have caused considerable economic damage and hardship in the Brazilian Amazon region, especially for small-scale producers. Based on household surveys and focus group interviews in the Municipality of Alenquer as well as secondary data analyses and a literature review at the regional level, this study seeks to assess rural small-scale producers' vulnerability to climate and non-climate-related shocks and identify entry points for government action to support adaptation at the local level. In our case study area, small-scale producers with similar wealth, self-sufficiency, and resource use specialisation levels exhibited stark variation in levels of sensitivity and adaptive capacity to climate and non-climate-related shocks. Our findings indicate that this variation is partly driven by cultural, historical, and environmental resource use specialisation strategies and partly by differences in local governance capacity and the level of social organisation. Emerging government-led initiatives to promote climate change adaptation in the region would benefit from taking these factors into account when designing local implementation strategies and priorities.

  10. Automatic coding and selection of causes of death: an adaptation of Iris software for using in Brazil.

    Science.gov (United States)

    Martins, Renata Cristófani; Buchalla, Cassia Maria

    2015-01-01

    To prepare a dictionary in Portuguese for use in Iris and to evaluate its completeness for coding causes of death. Initially, a dictionary with all illnesses and injuries was created based on the International Classification of Diseases - tenth revision (ICD-10) codes. This dictionary was based on two sources: the electronic file of ICD-10 volume 1 and the data from the Thesaurus of the International Classification of Primary Care (ICPC-2). Then, a death certificate sample from the Program of Improvement of Mortality Information in São Paulo (PRO-AIM) was coded manually and by Iris version V4.0.34, and the causes of death were compared. Whenever Iris was not able to code the causes of death, adjustments were made to the dictionary. Iris was able to code all causes of death in 94.4% of the death certificates, but only 50.6% were coded directly, without adjustments. Among death certificates that the software was unable to fully code, 89.2% had a diagnosis of external causes (chapter XX of ICD-10). This group of causes of death showed less agreement when the coding by Iris was compared to the manual one. The software performed well, but it needs adjustments and improvements to its dictionary. In upcoming versions of the software, its developers are trying to solve the problem with external causes of death.

  11. Adaption, validation and application of advanced codes with 3-dimensional neutron kinetics for accident analysis calculations - STC with Bulgaria

    International Nuclear Information System (INIS)

    Grundmann, U.; Kliem, S.; Mittag, S.; Rohde, U.; Seidel, A.; Panayotov, D.; Ilieva, B.

    2001-08-01

    In the frame of a project on scientific-technical co-operation funded by BMBF/BMWi, the program code DYN3D and the coupled code ATHLET-DYN3D have been transferred to the Institute for Nuclear Research and Nuclear Energy (INRNE) Sofia. The coupled code represents an implementation of the 3D core model DYN3D, developed by FZR, into the GRS thermal-hydraulics code system ATHLET. For the purpose of validating these codes, a measurement data base for a start-up experiment at unit 6 of Kozloduy NPP (VVER-1000/V-320) has been generated. The results of the validation calculations were compared with measurement values from this data base. A simplified model for estimating cross flow mixing between fuel assemblies has been implemented into the DYN3D code by Bulgarian experts. Using this cross flow model, transient processes with asymmetrical boundary conditions can be analysed more realistically. The validation of the implemented model was performed by means of comparison calculations between the modified DYN3D code and the thermal-hydraulics code COBRA-4I, and also on the basis of the measurement data collected from Kozloduy NPP. (orig.) [de

  12. Development of a computer code for low-and intermediate-level radioactive waste disposal safety assessment

    International Nuclear Information System (INIS)

    Park, J. W.; Kim, C. L.; Lee, E. Y.; Lee, Y. M.; Kang, C. H.; Zhou, W.; Kozak, M. W.

    2002-01-01

    A safety assessment code, called SAGE (Safety Assessment Groundwater Evaluation), has been developed to describe post-closure radionuclide releases and potential radiological doses for low- and intermediate-level radioactive waste (LILW) disposal in an engineered vault facility in Korea. The conceptual model implemented in the code is focused on the release of radionuclide from a gradually degrading engineered barrier system to an underlying unsaturated zone, thence to a saturated groundwater zone. The radionuclide transport equations are solved by spatially discretizing the disposal system into a series of compartments. Mass transfer between compartments is by diffusion/dispersion and advection. In all compartments, radionuclides are decayed either as a single-member chain or as multi-member chains. The biosphere is represented as a set of steady-state, radionuclide-specific pathway dose conversion factors that are multiplied by the appropriate release rate from the far field for each pathway. The code has the capability to treat input parameters either deterministically or probabilistically. Parameter input is achieved through a user-friendly Graphical User Interface. An application is presented, which is compared against safety assessment results from the other computer codes, to benchmark the reliability of system-level conceptual modeling of the code

  13. Adaptive nonlocal means filtering based on local noise level for CT denoising

    International Nuclear Information System (INIS)

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-01

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to the local noise level of CT images, and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). A graphics processing unit (GPU) implementation of the noise map calculation and the adaptive NLM filtering was developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the...
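
    Not the authors' GPU implementation; the sketch below conveys the same idea on the CPU, assuming scikit-image is available: estimate the noise per tile and scale the NLM filtering strength h with the local noise estimate, as a crude stand-in for a per-pixel noise map.

        import numpy as np
        from skimage.restoration import denoise_nl_means, estimate_sigma

        def adaptive_nlm(image, tile=64, k=0.8):
            # Denoise each tile with NLM strength proportional to its own noise estimate.
            out = np.zeros_like(image, dtype=float)
            for i in range(0, image.shape[0], tile):
                for j in range(0, image.shape[1], tile):
                    patch = image[i:i + tile, j:j + tile].astype(float)
                    sigma = estimate_sigma(patch)
                    out[i:i + tile, j:j + tile] = denoise_nl_means(
                        patch, patch_size=5, patch_distance=6, h=k * sigma)
            return out

        rng = np.random.default_rng(0)
        noisy = 0.5 * np.ones((256, 256)) + 0.05 * rng.standard_normal((256, 256))
        denoised = adaptive_nlm(noisy)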

  14. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y.V.; Zaitsev, S.I.; Tarankov, G.A. [OKB Gidropress (Russian Federation)

    1995-12-31

    Comparison of the results of a calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions with cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  15. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y V; Zaitsev, S I; Tarankov, G A [OKB Gidropress (Russian Federation)

    1996-12-31

    Comparison of the results of a calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions with cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  16. Data-adaptive harmonic analysis and prediction of sea level change in North Atlantic region

    Science.gov (United States)

    Kondrashov, D. A.; Chekroun, M.

    2017-12-01

    This study aims to characterize North Atlantic sea level variability across temporal and spatial scales. We apply the recently developed data-adaptive Harmonic Decomposition (DAH) and Multilayer Stuart-Landau Models (MSLM) stochastic modeling techniques [Chekroun and Kondrashov, 2017] to the monthly 1993-2017 dataset of combined TOPEX/Poseidon, Jason-1 and Jason-2/OSTM altimetry fields over the North Atlantic region. The key numerical feature of the DAH is the eigendecomposition of a matrix constructed from time-lagged spatial cross-correlations. In particular, the eigenmodes form an orthogonal set of oscillating data-adaptive harmonic modes (DAHMs) that come in pairs and in exact phase quadrature for a given temporal frequency. Furthermore, the pairs of data-adaptive harmonic coefficients (DAHCs), obtained by projecting the dataset onto the associated DAHMs, can be modeled very efficiently by a universal parametric family of simple nonlinear stochastic models - coupled Stuart-Landau oscillators stacked per frequency and synchronized across different frequencies by the stochastic forcing. Despite the short altimetry record, the developed DAH-MSLM model provides skillful prediction of key dynamical and statistical features of sea level variability. References: M. D. Chekroun and D. Kondrashov, Data-adaptive harmonic spectra and multilayer Stuart-Landau models. HAL preprint, 2017, https://hal.archives-ouvertes.fr/hal-01537797
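
    A bare-bones numerical sketch (not the authors' DAH code) of the step highlighted above: assemble a block matrix of time-lagged cross-correlations between spatial channels and take its eigendecomposition. The lag range, block arrangement and toy data are assumptions.

        import numpy as np

        def lagged_correlation_eig(X, max_lag):
            # X: (time, channels). Build d x d cross-correlation blocks C(lag) for
            # lag = 0..max_lag, arrange them block-Toeplitz, symmetrise, and decompose.
            T, d = X.shape
            X = X - X.mean(axis=0)
            blocks = [X[:T - lag].T @ X[lag:] / (T - lag) for lag in range(max_lag + 1)]
            M = np.block([[blocks[abs(i - j)] for j in range(max_lag + 1)]
                          for i in range(max_lag + 1)])
            M = 0.5 * (M + M.T)
            return np.linalg.eigh(M)

        rng = np.random.default_rng(0)
        eigvals, eigvecs = lagged_correlation_eig(rng.standard_normal((240, 5)), max_lag=12)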

  17. Adapting high-level language programs for parallel processing using data flow

    Science.gov (United States)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.
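
    A compact sketch (not EASY-FLOW syntax) of the large-grained idea: treat subprogram calls as nodes of a data-flow graph and run each node as soon as all of its inputs are available, here with a thread pool; the example graph and functions are invented.

        from concurrent.futures import ThreadPoolExecutor

        def run_dataflow(nodes, deps):
            # nodes: name -> callable taking the results of its dependencies (in order).
            # deps:  name -> list of dependency names. A node runs once its inputs exist.
            results, pending = {}, dict(deps)
            with ThreadPoolExecutor() as pool:
                while pending:
                    ready = [n for n, d in pending.items() if all(x in results for x in d)]
                    futures = {n: pool.submit(nodes[n], *[results[x] for x in pending[n]])
                               for n in ready}
                    for n, f in futures.items():
                        results[n] = f.result()
                        del pending[n]
            return results

        graph = {"load": lambda: list(range(10)),
                 "square": lambda xs: [x * x for x in xs],
                 "double": lambda xs: [2 * x for x in xs],
                 "merge": lambda a, b: a + b}
        print(run_dataflow(graph, {"load": [], "square": ["load"],
                                   "double": ["load"], "merge": ["square", "double"]})["merge"])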

  18. Possible impacts of sea level rise on disease transmission and potential adaptation strategies, a review.

    Science.gov (United States)

    Dvorak, Ana C; Solo-Gabriele, Helena M; Galletti, Andrea; Benzecry, Bernardo; Malone, Hannah; Boguszewski, Vicki; Bird, Jason

    2018-04-18

    Sea levels are projected to rise in response to climate change, causing the intrusion of sea water into land. In flat coastal regions, this would generate an increase in shallow, water-covered areas with limited circulation. This scenario raises a concern about the consequences it could have on human health, specifically the possible impacts on disease transmission. In this review paper we identified three categories of diseases which are associated with water and whose transmission can be affected by sea level rise. These categories include: mosquito-borne diseases, naturalized organisms (Vibrio spp. and toxic algae), and fecal-oral diseases. For each disease category, we propose comprehensive adaptation strategies that would help minimize possible health risks. Finally, the City of Key West, Florida, is analyzed as a case study, due to its inherent vulnerability to sea level rise. Current and projected adaptation techniques are discussed as well as the integration of additional recommendations, focused on disease transmission control. Given that sea level rise will likely continue into the future, the promotion and implementation of positive adaptation strategies is necessary to ensure community resilience. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Memory for pictures and words as a function of level of processing: Depth or dual coding?

    Science.gov (United States)

    D'Agostino, P R; O'Neill, B J; Paivio, A

    1977-03-01

    The experiment was designed to test differential predictions derived from dual-coding and depth-of-processing hypotheses. Subjects under incidental memory instructions free recalled a list of 36 test events, each presented twice. Within the list, an equal number of events were assigned to structural, phonemic, and semantic processing conditions. Separate groups of subjects were tested with a list of pictures, concrete words, or abstract words. Results indicated that retention of concrete words increased as a direct function of the processing-task variable (structural memory performance. These data provided strong support for the dual-coding model.

  20. Adaptive Backstepping Controller Design for Leveling Control of an Underwater Platform Based on Joint Space

    Directory of Open Access Journals (Sweden)

    Zhi-Lin Zeng

    2014-01-01

    Full Text Available This paper focuses on high precision leveling control of an underwater heavy load platform, which is viewed as an underwater parallel robot on the basis of its work pattern. The kinematics of the platform with deformation are analyzed and the joint-space dynamics model is established. An adaptive backstepping controller based on a Lyapunov function is proposed for leveling control of the platform in joint space. Furthermore, the “lowest point fixed angle error” leveling scheme called “chase” is chosen for leveling control of the platform. Digital simulation and a practical experiment on a single joint-space actuator are carried out, and the results show high precision servo control of the joint space. On this basis, the platform leveling control simulation relies on a hardware-in-the-loop system. The results indicate that the proposed controller can effectively restrain the influence of system parameter uncertainties and external disturbances and realize high precision leveling control of the underwater platform.
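
    Not the paper's controller; as a minimal backstepping example, the snippet below levels a single double-integrator actuator (a stand-in for one leg of the platform) toward a constant reference, with a Lyapunov-motivated two-step design. The model, gains and reference are assumptions.

        def backstepping_control(x1, x2, x_ref, k1=2.0, k2=2.0):
            # Step 1: virtual control alpha = -k1*z1 stabilises the position error z1.
            # Step 2: u stabilises z2 = x2 - alpha (constant reference, so alpha_dot = -k1*x2).
            z1 = x1 - x_ref
            z2 = x2 + k1 * z1
            alpha_dot = -k1 * x2
            return -k2 * z2 - z1 + alpha_dot

        # Simulate x1' = x2, x2' = u with explicit Euler integration.
        x1, x2, dt = 0.5, 0.0, 0.001
        for _ in range(10000):
            u = backstepping_control(x1, x2, x_ref=0.0)
            x1, x2 = x1 + dt * x2, x2 + dt * u

        print(round(x1, 4), round(x2, 4))   # both approach 0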

  1. Histogram-based adaptive gray level scaling for texture feature classification of colorectal polyps

    Science.gov (United States)

    Pomeroy, Marc; Lu, Hongbing; Pickhardt, Perry J.; Liang, Zhengrong

    2018-02-01

    Texture features have played an ever-increasing role in computer aided detection (CADe) and diagnosis (CADx) methods since their inception. Texture features are often used as a method of false positive reduction for CADe packages, especially for detecting colorectal polyps and distinguishing them from falsely tagged residual stool and healthy colon wall folds. While texture features have shown great success there, their performance for CADx has lagged behind, primarily because different polyp types have more similar features. In this paper, we present an adaptive gray level scaling and compare it to the conventional equal spacing of gray level bins. We use a dataset taken from computed tomography colonography patients, with 392 polyp regions of interest (ROIs) identified and with diagnoses confirmed through pathology. Using the histogram information from the entire ROI dataset, we generate the gray level bins such that each bin contains roughly the same number of voxels. Each image ROI is then scaled down to two different numbers of gray levels, using both an equal spacing of Hounsfield units for each bin and our adaptive method. We compute a set of texture features from the scaled images, including 30 gray level co-occurrence matrix (GLCM) features and 11 gray level run length matrix (GLRLM) features. Using a random forest classifier to distinguish between hyperplastic polyps and all others (adenomas and adenocarcinomas), we find that the adaptive gray level scaling can improve performance, based on the area under the receiver operating characteristic curve, by up to 4.6%.
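
    A minimal sketch of the adaptive binning idea, assuming NumPy arrays of Hounsfield-unit voxel values pooled from all ROIs: bin edges are taken from quantiles so each bin holds roughly the same number of voxels, compared with conventional equal-width bins. Function and variable names are illustrative, not from the paper.

        import numpy as np

        def adaptive_gray_levels(all_voxels, roi, n_levels=64):
            """Rescale an ROI to n_levels gray levels using histogram-equalized
            (quantile) bin edges derived from the pooled voxel values."""
            # Quantile edges: each bin holds roughly the same number of voxels.
            edges = np.quantile(all_voxels, np.linspace(0.0, 1.0, n_levels + 1))
            # np.digitize maps each voxel to its bin index; clip to the valid range.
            return np.clip(np.digitize(roi, edges[1:-1]), 0, n_levels - 1)

        def equal_width_gray_levels(roi, n_levels=64, lo=-1000.0, hi=1000.0):
            """Conventional equal spacing of Hounsfield units, for comparison."""
            scaled = (np.clip(roi, lo, hi) - lo) / (hi - lo) * (n_levels - 1)
            return scaled.astype(int)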

  2. Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods

    OpenAIRE

    Laadhari , Aymen; Saramito , Pierre; Misbah , Chaouqi

    2014-01-01

    The numerical simulation of the deformation of vesicle membranes under simple shear external fluid flow is considered in this paper. A new saddle-point approach is proposed for the imposition of the fluid incompressibility and the membrane inextensibility constraints, through Lagrange multipliers defined in the fluid and on the membrane respectively. Using a level set formulation, the problem is approximated by mixed finite elements combined with an automatic adaptive ...

  3. BLT [Breach, Leach, and Transport]: A source term computer code for low-level waste shallow land burial

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1990-01-01

    This paper discusses the development of a source term model for low-level waste shallow land burial facilities and separates the problem into four individual compartments. These are water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and the last compartments, we adopted the existing codes, FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines -- BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resultant code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab

  4. A modified carrier-to-code leveling method for retrieving ionospheric observables and detecting short-term temporal variability of receiver differential code biases

    Science.gov (United States)

    Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Xiao; Li, Min

    2018-03-01

    Sensing the ionosphere with the global positioning system involves two sequential tasks, namely the ionospheric observable retrieval and the ionospheric parameter estimation. A prominent source of error has long been identified as short-term variability in the receiver differential code bias (rDCB). We modify the carrier-to-code leveling (CCL), a method commonly used to accomplish the first task, by assuming the rDCB to be unlinked in time. Aside from the ionospheric observables, which are affected by, among other things, the rDCB at one reference epoch, the Modified CCL (MCCL) can also provide the rDCB offsets with respect to the reference epoch as by-products. Two consequences arise. First, MCCL is capable of excluding the effects of time-varying rDCB from the ionospheric observables, which, in turn, improves the quality of the ionospheric parameters of interest. Second, MCCL has significant potential as a means to detect between-epoch fluctuations experienced by the rDCB of a single receiver.
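
    A minimal sketch of the standard (unmodified) CCL step for one continuous arc, assuming NumPy arrays of code-derived and phase-derived slant ionospheric delays in consistent units; the leveling constant is the arc average of their difference, so the precise but ambiguous phase series is aligned to the absolute code level. Names are illustrative, and the paper's MCCL extension (epoch-wise rDCB offsets) is not shown.

        import numpy as np

        def carrier_to_code_leveling(iono_code, iono_phase):
            """Level a phase-derived ionospheric series to the code-derived one
            over a single cycle-slip-free arc.

            iono_code  : code-derived slant delay (noisy but unambiguous)
            iono_phase : phase-derived slant delay (precise but ambiguous)
            """
            # The arc-averaged offset absorbs the phase ambiguity (and, in the
            # classic CCL, a bias assumed constant over the arc).
            offset = np.mean(iono_code - iono_phase)
            return iono_phase + offset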

  5. Manchester Coding Option for SpaceWire: Providing Choices for System Level Design

    Science.gov (United States)

    Rakow, Glenn; Kisin, Alex

    2014-01-01

    This paper proposes an optional coding scheme for SpaceWire in lieu of the current Data Strobe scheme, for three reasons. The first is to provide a straightforward method for electrical isolation of the interface; the second is to reduce the mass and bend radius of the SpaceWire cable; and the third is to provide a common physical layer over which multiple spacecraft onboard data link protocols could operate for a wide range of data rates. The intent is to accomplish these goals without significant change to existing SpaceWire design investments. Optionally using Manchester coding in place of the current Data Strobe coding makes the signal transitions DC balanced, unlike the SpaceWire Data Strobe coding, and therefore allows the electrical interface to be isolated without concern. Additionally, because the Manchester code carries the clock and data on the same signal, the number of wires in the existing SpaceWire cable could optionally be reduced by 50%. This reduction could be an important consideration for many users of SpaceWire, as indicated by the effort already underway in the SpaceWire working group to reduce the cable mass and bend radius by eliminating shields; reducing the signal count by half would provide even greater gains. It is proposed to restrict the optional Manchester coding to a fixed data rate of 10 Megabits per second (Mbps) in order to make the necessary changes simple and still able to run in current radiation-tolerant Field Programmable Gate Arrays (FPGAs). Even with this constraint, 10 Mbps will meet many applications where SpaceWire is used, including command and control applications and many instrument applications with moderate data rates. For most NASA flight implementations, SpaceWire designs are in rad-tolerant FPGAs, and the desire to preserve the heritage design investment is important for cost and risk considerations.
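
    A small illustrative sketch of Manchester coding (IEEE 802.3 convention), in which every bit is sent as a mid-bit transition so the line is DC balanced and the clock can be recovered from the data signal itself; this is a generic illustration, not the SpaceWire proposal's actual implementation.

        def manchester_encode(bits):
            """Encode bits as half-bit symbol pairs (IEEE 802.3 convention:
            a 0 is sent as high-then-low, a 1 as low-then-high), giving one
            transition per bit and a DC-balanced signal."""
            return [half for b in bits for half in ((0, 1) if b else (1, 0))]

        def manchester_decode(symbols):
            """Recover bits from half-bit symbol pairs; rejects invalid pairs."""
            bits = []
            for first, second in zip(symbols[0::2], symbols[1::2]):
                if (first, second) == (0, 1):
                    bits.append(1)
                elif (first, second) == (1, 0):
                    bits.append(0)
                else:
                    raise ValueError("invalid Manchester symbol pair")
            return bits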

  6. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, the Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus preserving the conservative level set properties, while away from the interfaces the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.
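
    For context, the widely cited conservative level set reinitialization with directional diffusion can be written as below (in LaTeX notation), where phi is the smeared indicator function, n-hat the interface normal, and epsilon the interface thickness parameter; this is the standard textbook form, not necessarily the paper's exact SCLS formulation with its modified renormalization vector.

        \frac{\partial \phi}{\partial \tau}
          + \nabla \cdot \big( \phi (1 - \phi)\, \hat{n} \big)
          = \varepsilon\, \nabla \cdot \big( (\nabla \phi \cdot \hat{n})\, \hat{n} \big)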

  7. Codon usage and expression level of human mitochondrial 13 protein coding genes across six continents.

    Science.gov (United States)

    Chakraborty, Supriyo; Uddin, Arif; Mazumder, Tarikul Huda; Choudhury, Monisha Nath; Malakar, Arup Kumar; Paul, Prosenjit; Halder, Binata; Deka, Himangshu; Mazumder, Gulshana Akthar; Barbhuiya, Riazul Ahmed; Barbhuiya, Masuk Ahmed; Devi, Warepam Jesmi

    2017-12-02

    The study of codon usage coupled with phylogenetic analysis is an important tool for understanding the genetic and evolutionary relationships of a gene. The 13 protein coding genes of human mitochondria are involved in the electron transport chain for the generation of the energy currency (ATP). However, no work has yet been reported on the codon usage of the mitochondrial protein coding genes across six continents. To understand the patterns of codon usage in mitochondrial genes across six different continents, we used bioinformatic analyses to analyze the protein coding genes. The codon usage bias was low, as revealed by the high ENC values. Correlation between codon usage and GC3 suggested that all codons ending with G/C were positively correlated with GC3, and vice versa for A/T-ending codons, with the exception of the ND4L and ND5 genes. A neutrality plot revealed that for the genes ATP6, COI, COIII, CYB, ND4 and ND4L, natural selection might have played a major role, while mutation pressure might have played a dominant role in the codon usage bias of the ATP8, COII, ND1, ND2, ND3, ND5 and ND6 genes. Phylogenetic analysis indicated that the evolutionary relationships in each of the 13 protein coding genes of human mitochondria were different across the six continents and further suggested that geographical distance was an important factor in the origin and evolution of the 13 protein coding genes of human mitochondria. Copyright © 2017 Elsevier B.V. and Mitochondria Research Society. All rights reserved.
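
    A small illustrative helper for one of the quantities discussed, the GC content at third codon positions (GC3), assuming an in-frame coding sequence string; the name is illustrative and the paper's full analysis (ENC, neutrality plots, phylogenetics) is not reproduced.

        def gc3(cds):
            """GC content at third codon positions of an in-frame coding sequence."""
            third = cds[2::3].upper()
            return sum(base in "GC" for base in third) / len(third) if third else 0.0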

  8. Limits on the adaptability of coastal marshes to rising sea level

    Science.gov (United States)

    Kirwan, Matthew L.; Guntenspergen, Glenn R.; D'Alpaos, Andrea; Morris, James T.; Mudd, Simon M.; Temmerman, Stijn

    2010-01-01

    Assumptions of a static landscape inspire predictions that about half of the world's coastal wetlands will submerge during this century in response to sea-level acceleration. In contrast, we use simulations from five numerical models to quantify the conditions under which ecogeomorphic feedbacks allow coastal wetlands to adapt to projected changes in sea level. In contrast to previous sea-level assessments, we find that non-linear feedbacks among inundation, plant growth, organic matter accretion, and sediment deposition allow marshes to survive conservative projections of sea-level rise where suspended sediment concentrations are greater than ~20 mg/L. Under scenarios of more rapid sea-level rise (e.g., those that include ice sheet melting), marshes will likely submerge near the end of the 21st century. Our results emphasize that in areas of rapid geomorphic change, predicting the response of ecosystems to climate change requires consideration of the ability of biological processes to modify their physical environment.

  9. High-Level Synthesis of DSP Applications Using Adaptive Negative Cycle Detection

    Directory of Open Access Journals (Sweden)

    Nitin Chandrachoodan

    2002-09-01

    Full Text Available The problem of detecting negative weight cycles in a graph is examined in the context of the dynamic graph structures that arise in the process of high level synthesis (HLS). The concept of adaptive negative cycle detection is introduced, in which a graph changes over time and negative cycle detection needs to be done periodically, but not necessarily after every individual change. We present an algorithm for this problem, based on a novel extension of the well-known Bellman-Ford algorithm that allows us to adapt existing cycle information to the modified graph, and show by experiments that our algorithm significantly outperforms previous incremental approaches for dynamic graphs. In terms of applications, the adaptive technique leads to a very fast implementation of Lawler's algorithm for the computation of the maximum cycle mean (MCM) of a graph, especially for a certain form of sparse graph. Such sparseness often occurs in practical circuits and systems, as demonstrated, for example, by the ISCAS 89/93 benchmarks. The application of the adaptive technique to design-space exploration (synthesis) is also demonstrated by developing automated search techniques for scheduling iterative data-flow graphs.
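
    For reference, a minimal (non-adaptive) Bellman-Ford negative cycle check of the kind the paper extends; the adaptive algorithm in the article reuses prior distance information across graph edits, which this from-scratch sketch does not do.

        def has_negative_cycle(num_nodes, edges):
            """Detect a negative-weight cycle with plain Bellman-Ford.
            edges is a list of (u, v, w) tuples; nodes are 0..num_nodes-1."""
            # Starting all distances at 0 is equivalent to adding a virtual source
            # connected to every node, so cycles anywhere in the graph are found.
            dist = [0.0] * num_nodes
            for _ in range(num_nodes - 1):
                updated = False
                for u, v, w in edges:
                    if dist[u] + w < dist[v]:
                        dist[v] = dist[u] + w
                        updated = True
                if not updated:
                    break
            # One more pass: any further relaxation implies a negative cycle.
            return any(dist[u] + w < dist[v] for u, v, w in edges)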

  10. Plasma β-endorphin and stress hormone levels during adaptation and stress

    Energy Technology Data Exchange (ETDEWEB)

    Lishmanov, Yu.B.; Trifonova, Zh.V.; Tsibin, A.N.; Maslova, L.V.; Dement'eva, L.A.

    1987-09-01

    This paper describes a comparative study of β-endorphin and stress hormone levels in the blood plasma of rats during stress and adaptation. Immunoreactive β-endorphin in the blood plasma was assayed by means of a kit after preliminary isolation of the β-endorphin fraction by affinity chromatography on sepharose; ACTH was assayed with a kit and cortisol, insulin, thyroxine and tri-iodothyronine by means of kits from Izotop. Determination of plasma levels of β-endorphin and other opioids could evidently be an important method of assessing the state of resistance of the organism to stress.

  11. Demonstration of a Concurrently Programmed Tactical Level Control Software for Autonomous Vehicles and the Interface to the Execution Level Code

    National Research Council Canada - National Science Library

    Carroll, William

    2000-01-01

    .... One of the greatest challenges to the successful development of truly autonomous vehicles is the ability to link logically based high-level mission planning with low-level vehicle control software...

  12. Sensitivity analysis of a low-level waste environmental transport code

    International Nuclear Information System (INIS)

    Hiromoto, G.

    1989-01-01

    Results are presented from a sensitivity analysis of a computer code designed to simulate the environmental transport of radionuclides buried at shallow land waste repositories. A sensitivity analysis methodology, based on response surface replacement and statistical sensitivity estimators, was developed to address the relative importance of the input parameters to the model output. A response surface replacement for the model was constructed by stepwise regression, after sampling input vectors from the ranges and distributions of the input variables and running the code to generate the associated output data. Sensitivity estimators were computed using the partial rank correlation coefficients and the standardized rank regression coefficients. The results showed that the techniques employed in this work provide a feasible means to perform a sensitivity analysis of general non-linear environmental radionuclide transport models. (author) [pt
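
    A minimal sketch of one of the sensitivity estimators mentioned, standardized rank regression coefficients, assuming a NumPy matrix of sampled input vectors and the corresponding model outputs; function and variable names are illustrative, not from the report.

        import numpy as np
        from scipy.stats import rankdata

        def standardized_rank_regression_coefficients(X, y):
            """Rank-transform inputs and output, standardize, and regress:
            the coefficients rank the inputs by their influence on the output."""
            Xr = np.apply_along_axis(rankdata, 0, X)   # rank each input column
            yr = rankdata(y)
            Xs = (Xr - Xr.mean(axis=0)) / Xr.std(axis=0)
            ys = (yr - yr.mean()) / yr.std()
            coeffs, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
            return coeffs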

  13. Changes in BOLD and ADC weighted imaging in acute hypoxia during sea-level and altitude adapted states

    DEFF Research Database (Denmark)

    Rostrup, Egill; Larsson, Henrik B.W.; Born, Alfred P.

    2005-01-01

    possible structural changes as measured by diffusion weighted imaging. Eleven healthy sea-level residents were studied after 5 weeks of adaptation to high altitude conditions at Chacaltaya, Bolivia (5260 m). The subjects were studied immediately after return to sea-level in hypoxic and normoxic conditions...... was slightly elevated in high altitude as compared to sea-level adaptation. It is concluded that hypoxia significantly diminishes the BOLD response, and the mechanisms underlying this finding are discussed. Furthermore, altitude adaptation may influence both the magnitude of the activation-related response......, and the examinations repeated 6 months later after re-adaptation to sea-level conditions. The BOLD response, measured at 1.5 T, was severely reduced during acute hypoxia both in the altitude and sea-level adapted states (50% reduction during an average S(a)O(2) of 75%). On average, the BOLD response magnitude was 23...

  14. Explicit control of adaptive automation under different levels of environmental stress.

    Science.gov (United States)

    Sauer, Jürgen; Kao, Chung-Shan; Wastell, David; Nickel, Peter

    2011-08-01

    This article examines the effectiveness of three different forms of explicit control of adaptive automation under low- and high-stress conditions, operationalised by different levels of noise. In total, 60 participants were assigned to one of three types of automation design (free, prompted and forced choice). They were trained for 4 h on a highly automated simulation of a process control environment, called AutoCAMS. This was followed by a 4-h testing session under noise exposure and quiet conditions. Measures of performance, psychophysiology and subjective reactions were taken. The results showed that all three modes of explicit control of adaptive automation were able to attenuate the negative effects of noise. This was partly due to the fact that operators opted for higher levels of automation under noise. It also emerged that forced choice showed marginal advantages over the two other automation modes. Statement of Relevance: This work is relevant to the design of adaptive automation since it emphasises the need to consider the impact of work-related stressors during task completion. During the presence of stressors, different forms of operator support through automation may be required than under more favourable working conditions.

  15. Adaptation to Sea Level Rise: A Multidisciplinary Analysis for Ho Chi Minh City, Vietnam

    Science.gov (United States)

    Scussolini, Paolo; Tran, Thi Van Thu; Koks, Elco; Diaz-Loaiza, Andres; Ho, Phi Long; Lasage, Ralph

    2017-12-01

    One of the most critical impacts of sea level rise is that flooding suffered by ever larger settlements in tropical deltas will increase. Here we look at Ho Chi Minh City, Vietnam, and quantify the threats that coastal floods pose to safety and to the economy. For this, we produce flood maps through hydrodynamic modeling and, by combining these with data sets of exposure and vulnerability, we estimate two indicators of risk: the damage to assets and the number of potential casualties. We simulate current and future (2050 and 2100) flood risk using IPCC scenarios of sea level rise and socioeconomic change. We find that annual damage may grow by more than 1 order of magnitude, and potential casualties may grow 5-20-fold until the end of the century, in the absence of adaptation. Impacts depend strongly on the climate and socioeconomic scenarios considered. Next, we simulate the implementation of adaptation measures and calculate their effectiveness in reducing impacts. We find that a ring dike would protect the inner city but increase risk in more rural districts, whereas elevating areas at risk and dryproofing buildings will reduce impacts to the city as a whole. Most measures perform well from an economic standpoint. Combinations of measures seem to be the optimal solution and may address potential equity conflicts. Based on our results, we design possible adaptation pathways for Ho Chi Minh City for the coming decades; these can inform policy-making and strategic thinking.

  16. Adaptive Capacity Mapping of Semarang Offshore Territory by the Increasing of Water Level and Climate Change

    Directory of Open Access Journals (Sweden)

    Ifan Ridlo Suhelm

    2013-07-01

    Full Text Available Tidal inundation, flooding and land subsidence are the problems faced by the city of Semarang in relation to climate change. The Intergovernmental Panel on Climate Change (IPCC) predicted a sea level rise of 18-59 cm during 1990-2100, with a temperature increase of 0.6°C to 4°C over the same period. The coastal city of Semarang is highly vulnerable to sea level rise, and this vulnerability is increased by two factors: topography and land subsidence. The purpose of this study was to map the adaptive capacity of coastal areas in the face of the threat of disasters caused by climate change. The parameters used are network number, employment by educational background, main sources of livelihood, health facilities, and road infrastructure. The adaptive capacity of regions is classified into three classes, namely low, medium and high. The results of the study showed that most of the coastal area of Semarang has adaptive capacity ranging from low to moderate, with the low-capacity class totaling 58 villages (58.62% of the coastal villages in the city of Semarang).

  17. Present limits to heat-adaptability in corals and population-level responses to climate extremes.

    Directory of Open Access Journals (Sweden)

    Bernhard M Riegl

    Full Text Available Climate change scenarios suggest an increase in tropical ocean temperature by 1-3°C by 2099, potentially killing many coral reefs. But Arabian/Persian Gulf corals already exist in this future thermal environment predicted for most tropical reefs and survived severe bleaching in 2010, one of the hottest years on record. Exposure to 33-35°C was on average twice as long as in non-bleaching years. Gulf corals bleached after exposure to temperatures above 34°C for a total of 8 weeks, of which 3 weeks were above 35°C. This is more heat than any other corals can survive, providing an insight into the present limits of holobiont adaptation. We show that average temperatures as well as heat-waves in the Gulf have been increasing, that coral population levels will fluctuate strongly, and that reef-building capability will be compromised. This, in combination with ocean acidification and significant local threats posed by rampant coastal development, puts even these most heat-adapted corals at risk. WWF considers the Gulf ecoregion as "critically endangered". We argue here that Gulf corals should be considered for assisted migration to the tropical Indo-Pacific. This would have the double benefit of avoiding local extinction of the world's most heat-adapted holobionts while at the same time introducing their genetic information to populations naïve to such extremes, potentially assisting their survival. Thus, the heat-adaptation acquired by Gulf corals over 6 k could benefit tropical Indo-Pacific corals, which have <100 y until they experience a similarly harsh climate. Population models suggest that the heat-adapted corals could become dominant on tropical reefs within ∼20 years.

  18. A probabilistic assessment code system for derivation of clearance levels of radioactive materials. PASCLR user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Tomoyuki [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.]; Takeda, Seiji; Kimura, Hideo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]

    2001-01-01

    Some types of radioactive material generated in the development and utilization of nuclear energy do not need to be subject to regulatory control because they can give rise only to trivial radiation hazards. The process of removing such materials from regulatory control is called 'clearance'. The corresponding radionuclide concentration levels are called 'clearance levels'. In the Nuclear Safety Commission's discussion, a deterministic approach was applied to derive the clearance levels, which are the concentrations of radionuclides in a cleared material equivalent to an individual dose criterion. Basically, realistic parameter values were selected for this purpose. If realistic values could not be defined, reasonably conservative values were selected. Additionally, stochastic approaches were performed to validate the results obtained by the deterministic calculations. We have developed a computer code system, PASCLR (Probabilistic Assessment code System for derivation of Clearance Levels of Radioactive materials), which uses the Monte Carlo technique for carrying out the stochastic calculations. This report describes the structure of the PASCLR code and gives user information for its execution. (author)
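
    A schematic illustration of the stochastic (Monte Carlo) step described above, assuming a hypothetical dose-per-unit-concentration model and parameter distributions purely for illustration; PASCLR's actual exposure pathways, parameters and distributions are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000

        # Hypothetical uncertain parameters of a simple dose model (illustrative).
        dilution_factor = rng.lognormal(mean=-2.0, sigma=0.5, size=n)
        annual_intake = rng.uniform(100.0, 500.0, size=n)
        dose_coefficient = rng.lognormal(mean=-20.0, sigma=0.3, size=n)

        # Dose per unit activity concentration for each sampled parameter set.
        dose_per_conc = dilution_factor * annual_intake * dose_coefficient

        # Read a clearance level off the dose distribution: the concentration at
        # which a chosen percentile of dose meets the individual dose criterion.
        dose_criterion = 1e-5  # Sv/y (10 microsieverts, illustrative)
        clearance_level = dose_criterion / np.percentile(dose_per_conc, 95)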

  19. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to introducing the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover we conjecture that the canonical partition satisfies such a hypothesis. Finally we consider also some relationships between coding partitions and varieties of codes.
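
    As background, unique decipherability itself is decidable for a finite code with the classical Sardinas-Patterson test; a compact sketch is given below (it checks UD only, not the coding-partition construction of the paper).

        def is_uniquely_decipherable(code):
            """Sardinas-Patterson test for a finite set of nonempty codewords."""
            code = set(code)

            def dangling(a_set, b_set):
                # Suffixes b[len(a):] where a word of a_set is a proper prefix
                # of a word of b_set.
                return {b[len(a):] for a in a_set for b in b_set
                        if b.startswith(a) and len(b) > len(a)}

            current = dangling(code, code)              # S_1
            seen = set()
            while current:
                if any(s in code for s in current):     # a codeword reappears: not UD
                    return False
                frozen = frozenset(current)
                if frozen in seen:                      # cycle without conflict: UD
                    return True
                seen.add(frozen)
                current = dangling(current, code) | dangling(code, current)
            return True                                 # no dangling suffixes left: UD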

  20. Modification of the SAS4A Safety Analysis Code for Integration with the ADAPT Discrete Dynamic Event Tree Framework.

    Energy Technology Data Exchange (ETDEWEB)

    Jankovsky, Zachary Kyle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Denman, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    It is difficult to assess the consequences of a transient in a sodium-cooled fast reactor (SFR) using traditional probabilistic risk assessment (PRA) methods, as numerous safety-related systems have passive characteristics. Often there is significant dependence on the value of continuous stochastic parameters rather than binary success/failure determinations. One form of dynamic PRA uses a system simulator to represent the progression of a transient, tracking events through time in a discrete dynamic event tree (DDET). In order to function in a DDET environment, a simulator must have characteristics that make it amenable to changing physical parameters midway through the analysis. The SAS4A SFR system analysis code did not have these characteristics as received. This report describes the code modifications made to allow dynamic operation as well as the linking to a Sandia DDET driver code. A test case is briefly described to demonstrate the utility of the changes.
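
    A toy illustration of the DDET idea described above (not the ADAPT driver itself): each branch is advanced by a simulator until a branching condition is hit, then child branches are spawned with their conditional probabilities. The simulator and branching rules are hypothetical callables supplied by the user.

        from collections import deque

        def run_ddet(initial_state, simulate_to_next_event, branch_rules, horizon):
            """Toy discrete dynamic event tree driver.
            simulate_to_next_event(t, state, horizon) -> (t, state, event or None)
            branch_rules[event](state) -> list of (child_state, probability)."""
            queue = deque([(0.0, initial_state, 1.0)])   # (time, state, path prob)
            leaves = []
            while queue:
                t, state, p = queue.popleft()
                t, state, event = simulate_to_next_event(t, state, horizon)
                if event is None or t >= horizon:
                    leaves.append((t, state, p))         # end-state of this branch
                    continue
                for child_state, prob in branch_rules[event](state):
                    queue.append((t, child_state, p * prob))
            return leaves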

  1. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    Science.gov (United States)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs), consuming a significant number of memory accesses. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding by using a program instead of all the VLCTs. The decoded codeword can thus be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance compared with conventional CAVLC decoding methods, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.

  2. Adaptation and implementation of the TRACE code for transient analysis of lead-cooled fast reactor designs

    International Nuclear Information System (INIS)

    Lazaro, A.; Ammirabile, L.; Martorell, S.

    2014-01-01

    The article describes the changes implemented in the TRACE code to include thermodynamic tables for liquid lead drawn from experimental results. It then explains the process of developing a thermal-hydraulic model of the ALFRED prototype and the analysis of a selection of representative transients conducted within the framework of international research projects. The study demonstrates the applicability of the TRACE code to simulating lead-cooled fast reactor designs and shows the high safety margins available in this technology to accommodate the most severe transients identified in its safety study. (Author)

  3. The LEONAR code: a new tool for PSA Level 2 analyses

    International Nuclear Information System (INIS)

    Tourniaire, B; Spindler, B.; Ratel, G.; Seiler, J.M.; Iooss, B.; Marques, M.; Gaudier, F.; Greffier, G.

    2011-01-01

    The LEONAR code, complementary to integral codes such as MAAP or ASTEC, is a new severe accident simulation tool which can easily calculate 1000 late-phase reactor situations within a few hours and provide a statistical evaluation of the situations. LEONAR can be used for the analysis of the impact on failure probabilities of specific Severe Accident Management measures (for instance, water injection) or design modifications (for instance, pressure vessel flooding or dedicated reactor pit flooding), or to focus the research effort on key phenomena. The starting conditions for LEONAR are a set of core melting situations that are separately calculated with a core degradation code (such as MAAP, which is used by EDF). LEONAR describes the core melt evolution after flooding in the core, the corium relocation in the lower head (under dry and wet conditions), the evolution of corium in the lower head including the effect of flooding, the vessel failure, corium relocation in the reactor cavity, interaction between corium and basemat concrete, and possible corium spreading into the neighbouring rooms and on the containment floor. Scenario events as well as specific physical model parameters are characterised by a probability density distribution. The probabilistic evaluation is performed by URANIE, which is coupled to the physical calculations. The calculation results are treated in a statistical way in order to provide easily usable information. This tool can be used to identify the main parameters that influence corium coolability in severe accident late phases. It is intended to efficiently replace PIRT exercises. An important impact of such a tool is that it can be used to demonstrate that the probability of basemat failure can be significantly reduced by coupling a number of separate severe accident management measures or design modifications, even though each separate measure is not by itself sufficient to avoid failure. (authors)

  4. Shielding analysis of high level waste water storage facilities using MCNP code

    Energy Technology Data Exchange (ETDEWEB)

    Yabuta, Naohiro [Mitsubishi Research Inst., Inc., Tokyo (Japan)

    2001-01-01

    A neutron and gamma-ray transport analysis was made for a reprocessing facility, i.e. a facility with large buildings having thick shielding. The radiation shielding analysis consists of a deep transmission calculation through the concrete walls and a skyshine calculation for the space outside the buildings. An efficient analysis with a short running time and high accuracy needs a variance reduction technique suitable for all the calculation regions and structures. In this report, the shielding analysis using MCNP and a discrete ordinates transport code is explained, and the procedure for choosing the variance reduction parameters is described. (J.P.N.)

  5. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  6. Adaptation of penelope Monte Carlo code system to the absorbed dose metrology: characterization of high energy photon beams and calculations of reference dosimeter correction factors

    International Nuclear Information System (INIS)

    Mazurier, J.

    1999-01-01

    This thesis was performed in the framework of setting up the national reference for absorbed dose to water in high energy photon beams provided by the SATURNE-43 medical accelerator of the BNM-LPRI (acronym for National Bureau of Metrology and Primary standard laboratory of ionising radiation). The aim of this work was to develop and validate different user codes, based on the PENELOPE Monte Carlo code system, to determine the photon beam characteristics and to calculate the correction factors of reference dosimeters such as Fricke dosimeters and the graphite calorimeter. In the first step, the developed user codes permitted a study of the influence of the different components constituting the irradiation head. Variance reduction techniques were used to reduce the calculation time. The phase space was calculated for 6, 12 and 25 MV at the output surface of the accelerator head, then used for calculating energy spectra and dose distributions in the reference water phantom. The results obtained were compared with experimental measurements. The second step was devoted to developing a user code for calculating the correction factors associated with both the BNM-LPRI's graphite calorimeter and Fricke dosimeters by means of a correlated sampling method starting from the energy spectra obtained in the first step. The calculated correction factors were then compared with experimental and calculated results obtained with the Monte Carlo EGS4 code system. The good agreement between experimental and calculated results validates the simulations performed with the PENELOPE code system. (author)

  7. Understanding extreme sea levels for broad-scale coastal impact and adaptation analysis

    Science.gov (United States)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A. B. A.

    2017-07-01

    One of the main consequences of mean sea level rise (SLR) on human settlements is an increase in flood risk due to an increase in the intensity and frequency of extreme sea levels (ESL). While substantial research efforts are directed towards quantifying projections and uncertainties of future global and regional SLR, corresponding uncertainties in contemporary ESL have not been assessed and projections are limited. Here we quantify, for the first time at global scale, the uncertainties in present-day ESL estimates, which have by default been ignored in broad-scale sea-level rise impact assessments to date. ESL uncertainties exceed those from global SLR projections and, assuming that we meet the Paris agreement goals, the projected SLR itself by the end of the century in many regions. Both uncertainties in SLR projections and ESL estimates need to be understood and combined to fully assess potential impacts and adaptation needs.

  8. An adaptive neuro-fuzzy controller for mold level control in continuous casting

    International Nuclear Information System (INIS)

    Zolghadri Jahromi, M.; Abolhassan Tash, F.

    2001-01-01

    Mold level variations in continuous casting are believed to be the main cause of surface defects in the final product. Although a PID controller is well capable of controlling the level under normal conditions, it cannot prevent large variations of mold level when a disturbance occurs in the form of nozzle unclogging. In this paper, a dual-controller architecture is presented: a PID controller is used as the main controller of the plant, and an adaptive neuro-fuzzy controller is used as an auxiliary controller to assist the PID during disturbed phases. Control is passed back to the PID controller after the disturbance has been dealt with. Simulation results prove the effectiveness of this control strategy in reducing mold level variations during the unclogging period
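
    A generic discrete PID loop of the kind used as the main mold level controller, given as an illustrative sketch; the gains, sampling time, and the neuro-fuzzy auxiliary controller are not from the paper.

        class PID:
            """Minimal discrete PID controller for level regulation."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measurement):
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                # Control action, e.g. a stopper-rod or sliding-gate opening command.
                return self.kp * error + self.ki * self.integral + self.kd * derivative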

  9. PRESTO-II: a low-level waste environmental transport and risk assessment code

    Energy Technology Data Exchange (ETDEWEB)

    Fields, D.E.; Emerson, C.J.; Chester, R.O.; Little, C.A.; Hiromoto, G.

    1986-04-01

    PRESTO-II (Prediction of Radiation Effects from Shallow Trench Operations) is a computer code designed for the evaluation of possible health effects from shallow-land waste-disposal trenches. The model is intended to serve as a non-site-specific screening model for assessing radionuclide transport, ensuing exposure, and health impacts to a static local population for a 1000-year period following the end of disposal operations. Human exposure scenarios considered include normal releases (including leaching and operational spillage), human intrusion, and limited site farming or reclamation. Pathways and processes of transit from the trench to an individual or population include ground-water transport, overland flow, erosion, surface water dilution, suspension, atmospheric transport, deposition, inhalation, external exposure, and ingestion of contaminated beef, milk, crops, and water. Both population doses and individual doses, as well as doses to the intruder and farmer, may be calculated. Cumulative health effects in terms of cancer deaths are calculated for the population over the 1000-year period using a life-table approach. Data are included for three example sites: Barnwell, South Carolina; Beatty, Nevada; and West Valley, New York. A code listing and example input for each of the three sites are included in the appendices to this report.

  10. PRESTO-II: a low-level waste environmental transport and risk assessment code

    International Nuclear Information System (INIS)

    Fields, D.E.; Emerson, C.J.; Chester, R.O.; Little, C.A.; Hiromoto, G.

    1986-04-01

    PRESTO-II (Prediction of Radiation Effects from Shallow Trench Operations) is a computer code designed for the evaluation of possible health effects from shallow-land waste-disposal trenches. The model is intended to serve as a non-site-specific screening model for assessing radionuclide transport, ensuing exposure, and health impacts to a static local population for a 1000-year period following the end of disposal operations. Human exposure scenarios considered include normal releases (including leaching and operational spillage), human intrusion, and limited site farming or reclamation. Pathways and processes of transit from the trench to an individual or population include ground-water transport, overland flow, erosion, surface water dilution, suspension, atmospheric transport, deposition, inhalation, external exposure, and ingestion of contaminated beef, milk, crops, and water. Both population doses and individual doses, as well as doses to the intruder and farmer, may be calculated. Cumulative health effects in terms of cancer deaths are calculated for the population over the 1000-year period using a life-table approach. Data are included for three example sites: Barnwell, South Carolina; Beatty, Nevada; and West Valley, New York. A code listing and example input for each of the three sites are included in the appendices to this report

  11. Systems-level Proteomics of Two Ubiquitous Leaf Commensals Reveals Complementary Adaptive Traits for Phyllosphere Colonization.

    Science.gov (United States)

    Müller, Daniel B; Schubert, Olga T; Röst, Hannes; Aebersold, Ruedi; Vorholt, Julia A

    2016-10-01

    Plants are colonized by a diverse community of microorganisms, the plant microbiota, exhibiting a defined and conserved taxonomic structure. Niche separation based on spatial segregation and complementary adaptation strategies likely forms the basis for coexistence of the various microorganisms in the plant environment. To gain insights into organism-specific adaptations on a molecular level, we selected two exemplary community members of the core leaf microbiota and profiled their proteomes upon Arabidopsis phyllosphere colonization. The highly quantitative mass spectrometric technique SWATH MS was used and allowed for the analysis of over two thousand proteins spanning more than three orders of magnitude in abundance for each of the model strains. The data suggest that Sphingomonas melonis utilizes amino acids and hydrocarbon compounds during colonization of leaves whereas Methylobacterium extorquens relies on methanol metabolism in addition to oxalate metabolism, aerobic anoxygenic photosynthesis and alkanesulfonate utilization. Comparative genomic analyses indicate that utilization of oxalate and alkanesulfonates is widespread among leaf microbiota members, whereas aerobic anoxygenic photosynthesis is almost exclusively found in Methylobacteria. Despite the apparent niche separation between these two strains, we also found a relatively small subset of proteins to be coregulated, indicating common mechanisms underlying successful leaf colonization. Overall, our results reveal for two ubiquitous phyllosphere commensals species-specific adaptations to the host environment and provide evidence for niche separation within the plant microbiota. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  12. Systems-level Proteomics of Two Ubiquitous Leaf Commensals Reveals Complementary Adaptive Traits for Phyllosphere Colonization*

    Science.gov (United States)

    Müller, Daniel B.; Schubert, Olga T.; Röst, Hannes; Aebersold, Ruedi; Vorholt, Julia A.

    2016-01-01

    Plants are colonized by a diverse community of microorganisms, the plant microbiota, exhibiting a defined and conserved taxonomic structure. Niche separation based on spatial segregation and complementary adaptation strategies likely forms the basis for coexistence of the various microorganisms in the plant environment. To gain insights into organism-specific adaptations on a molecular level, we selected two exemplary community members of the core leaf microbiota and profiled their proteomes upon Arabidopsis phyllosphere colonization. The highly quantitative mass spectrometric technique SWATH MS was used and allowed for the analysis of over two thousand proteins spanning more than three orders of magnitude in abundance for each of the model strains. The data suggest that Sphingomonas melonis utilizes amino acids and hydrocarbon compounds during colonization of leaves whereas Methylobacterium extorquens relies on methanol metabolism in addition to oxalate metabolism, aerobic anoxygenic photosynthesis and alkanesulfonate utilization. Comparative genomic analyses indicate that utilization of oxalate and alkanesulfonates is widespread among leaf microbiota members, whereas aerobic anoxygenic photosynthesis is almost exclusively found in Methylobacteria. Despite the apparent niche separation between these two strains, we also found a relatively small subset of proteins to be coregulated, indicating common mechanisms underlying successful leaf colonization. Overall, our results reveal for two ubiquitous phyllosphere commensals species-specific adaptations to the host environment and provide evidence for niche separation within the plant microbiota. PMID:27457762

  13. Regional Interdependence in Adaptation to Sea Level Rise and Coastal Flooding

    Science.gov (United States)

    Stacey, M. T.; Lubell, M.; Hummel, M.; Wang, R. Q.; Barnard, P.; Erikson, L. H.; Herdman, L.; Pozdnukhov, A.; Sheehan, M.

    2017-12-01

    Projections of sea level rise may differ in the pace of change, but there is clear consensus that coastal communities will be facing more frequent and severe flooding events in the coming century. As communities adapt to future conditions, infrastructure systems will be developed, modified and abandoned, with important consequences for services and resilience. Whether action or inaction is pursued, the decisions made by an individual community regarding a single infrastructure system have implications that extend spatially and temporally due to geographic and infrastructure system interactions. At the same time, there are a number of barriers to collective or coordinated action that inhibit regional solutions. This interplay between local actions and regional responses is one of the great challenges facing decision-makers grappling with both local and regional climate-change adaptation. In this talk, I present case studies of the San Francisco Bay Area that examine how shoreline infrastructure, transportation systems and decision-making networks interact to define the regional response to local actions and the local response to regional actions. I will characterize the barriers that exist to regional solutions, and characterize three types of interdependence that may motivate decision-makers to overcome those barriers. Using these examples, I will discuss the importance of interdisciplinary analyses that integrate the natural sciences, engineering and the social sciences for climate change adaptation more generally.

  14. Climate Change Adaptation Tools at the Community Level: An Integrated Literature Review

    Directory of Open Access Journals (Sweden)

    Elvis Modikela Nkoana

    2018-03-01

    Full Text Available The negative impacts of climate change are experienced at the global, regional and local levels. However, rural communities in sub-Saharan Africa face socio-political, cultural and economic challenges in addition to climate change. Decision support tools have been developed and applied to assist rural communities to cope with and adapt to climate change. However, poorly planned participatory processes and the lack of context-specific approaches in these tools are obstacles when aiming at strengthening the resilience of these rural communities. This paper uses an integrated literature review to identify best practices for involving rural communities in climate change adaptation efforts through the application of context-specific and culturally-sensitive climate change adaptation tools. These best practices include the use of a livelihoods approach to engage communities; the explicit acknowledgement of the local cultural do’s and don’ts; the recognition of local champions appointed from within the local community; the identification and prioritisation of vulnerable stakeholders; and the implementation of a two-way climate change risk communication instead of a one-sided information sharing approach.

  15. Climate Change Adaptation Strategies and Farm-level Efficiency in Food Crop Production in Southwestern, Nigeria

    Directory of Open Access Journals (Sweden)

    Otitoju, MA.

    2014-01-01

    Full Text Available Food crop yields depend largely on prevailing climate conditions, especially in Africa, where rain-fed agriculture predominates. The extent to which climate impacts are felt depends principally on the adaptation measures used by farmers. This study focused on the effect of climate change adaptation strategies on farm-level technical efficiency. The study used primary data collected from 360 randomly selected farmers in Southwest Nigeria. A Cobb-Douglas stochastic frontier production model was used to analyse the data. Multiple cropping, land fragmentation, multiple planting dates, mulching and cover cropping were the major climate change adaptation strategies employed by the farmers. While land fragmentation and multiple planting dates had significant positive relationships with technical inefficiency, years of climate change awareness and social capital had significant inverse relationships with it. This may be because while land fragmentation may hinder farm mechanization, multiple planting dates may increase the monotonousness and drudgery of farming. On the other hand, social capital and climate change awareness could help ameliorate the effects of, particularly, land fragmentation through resource pooling. It is therefore recommended that the farmers be encouraged to form cooperative societies so as to leverage their resource status through collective efforts.
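
    For reference, the generic form of a Cobb-Douglas stochastic frontier production model is shown below in LaTeX notation (not the paper's exact specification), where v_i is ordinary noise and u_i >= 0 captures technical inefficiency.

        \ln y_i = \beta_0 + \sum_k \beta_k \ln x_{ik} + v_i - u_i,
        \qquad v_i \sim N(0, \sigma_v^2), \quad u_i \ge 0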

  16. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    Science.gov (United States)

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reactions and particle size changes due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
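
    A minimal sketch of an implicit (and hence stiffness-tolerant) update for a particle velocity relaxing toward the local gas velocity through a linear drag law, the kind of integration step an efficient Eulerian-Lagrangian tracker needs; the names and the simple drag model are illustrative, not the FDNS-3DEL formulation. Inputs may be floats or NumPy arrays.

        def implicit_particle_step(x_p, v_p, u_gas, tau_p, dt):
            """One backward-Euler step of dv/dt = (u_gas - v)/tau_p, followed by a
            position update; stable even when dt greatly exceeds tau_p."""
            v_new = (v_p + (dt / tau_p) * u_gas) / (1.0 + dt / tau_p)
            x_new = x_p + dt * v_new
            return x_new, v_new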

  17. Analyzing extreme sea levels for broad-scale impact and adaptation studies

    Science.gov (United States)

    Wahl, T.; Haigh, I. D.; Nicholls, R. J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A.

    2017-12-01

    Coastal impact and adaptation assessments require detailed knowledge on extreme sea levels (ESL), because increasing damage due to extreme events is one of the major consequences of sea-level rise (SLR) and climate change. Over the last few decades, substantial research efforts have been directed towards improved understanding of past and future SLR; different scenarios were developed with process-based or semi-empirical models and used for coastal impact studies at various temporal and spatial scales to guide coastal management and adaptation efforts. Uncertainties in future SLR are typically accounted for by analyzing the impacts associated with a range of scenarios and model ensembles. ESL distributions are then displaced vertically according to the SLR scenarios under the inherent assumption that we have perfect knowledge on the statistics of extremes. However, there is still a limited understanding of present-day ESL which is largely ignored in most impact and adaptation analyses. The two key uncertainties stem from: (1) numerical models that are used to generate long time series of storm surge water levels, and (2) statistical models used for determining present-day ESL exceedance probabilities. There is no universally accepted approach to obtain such values for broad-scale flood risk assessments and while substantial research has explored SLR uncertainties, we quantify, for the first time globally, key uncertainties in ESL estimates. We find that contemporary ESL uncertainties exceed those from SLR projections and, assuming that we meet the Paris agreement, the projected SLR itself by the end of the century. Our results highlight the necessity to further improve our understanding of uncertainties in ESL estimates through (1) continued improvement of numerical and statistical models to simulate and analyze coastal water levels and (2) exploiting the rich observational database and continuing data archeology to obtain longer time series and remove model bias.

  18. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs

    Directory of Open Access Journals (Sweden)

    Kishore R. Mosaliganti

    2013-12-01

    Full Text Available In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations, which restricts future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK v4) architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g. gradient and Hessians) across multiple terms are eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a
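
    A toy illustration of the container-of-terms idea (not the actual ITK v4 API): the PDE right-hand side is the sum of independently supplied term functors, so terms can be added or removed without touching the evolution loop.

        def evolve_level_set(phi, terms, dt, n_steps):
            """Evolve phi under d(phi)/dt = -(sum of term contributions).
            terms is a list of callables, each mapping phi to its PDE term field."""
            for _ in range(n_steps):
                phi = phi - dt * sum(term(phi) for term in terms)
            return phi

        # Terms can be registered or removed freely, e.g. (hypothetical callables):
        #   terms = [curvature_term, propagation_term]
        #   terms.append(advection_term)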

  19. S values at voxels level for 188Re and 90Y calculated with the MCNP-4C code

    International Nuclear Information System (INIS)

    Coca, M.A.; Torres, L.A.; Cornejo, N.; Martin, G.

    2008-01-01

    Full text: The MIRD formalism at the voxel level has been suggested as an optional methodology for internal radiation dosimetry calculations during internal radiation therapy in Nuclear Medicine. Voxel S values for 90Y, 131I, 32P, 99mTc and 89Sr have been published for different voxel sizes. Currently, 188Re has been proposed as a promising radionuclide for therapy due to its physical features and availability from generators. The main objective of this work was to estimate the voxel S values for 188Re in cubical geometry using the MCNP-4C code for the simulation of radiation transport and energy deposition. Mean absorbed doses to target voxels per radioactive decay in a source voxel were estimated and reported for 188Re and 90Y. A comparison of the voxel S values computed with the MCNP code and the data reported in MIRD Pamphlet 17 for 90Y was performed in order to evaluate our results. (author)
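
    In the MIRD voxel formalism the mean absorbed dose to a target voxel is the sum over source voxels of their cumulated activity times the voxel S value for the source-target offset; a minimal sketch of that convolution is shown below with an assumed precomputed S-value kernel (names are illustrative, not from the paper).

        import numpy as np
        from scipy.ndimage import convolve

        def voxel_dose(cumulated_activity, s_kernel):
            """Mean absorbed dose per voxel: D(r_T) = sum_S A(r_S) * S(r_T - r_S).
            cumulated_activity : 3-D array of cumulated activity per voxel (Bq*s)
            s_kernel           : 3-D array of voxel S values (Gy per Bq*s), centered
            """
            return convolve(cumulated_activity, s_kernel, mode="constant", cval=0.0)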

  20. Motivations for Local Climate Adaptation in Dutch Municipalities: Climate Change Impacts and the Role of Local-Level Government

    NARCIS (Netherlands)

    van den Berg, Maya Marieke

    2009-01-01

    The local government level is considered to be crucial in preparing society for climate change impact. Yet little is known about why local authorities do or do not take action to adapt their community for climate change impacts. In order to implement effective adaptation policy, the motivations for

  1. From Algorithmic Black Boxes to Adaptive White Boxes: Declarative Decision-Theoretic Ethical Programs as Codes of Ethics

    OpenAIRE

    van Otterlo, Martijn

    2017-01-01

    Ethics of algorithms is an emerging topic in various disciplines such as social science, law, and philosophy, but also artificial intelligence (AI). The value alignment problem expresses the challenge of (machine) learning values that are, in some way, aligned with human requirements or values. In this paper I argue for looking at how humans have formalized and communicated values, in professional codes of ethics, and for exploring declarative decision-theoretic ethical programs (DDTEP) to fo...

  2. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    International Nuclear Information System (INIS)

    2014-01-01

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include a detailed outline of the entire processing procedure for application of CENTRM in a final report complete with demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.

  3. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    Energy Technology Data Exchange (ETDEWEB)

    Ganapol, Barry; Maldonado, Ivan

    2014-01-23

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include a detailed outline of the entire processing procedure for application of CENTRM in a final report complete with demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.

  4. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    International Nuclear Information System (INIS)

    Ganapol, Barry; Maldonodo, Ivan

    2014-01-01

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include a detailed outline of the entire processing procedure for application of CENTRM in a final report complete with demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity

  5. Community Vitality: The Role of Community-Level Resilience Adaptation and Innovation in Sustainable Development

    Directory of Open Access Journals (Sweden)

    Lenore Newman

    2010-01-01

    Full Text Available Community level action towards sustainable development has emerged as a key scale of intervention in the effort to address our many serious environmental issues. This is hindered by the large-scale destruction of both urban neighbourhoods and rural villages in the second half of the twentieth century. Communities, whether they are small or large, hubs of experimentation or loci of traditional techniques and methods, can be said to have a level of community vitality that acts as a site of resilience, adaptation and innovation in the face of environmental challenges. This paper outlines how community vitality acts as a cornerstone of sustainable development and suggests some courses for future research. A meta-case analysis of thirty-five Canadian communities reveals the characteristics of community vitality emerging from sustainable development experiments and its relationship to resilience, applied specifically to community development.

  6. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    Science.gov (United States)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases including the 3D drop breakup in an impulsively accelerated free stream, and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.
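
    As background to the formulation mentioned above, the core of any level set free-surface solver is an advection step for the signed-distance field; the anisotropic Cartesian adaptation and collocated discretization of the paper sit on top of this. A minimal uniform-grid sketch (first-order upwind advection of a circular interface in a prescribed rotating velocity field, not the authors' adaptive scheme):

```python
import numpy as np

n, dx, dt = 64, 1.0 / 64, 0.2 / 64
y, x = np.mgrid[0:n, 0:n] * dx
phi = np.sqrt((x - 0.5)**2 + (y - 0.75)**2) - 0.15   # circle interface
u, v = -(y - 0.5), (x - 0.5)                          # solid-body rotation field

def upwind_advect(phi, u, v, dx, dt):
    """First-order upwind advection: phi_t + u phi_x + v phi_y = 0 (periodic BCs)."""
    dxm = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference in x
    dxp = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference in x
    dym = (phi - np.roll(phi, 1, axis=0)) / dx
    dyp = (np.roll(phi, -1, axis=0) - phi) / dx
    phi_x = np.where(u > 0, dxm, dxp)
    phi_y = np.where(v > 0, dym, dyp)
    return phi - dt * (u * phi_x + v * phi_y)

for step in range(200):
    phi = upwind_advect(phi, u, v, dx, dt)

print("interface area fraction:", (phi < 0).mean())   # mass loss hints at resolution needs
```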

  7. User instructions for levelized power generation cost codes using an IBM-type PC

    International Nuclear Information System (INIS)

    Coen, J.J.; Delene, J.G.

    1989-01-01

    Programs for the calculation of levelized power generation costs using an IBM or compatible PC are described. Cost calculations for nuclear plants and coal-fired plants include capital investment cost, operation and maintenance cost, fuel cycle cost, decommissioning cost, and total levelized power generation cost. 7 refs., 36 figs., 4 tabs
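
    The levelized-cost arithmetic such codes implement can be sketched compactly: each annual cost stream is discounted to present value and divided by the discounted energy output. The sketch below uses hypothetical input figures, not the report's nuclear or coal data.

```python
def levelized_cost(capital, annual_om, annual_fuel, decommissioning,
                   annual_mwh, lifetime_years, discount_rate):
    """Levelized generation cost in $/MWh: discounted costs / discounted energy."""
    pv_costs = capital
    pv_energy = 0.0
    for year in range(1, lifetime_years + 1):
        d = (1.0 + discount_rate) ** year
        pv_costs += (annual_om + annual_fuel) / d
        pv_energy += annual_mwh / d
    pv_costs += decommissioning / (1.0 + discount_rate) ** lifetime_years
    return pv_costs / pv_energy

# usage with illustrative figures (not taken from the report)
print(levelized_cost(capital=3.0e9, annual_om=8.0e7, annual_fuel=5.0e7,
                     decommissioning=5.0e8, annual_mwh=8.0e6,
                     lifetime_years=40, discount_rate=0.05))
```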

  8. Climate change adaptation under uncertainty in the developing world: A case study of sea level rise in Kiribati

    Science.gov (United States)

    Donner, S. D.; Webber, S.

    2011-12-01

    Climate change is expected to have the greatest impact in parts of the developing world. At the 2010 meeting of U.N. Framework Convention on Climate Change in Cancun, industrialized countries agreed in principle to provide US$100 billion per year by 2020 to assist the developing world respond to climate change. This "Green Climate Fund" is a critical step towards addressing the challenge of climate change. However, the policy and discourse on supporting adaptation in the developing world remains highly idealized. For example, the efficacy of "no regrets" adaptation efforts or "mainstreaming" adaptation into decision-making are rarely evaluated in the real world. In this presentation, I will discuss the gap between adaptation theory and practice using a multi-year case study of the cultural, social and scientific obstacles to adapting to sea level rise in the Pacific atoll nation of Kiribati. Our field research reveals how scientific and institutional uncertainty can limit international efforts to fund adaptation and lead to spiraling costs. Scientific uncertainty about hyper-local impacts of sea level rise, though irreducible, can at times limit decision-making about adaptation measures, contrary to the notion that "good" decision-making practices can incorporate scientific uncertainty. Efforts to improve institutional capacity must be done carefully, or they risk inadvertently slowing the implementation of adaptation measures and increasing the likelihood of "mal"-adaptation.

  9. Adaptation and perceptual norms

    Science.gov (United States)

    Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole

    2007-02-01

    We used adaptation to examine the relationship between perceptual norms--the stimuli observers describe as psychologically neutral, and response norms--the stimulus levels that leave visual sensitivity in a neutral or balanced state. Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.

  10. Out-of-Core Computations of High-Resolution Level Sets by Means of Code Transformation

    DEFF Research Database (Denmark)

    Christensen, Brian Bunch; Nielsen, Michael Bang; Museth, Ken

    2012-01-01

    We propose a storage efficient, fast and parallelizable out-of-core framework for streaming computations of high resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations which allow for the combination of interface propagation, re...... computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations....
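
    To give a flavour of the underlying idea (not the authors' framework): tiling splits the level set grid into chunks small enough to hold in core, and each chunk is streamed in with a halo, processed, and written back, so disk traffic can overlap with compute. A minimal one-dimensional Python sketch of tiled stencil processing, with memory-mapped arrays standing in for out-of-core storage:

```python
import numpy as np
import tempfile, os

# Memory-mapped arrays standing in for an out-of-core level set grid.
path = os.path.join(tempfile.mkdtemp(), "phi.dat")
n, tile = 1_000_000, 65_536
phi = np.memmap(path, dtype=np.float32, mode="w+", shape=(n,))
phi[:] = np.linspace(-1.0, 1.0, n)          # some initial signed values
out = np.memmap(path + ".out", dtype=np.float32, mode="w+", shape=(n,))

def smooth_tile(chunk):
    """Simple 3-point smoothing stencil applied to an in-core tile."""
    res = chunk.copy()
    res[1:-1] = 0.25 * chunk[:-2] + 0.5 * chunk[1:-1] + 0.25 * chunk[2:]
    return res

# Stream the grid tile by tile, with a 1-cell halo on each side.
for start in range(0, n, tile):
    lo, hi = max(start - 1, 0), min(start + tile + 1, n)
    halo_chunk = np.array(phi[lo:hi])        # read tile + halo into core
    smoothed = smooth_tile(halo_chunk)
    width = min(tile, n - start)
    out[start:start + width] = smoothed[start - lo:start - lo + width]

print(float(out[:10].mean()))
```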

  11. Adaptation to Sea Level Rise in Coastal Units of the National Park Service (Invited)

    Science.gov (United States)

    Beavers, R. L.

    2010-12-01

    National Park units with Natural, Cultural and Historic Resource-based data products and management documents that will aid the parks in better managing aspects of storm-preparedness and post-storm response and recovery. These results as well as specific efforts to address vulnerability of NPS facilities and natural and cultural resources to sea level rise will be discussed. NPS is also coordinating with NOAA to fill a new position for coastal adaptation and apply the information learned from research, vulnerability studies, and work with partners to develop adaptation strategies for coastal and ocean parks. To adapt to sea level rise, NPS will develop strong policies, guidance, and interpretive materials to help parks take actions that will increase the resilience of ocean and coastal park biological and geologic resources, reduce inappropriate stressors and greenhouse gas emissions in ocean and coastal parks, and educate the public about the need for comprehensive, swift and effective measures that will help the NPS conserve ocean and coastal park resources for future generations.

  12. Code Description for Generation of Meteorological Height and Pressure Level and Layer Profiles

    Science.gov (United States)

    2016-06-01

    ... defined by user-input height or pressure levels. It can process input profiles from sensing systems such as radiosonde, lidar, or wind-profiling radar; a different input routine may be required for different input types and formats. Keywords: meteorological sounding interpolation, integrated mean layer values; US Army Research... or other radiosonde soundings. There are 2 main versions or “methods” that produce output in height- or pressure-based profiles of interpolated levels.
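
    A sketch of the basic operation such a routine performs: interpolating a radiosonde-style profile onto user-defined output heights (linear in height here; a pressure-based version would typically interpolate in log pressure). Variable names and values are hypothetical and do not reflect the report's code.

```python
import numpy as np

# Hypothetical input sounding: height (m), temperature (K), pressure (hPa)
z_obs = np.array([   0.,  250.,  700., 1500., 3000., 5500.])
t_obs = np.array([ 298.,  296.,  293.,  288.,  279.,  263.])
p_obs = np.array([1013.,  984.,  933.,  846.,  701.,  504.])

z_out = np.arange(0.0, 5001.0, 500.0)                     # user-defined output height levels

t_out = np.interp(z_out, z_obs, t_obs)                    # linear interpolation in height
p_out = np.exp(np.interp(z_out, z_obs, np.log(p_obs)))    # log-linear interpolation for pressure

# Layer-mean temperature between successive output levels (simple trapezoid mean)
t_layer = 0.5 * (t_out[:-1] + t_out[1:])

for z, t, p in zip(z_out, t_out, p_out):
    print(f"{z:6.0f} m  {t:6.1f} K  {p:7.1f} hPa")
```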

  13. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of a major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
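
    The core of a byte-level n-gram approach is easy to prototype: build a profile of the most frequent byte n-grams per author and score a disputed program by how much its profile overlaps each author's. The sketch below uses a plain intersection-over-union similarity as a stand-in for the paper's simplified profile and measure; the toy "programs" are purely illustrative.

```python
from collections import Counter

def byte_ngram_profile(source: bytes, n: int = 3, top: int = 500) -> set:
    """Set of the `top` most frequent byte n-grams in a program."""
    grams = Counter(source[i:i + n] for i in range(len(source) - n + 1))
    return {g for g, _ in grams.most_common(top)}

def similarity(profile_a: set, profile_b: set) -> float:
    """Simple overlap measure (stand-in for the paper's simplified measure)."""
    if not profile_a or not profile_b:
        return 0.0
    return len(profile_a & profile_b) / len(profile_a | profile_b)

# usage with toy "programs"; real use would read whole source files as bytes
author_samples = {
    "alice": b"for(int i=0;i<n;i++){ total += values[i]; }",
    "bob":   b"int i = 0;\nwhile (i < n) {\n    total = total + values[i];\n    i++;\n}",
}
disputed = b"for(int j=0;j<m;j++){ sum += data[j]; }"

profiles = {a: byte_ngram_profile(src) for a, src in author_samples.items()}
query = byte_ngram_profile(disputed)
best = max(profiles, key=lambda a: similarity(query, profiles[a]))
print("most likely author:", best)
```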

  14. Environmental remediation of high-level nuclear waste in geological repository. Modified computer code creates ultimate benchmark in natural systems

    International Nuclear Information System (INIS)

    Peter, Geoffrey J.

    2011-01-01

    Isolation of high-level nuclear waste in permanent geological repositories has been a major concern for over 30 years due to the migration of dissolved radionuclides reaching the water table (10,000-year compliance period) as water moves through the repository and the surrounding area. Repository designs are based on mathematical models that allow for long-term geological phenomena and involve many approximations; however, experimental verification of long-term processes is impossible. Countries must determine if geological disposal is adequate for permanent storage. Many countries have extensively studied different aspects of safely confining the highly radioactive waste in an underground repository based on the unique geological composition at their selected repository location. This paper discusses two computer codes developed by various countries to study the coupled thermal, mechanical, and chemical processes in these environments, and the migration of radionuclides. Further, this paper presents the results of a case study of the Magma-hydrothermal (MH) computer code, modified by the author, applied to nuclear waste repository analysis. The MH code was verified by simulating natural systems, thus creating the ultimate benchmark. This approach is based on processes currently occurring in natural systems that are similar to those expected near waste repositories. (author)

  15. State of Mechanisms of Adaptation to Teaching Loads for High-school Students with Different Levels of Professional Preparedness

    Directory of Open Access Journals (Sweden)

    G.N. Danilenko

    2013-04-01

    Full Text Available Evaluation of functional adaptability of 69 high-school students with different levels of professional preparedness had been carried out. The dynamics of the indices of heart rate variability and hemodynamics indices during the academic year had been studied. The difference in adaptive capacity, depending on the personal characteristics of students, the level of preparedness of adolescents to professional choice had been shown.

  16. Adaptation of copper community tolerance levels after biofilm transplantation in an urban river.

    Science.gov (United States)

    Fechner, Lise C; Versace, François; Gourlay-Francé, Catherine; Tusseau-Vuillemin, Marie-Hélène

    2012-01-15

    The Water Framework Directive requires the development of biological tools which can act as early-warning indicators of a sudden increase (accidental pollution) or decrease (recovery due to prevention) of the chemical status of aquatic systems. River biofilms, which respond quickly to modifications of environmental parameters and also play a key part in the functioning of aquatic ecosystems, are therefore good candidates to monitor an increase or a decrease of water pollution. In the present study, we investigated the biological response of biofilms transplanted either upstream (recovery) or downstream (deterioration of exposure levels) the urban area of Paris (France). Both modifications of Cu community tolerance levels and of global bacterial and eukaryotic community structure using automated ribosomal intergenic spacer analysis (ARISA) fingerprints were examined 15 and 30 days after the transplantation. Cu tolerance levels of the heterotrophic component of biofilms were assessed using a short-term toxicity test based on β-glucosidase (heterotrophic) activity. Cu tolerance increased for biofilms transplanted upstream to downstream Paris (5-fold increase on day 30) and conversely decreased for biofilms transplanted downstream to upstream (8-fold decrease on day 30). ARISA fingerprints revealed that bacterial and eukaryotic community structures of transplanted biofilms were closer to the structures of biofilms from the transplantation sites (or sites with similar contamination levels) than to biofilms from their sites of origin. Statistical analysis of the data confirmed that the key factor explaining biofilm Cu tolerance levels is the sampling site and not the site of origin. It also showed that Cu tolerance levels are related to the global urban contamination (both metals and nutrients). The study shows that biofilms adapt fast to modifications of their surroundings. In particular, community tolerance varies quickly and reflects the new exposure levels only 15

  17. Clearance of low levels of HCV viremia in the absence of a strong adaptive immune response

    Directory of Open Access Journals (Sweden)

    Manns Michael P

    2007-06-01

    Full Text Available Abstract Spontaneous clearance of hepatitis C virus (HCV) has frequently been associated with the presence of HCV-specific cellular immunity. However, there have also been reports in chimpanzees demonstrating clearance of HCV viremia in the absence of significant levels of detectable HCV-specific cellular immune responses. We here report seven asymptomatic acute hepatitis C cases with peak HCV-RNA levels between 300 and 100,000 copies/ml who all cleared HCV-RNA spontaneously. Patients were identified by a systematic screening of 1176 consecutive new incoming offenders in a German young offender institution. Four of the seven patients never developed anti-HCV antibodies and had normal ALT levels throughout follow-up. Transient weak HCV-specific CD4+ T cell responses were detectable in five individuals, which did not differ in strength and breadth from age- and sex-matched patients with chronic hepatitis C and long-term recovered patients. In contrast, HCV-specific MHC-class-I-tetramer-positive cells were found in 3 of 4 HLA-A2-positive patients. Thus, these cases highlight that clearance of low levels of HCV viremia is possible in the absence of a strong adaptive immune response, which might explain the low seroconversion rate after occupational exposure to HCV.

  18. Levels of processing and the coding of position cues in motor short-term memory.

    Science.gov (United States)

    Ho, L; Shea, J B

    1978-06-01

    The present study investigated the appropriateness of the levels-of-processing framework of memory for explaining retention of information in motor short-term memory. Subjects were given labels descriptive of the positions to be remembered by the experimenter (EL), were given no labels (NL), or provided their own labels (SL). A control group (CONT) was required to count backwards during the presentation of the criterion positions. The inclusion of a 30-sec filled retention interval as well as 0-sec and 30-sec unfilled retention intervals tested a prediction by Craik and Lockhart (1972) that, when attention is diverted from an item, information will be lost at a rate appropriate to its level of processing - that is, at slower rates for deeper levels. Groups EL and SL had greater accuracy at recall for all three retention intervals than groups CONT and NL. In addition, there was no significant increase in error between 30-sec unfilled and 30-sec filled intervals for groups EL and SL, while there was a significant increase in error for groups CONT and NL. The data were interpreted in terms of Craik and Lockhart's (1972) levels-of-processing approach to memory.

  19. Underwater Image Enhancement by Adaptive Gray World and Differential Gray-Levels Histogram Equalization

    Directory of Open Access Journals (Sweden)

    WONG, S.-L.

    2018-05-01

    Full Text Available Most underwater images tend to be dominated by a single color cast. This paper presents a solution to remove the color cast and improve the contrast in underwater images. However, after the removal of the color cast using the Gray World (GW) method, the resultant image is not visually pleasing. Hence, we propose an integrated approach using Adaptive GW (AGW) and Differential Gray-Levels Histogram Equalization (DHE) that operate in parallel. The AGW is applied to remove the color cast while DHE is used to improve the contrast of the underwater image. The outputs of both chromaticity components of AGW and intensity components of DHE are combined to form the enhanced image. The results of the proposed method are compared with three existing methods using qualitative and quantitative measures. The proposed method increases the visibility of underwater images and in most cases produces better quantitative scores when compared to the three existing methods.
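
    As a point of reference for the two building blocks the paper combines, the sketch below applies a plain Gray World gain to the color channels and a global histogram equalization to the intensity channel. It is a simplified stand-in under those assumptions; the paper's AGW and differential gray-levels HE variants adapt these steps further.

```python
import numpy as np

def gray_world(img):
    """Scale each channel so its mean matches the global mean (plain GW)."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
    return np.clip(img * gains, 0, 255)

def equalize_intensity(img):
    """Global histogram equalization applied to the mean-intensity channel."""
    img = img.astype(np.float64)
    intensity = img.mean(axis=2)
    hist, bins = np.histogram(intensity.ravel(), bins=256, range=(0, 255))
    cdf = hist.cumsum() / hist.sum()
    eq = np.interp(intensity.ravel(), bins[:-1], cdf * 255).reshape(intensity.shape)
    scale = (eq + 1e-6) / (intensity + 1e-6)
    return np.clip(img * scale[..., None], 0, 255)

# usage on a synthetic image with a green-blue cast
rng = np.random.default_rng(0)
img = rng.integers(0, 120, size=(64, 64, 3)).astype(np.float64)
img[..., 1:] += 80                                   # bias green and blue channels
enhanced = equalize_intensity(gray_world(img)).astype(np.uint8)
print(enhanced.mean(axis=(0, 1)))                    # channel means after correction
```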

  20. Adaptation to climate change and climate variability in European agriculture: The importance of farm level responses

    NARCIS (Netherlands)

    Reidsma, P.; Ewert, F.; Oude Lansink, A.G.J.M.; Leemans, R.

    2010-01-01

    Climatic conditions and hence climate change influence agriculture. Most studies that addressed the vulnerability of agriculture to climate change have focused on potential impacts without considering adaptation. When adaptation strategies are considered, socio-economic conditions and farm

  1. Adaptation of the Abaqus thermomechanics code to simulate 3D multipellet steady and transient WWER fuel rod behavior

    International Nuclear Information System (INIS)

    Kuznetsov, A.V.; Kuznetsov, V.I.; Krupkin, A.V.; Novikov, V.V.

    2015-01-01

    The capabilities of the Abaqus technology for modeling the behavior of a WWER-1000 fuel element over its operating campaign were studied, taking into account the following features: multi-contact thermomechanical interaction of the fuel pellets and the cladding, creep and swelling of the fuel, creep of the cladding, and the mechanisms of thermophysical and mechanical behavior of the fuel-cladding gap. The code was tested on the following finite element models: a 3D fuel element model with five fuel pellets, a 3D fuel element model with one fuel pellet and a cleavage in the gap, and a 3D model of a fuel rod section with one randomly fragmented pellet. The position of the WWER-1000 fuel rod section in the middle of the core, and the loads and material properties corresponding to this location, were considered. The principal feasibility of using the Abaqus technology for solving fuel design problems is shown [ru]

  2. Effect of background noise on neuronal coding of interaural level difference cues in rat inferior colliculus.

    Science.gov (United States)

    Mokri, Yasamin; Worland, Kate; Ford, Mark; Rajan, Ramesh

    2015-07-01

    Humans can accurately localize sounds even in unfavourable signal-to-noise conditions. To investigate the neural mechanisms underlying this, we studied the effect of background wide-band noise on neural sensitivity to variations in interaural level difference (ILD), the predominant cue for sound localization in azimuth for high-frequency sounds, at the characteristic frequency of cells in rat inferior colliculus (IC). Binaural noise at high levels generally resulted in suppression of responses (55.8%), but at lower levels resulted in enhancement (34.8%) as well as suppression (30.3%). When recording conditions permitted, we then examined if any binaural noise effects were related to selective noise effects at each of the two ears, which we interpreted in light of well-known differences in input type (excitation and inhibition) from each ear shaping particular forms of ILD sensitivity in the IC. At high signal-to-noise ratios (SNR), in most ILD functions (41%), the effect of background noise appeared to be due to effects on inputs from both ears, while for a large percentage (35.8%) appeared to be accounted for by effects on excitatory input. However, as SNR decreased, change in excitation became the dominant contributor to the change due to binaural background noise (63.6%). These novel findings shed light on the IC neural mechanisms for sound localization in the presence of continuous background noise. They also suggest that some effects of background noise on encoding of sound location reported to be emergent in upstream auditory areas can also be observed at the level of the midbrain. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  3. Climate change and unequal phenological changes across four trophic levels: constraints or adaptations?

    Science.gov (United States)

    Both, Christiaan; van Asch, Margriet; Bijlsma, Rob G; van den Burg, Arnold B; Visser, Marcel E

    2009-01-01

    1. Climate change has been shown to affect the phenology of many organisms, but interestingly these shifts are often unequal across trophic levels, causing a mismatch between the phenology of organisms and their food. 2. We consider two alternative hypotheses: consumers are constrained to adjust sufficiently to the lower trophic level, or prey species react more strongly than their predators to reduce predation. We discuss both hypotheses with our analyses of changes in phenology across four trophic levels: tree budburst, peak biomass of herbivorous caterpillars, breeding phenology of four insectivorous bird species and an avian predator. 3. In our long-term study, we show that between 1988 and 2005, budburst advanced (not significantly) with 0.17 d yr(-1), while between 1985 and 2005 both caterpillars (0.75 d year(-1)) and the hatching date of the passerine species (range for four species: 0.36-0.50 d year(-1)) have advanced, whereas raptor hatching dates showed no trend. 4. The caterpillar peak date was closely correlated with budburst date, as were the passerine hatching dates with the peak caterpillar biomass date. In all these cases, however, the slopes were significantly less than unity, showing that the response of the consumers is weaker than that of their food. This was also true for the avian predator, for which hatching dates were not correlated with the peak availability of fledgling passerines. As a result, the match between food demand and availability deteriorated over time for both the passerines and the avian predators. 5. These results could equally well be explained by consumers' insufficient responses as a consequence of constraints in adapting to climate change, or by them trying to escape predation from a higher trophic level, or both. Selection on phenology could thus be both from matches of phenology with higher and lower levels, and quantifying these can shed new light on why some organisms do adjust their phenology to climate change, while

  4. Automation and adaptation: Nurses’ problem-solving behavior following the implementation of bar coded medication administration technology

    Science.gov (United States)

    Holden, Richard J.; Rivera-Rodriguez, A. Joy; Faye, Héléne; Scanlon, Matthew C.; Karsh, Ben-Tzion

    2012-01-01

    The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses’ operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals, were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA’s impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians’ work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign. PMID:24443642

  5. Coastal wetland adaptation to sea level rise: Quantifying potential for landward migration and coastal squeeze

    Science.gov (United States)

    Borchert, Sinéad M.; Osland, Michael J.; Enwright, Nicholas M.; Griffith, Kereen

    2018-01-01

    development, there is not space for wetlands to move and adapt to sea level rise. Future‐focused landscape conservation plans that incorporate the protection of wetland migration corridors can increase the adaptive capacity of these valuable ecosystems and simultaneously decrease the vulnerability of coastal human communities to the harmful effects of rising seas.

  6. Adaptive neural network controller for the molten steel level control of strip casting processes

    International Nuclear Information System (INIS)

    Chen, Hung Yi; Huang, Shiuh Jer

    2010-01-01

    The twin-roll strip casting process is a steel-strip production method which combines continuous casting and hot rolling processes. The production line from molten liquid steel to the final steel-strip is shortened and the production cost is reduced significantly as compared to conventional continuous casting. The quality of the strip casting process depends on many process parameters, such as the molten steel level in the pool, solidification position, and roll gap. Their relationships are complex and the strip casting process has the properties of nonlinear uncertainty and time-varying characteristics. It is difficult to establish an accurate process model for designing a model-based controller to monitor the strip quality. In this paper, a model-free adaptive neural network controller is developed to overcome this problem. The proposed control strategy is based on a neural network structure combined with a sliding-mode control scheme. An adaptive rule is employed to adjust the weights of the radial basis functions on-line by using the reaching condition of a specified sliding surface. This surface has the on-line learning ability to respond to the system's nonlinear and time-varying behaviors. Since this model-free controller has a simple control structure and a small number of control parameters, it is easy to implement. Simulation results, based on a semi-experimental system dynamic model and parameters, are presented to show the control performance of the proposed intelligent controller. In addition, the control performance is compared with that of a traditional PID controller
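
    A minimal sketch of the control idea (an RBF network whose weights are adapted from a sliding surface), applied to a crude first-order mold-level model; the plant, gains, and dimensions below are illustrative assumptions, not the paper's process model or parameters.

```python
import numpy as np

# Crude plant: mold level h, inflow u minus an unknown nonlinear outflow.
def plant_step(h, u, dt=0.01):
    outflow = 0.8 * np.sqrt(max(h, 0.0)) + 0.1 * np.sin(50 * h)   # unknown nonlinearity
    return h + dt * (u - outflow)

centers = np.linspace(-1.0, 1.0, 9)       # RBF centers over the error range
width = 0.25
weights = np.zeros_like(centers)

def rbf(x):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

h, h_ref, dt = 0.2, 0.5, 0.01
lam, eta, k = 5.0, 20.0, 2.0              # sliding-surface slope, learning rate, switching gain
e_prev = h_ref - h

for step in range(2000):
    e = h_ref - h
    s = lam * e + (e - e_prev) / dt        # sliding surface from error and its rate
    phi = rbf(e)
    weights += dt * eta * s * phi          # adaptive law driven by the reaching condition
    u = weights @ phi + k * np.sign(s)     # RBF compensation plus switching term
    h = plant_step(h, u, dt)
    e_prev = e

print("final level:", round(h, 3), "target:", h_ref)
```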

  7. Millimetre Level Accuracy GNSS Positioning with the Blind Adaptive Beamforming Method in Interference Environments

    Directory of Open Access Journals (Sweden)

    Saeed Daneshmand

    2016-10-01

    Full Text Available The use of antenna arrays in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its superior capability to suppress both narrowband and wideband interference. However, the phase distortions resulting from array processing may limit the applicability of these methods for high precision applications using carrier phase based positioning techniques. This paper studies the phase distortions occurring with the adaptive blind beamforming method in which satellite angle of arrival (AoA) information is not employed in the optimization problem. To cater to non-stationary interference scenarios, the array weights of the adaptive beamformer are continuously updated. The effects of these continuous updates on the tracking parameters of a GNSS receiver are analyzed. The second part of this paper focuses on reducing the phase distortions during the blind beamforming process in order to allow the receiver to perform carrier phase based positioning by applying a constraint on the structure of the array configuration and by compensating the array uncertainties. Limitations of the previous methods are studied and a new method is proposed that keeps the simplicity of the blind beamformer structure and, at the same time, reduces tracking degradations while achieving millimetre level positioning accuracy in interference environments. To verify the applicability of the proposed method and analyze the degradations, array signals corresponding to the GPS L1 band are generated using a combination of hardware and software simulators. Furthermore, the amount of degradation and performance of the proposed method under different conditions are evaluated based on Monte Carlo simulations.
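
    For context, the simplest blind (AoA-free) beamformer is power inversion: minimize output power subject to a fixed constraint on a reference element, w = R^{-1} e1 / (e1^H R^{-1} e1). The sketch below applies it to simulated snapshots with one strong interferer; it is only a baseline illustration and ignores the phase-distortion compensation that is the paper's actual contribution.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 2000                       # antenna elements, snapshots

def steering(theta_deg):
    """Uniform linear array response, half-wavelength spacing."""
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# GNSS signals sit far below the noise floor; model a strong interferer at 40 degrees plus noise.
a_int = steering(40.0)
interferer = 100.0 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
x = np.outer(a_int, interferer) + noise

R = x @ x.conj().T / N               # sample covariance matrix
e1 = np.zeros(M); e1[0] = 1.0
w = np.linalg.solve(R, e1)
w /= e1 @ w                          # power-inversion weights: w = R^-1 e1 / (e1^H R^-1 e1)

y = w.conj() @ x                     # beamformer output
supp_db = 10 * np.log10(np.mean(np.abs(x[0]) ** 2) / np.mean(np.abs(y) ** 2))
print("interference suppression (dB):", round(float(supp_db), 1))
```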

  8. PSAPACK 4.2. A code for probabilistic safety assessment level 1. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-03-01

    Only limited use has been made until now of the large amount of information contained in probabilistic safety assessments (PSAs). This is mainly due to the complexity of the PSA reports and the difficulties in obtaining intermediate results and in performing updates and recalculations. Moreover, PSA software was developed for mainframe computers, and the files of information such as fault trees and accident sequences were intended for the use of the analysts carrying out PSA studies or other skilled PSA practitioners. The increasing power and availability of personal computers (PCs) and developments in recent years in both hardware and software have made it possible to develop PSA software for use in PCs. Furthermore, the operational characteristics of PCs make them attractive not only for performing PSAs but also for updating the results and in using them in day-to-day applications. The IAEA has therefore developed in co-operation with its Member States, a software package (PSAPACK) for PCs for use in performing a Level 1 PSA and for easy interrogation of the results. Figs.

  9. PSAPACK 4.2. A code for probabilistic safety assessment level 1. User's manual

    International Nuclear Information System (INIS)

    1995-01-01

    Only limited use has been made until now of the large amount of information contained in probabilistic safety assessments (PSAs). This is mainly due to the complexity of the PSA reports and the difficulties in obtaining intermediate results and in performing updates and recalculations. Moreover, PSA software was developed for mainframe computers, and the files of information such as fault trees and accident sequences were intended for the use of the analysts carrying out PSA studies or other skilled PSA practitioners. The increasing power and availability of personal computers (PCs) and developments in recent years in both hardware and software have made it possible to develop PSA software for use in PCs. Furthermore, the operational characteristics of PCs make them attractive not only for performing PSAs but also for updating the results and in using them in day-to-day applications. The IAEA has therefore developed in co-operation with its Member States, a software package (PSAPACK) for PCs for use in performing a Level 1 PSA and for easy interrogation of the results. Figs

  10. Phylum-Level Conservation of Regulatory Information in Nematodes despite Extensive Non-coding Sequence Divergence

    Science.gov (United States)

    Gordon, Kacy L.; Arthur, Robert K.; Ruvinsky, Ilya

    2015-01-01

    Gene regulatory information guides development and shapes the course of evolution. To test conservation of gene regulation within the phylum Nematoda, we compared the functions of putative cis-regulatory sequences of four sets of orthologs (unc-47, unc-25, mec-3 and elt-2) from distantly-related nematode species. These species, Caenorhabditis elegans, its congeneric C. briggsae, and three parasitic species Meloidogyne hapla, Brugia malayi, and Trichinella spiralis, represent four of the five major clades in the phylum Nematoda. Despite the great phylogenetic distances sampled and the extensive sequence divergence of nematode genomes, all but one of the regulatory elements we tested are able to drive at least a subset of the expected gene expression patterns. We show that functionally conserved cis-regulatory elements have no more extended sequence similarity to their C. elegans orthologs than would be expected by chance, but they do harbor motifs that are important for proper expression of the C. elegans genes. These motifs are too short to be distinguished from the background level of sequence similarity, and while identical in sequence they are not conserved in orientation or position. Functional tests reveal that some of these motifs contribute to proper expression. Our results suggest that conserved regulatory circuitry can persist despite considerable turnover within cis elements. PMID:26020930

  11. Phylum-Level Conservation of Regulatory Information in Nematodes despite Extensive Non-coding Sequence Divergence.

    Directory of Open Access Journals (Sweden)

    Kacy L Gordon

    2015-05-01

    Full Text Available Gene regulatory information guides development and shapes the course of evolution. To test conservation of gene regulation within the phylum Nematoda, we compared the functions of putative cis-regulatory sequences of four sets of orthologs (unc-47, unc-25, mec-3 and elt-2) from distantly-related nematode species. These species, Caenorhabditis elegans, its congeneric C. briggsae, and three parasitic species Meloidogyne hapla, Brugia malayi, and Trichinella spiralis, represent four of the five major clades in the phylum Nematoda. Despite the great phylogenetic distances sampled and the extensive sequence divergence of nematode genomes, all but one of the regulatory elements we tested are able to drive at least a subset of the expected gene expression patterns. We show that functionally conserved cis-regulatory elements have no more extended sequence similarity to their C. elegans orthologs than would be expected by chance, but they do harbor motifs that are important for proper expression of the C. elegans genes. These motifs are too short to be distinguished from the background level of sequence similarity, and while identical in sequence they are not conserved in orientation or position. Functional tests reveal that some of these motifs contribute to proper expression. Our results suggest that conserved regulatory circuitry can persist despite considerable turnover within cis elements.

  12. LDPC coding for QKD at higher photon flux levels based on spatial entanglement of twin beams in PDC

    International Nuclear Information System (INIS)

    Daneshgaran, Fred; Mondin, Marina; Bari, Inam

    2014-01-01

    Twin beams generated by Parametric Down Conversion (PDC) exhibit quantum correlations that have been effectively used as a tool for many applications including calibration of single photon detectors. By now, detection of multi-mode spatial correlations is a mature field and, in principle, only depends on the transmission and detection efficiency of the devices and the channel. In [2, 4, 5], the authors utilized their know-how on almost perfect selection of modes of pairwise correlated entangled beams and the optimization of the noise reduction to below the shot-noise level, for absolute calibration of Charge Coupled Device (CCD) cameras. The same basic principle is currently being considered by the same authors for possible use in Quantum Key Distribution (QKD) [3, 1]. The main advantage in such an approach would be the ability to work with much higher photon fluxes than that of a single photon regime that is theoretically required for discrete variable QKD applications (in practice, very weak laser pulses with mean photon count below one are used). The natural setup of quantization of the CCD detection area and subsequent measurement of the correlation statistic needed to detect the presence of the eavesdropper Eve leads to a QKD channel model that is a Discrete Memoryless Channel (DMC) with a number of inputs and outputs that can be more than two (i.e., the channel is a multi-level DMC). This paper investigates the use of Low Density Parity Check (LDPC) codes for information reconciliation on the effective parallel channels associated with the multi-level DMC. The performance of such codes is shown to be close to the theoretical limits.
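
    A toy illustration of the role LDPC codes play in syndrome-based reconciliation, using a small random sparse parity-check matrix and a Gallager-style bit-flipping decoder instead of full belief propagation; the matrix, rate, and error probability are illustrative assumptions, not the paper's code design, and at this error rate the toy decoder usually (but not always) removes all disagreements.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 120, 60                                   # code length, number of parity checks

# Small random sparse parity-check matrix (roughly 3 ones per column).
H = np.zeros((m, n), dtype=np.uint8)
for col in range(n):
    H[rng.choice(m, size=3, replace=False), col] = 1

alice = rng.integers(0, 2, size=n, dtype=np.uint8)          # Alice's sifted key
errors = (rng.random(n) < 0.02).astype(np.uint8)            # channel/Eve-induced flips
bob = alice ^ errors

syndrome_a = H @ alice % 2                       # Alice publishes only this syndrome

# Bob flips the bits involved in the most unsatisfied checks until syndromes agree.
est = bob.copy()
for _ in range(50):
    mismatch = (H @ est % 2) ^ syndrome_a
    if not mismatch.any():
        break
    votes = H.T @ mismatch                        # unsatisfied-check count per bit
    est ^= (votes == votes.max()).astype(np.uint8)

print("residual disagreements:", int((est != alice).sum()))
```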

  13. BPLOM: BPM Level-Oriented Methodology for Incremental Business Process Modeling and Code Generation on Mobile Platforms

    Directory of Open Access Journals (Sweden)

    Jaime Solis Martines

    2013-06-01

    Full Text Available The requirements engineering phase is the departure point for the development process of any kind of computer application; it determines the functionality needed in the working scenario of the program. Although this is a crucial point in application development, as incorrect requirement definition leads to costly errors appearing in later stages of the development process, the involvement of application domain experts remains minor. In order to correct this scenario, business process modeling notations were introduced to favor business expert involvement in this phase, but notation complexity prevents this participation from reaching its ideal state. Hence, we promote the definition of a level-oriented business process methodology, which encourages the adaptation of the modeling notation to the modeling and technical knowledge shown by the expert. This approach reduces the complexity found by domain experts and enables them to model their processes completely with a level of technical detail directly proportional to their knowledge.

  14. A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems

    Directory of Open Access Journals (Sweden)

    M. Benyoucef

    2008-01-01

    Full Text Available We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses the spreading codes directly instead of the cross-correlation matrix and thus does not require the calculation of the cross-correlation matrix (which requires 2NK² floating point operations (flops), where N is the processing gain and K is the number of users), which significantly reduces the overall computational complexity. Thus it is suitable for long-code CDMA systems such as IS-95 and UMTS where the cross-correlation matrix is changing every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.
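
    The claim that block SOR converges to the decorrelator can be checked in a few lines: iterate blockwise on R y = r (with R the cross-correlation matrix and r the matched-filter outputs) and compare with the direct solution R^{-1} r. The sketch below works at the symbol level with a small random system purely to illustrate the iteration, not the chip-level long-code structure of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, group = 32, 8, 2                       # processing gain, users, block (group) size

S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)    # random spreading codes
bits = rng.choice([-1.0, 1.0], size=K)
r = S.T @ (S @ bits + 0.05 * rng.standard_normal(N))     # matched-filter outputs
R = S.T @ S                                               # cross-correlation matrix

decorrelator = np.linalg.solve(R, r)

# Block SOR iteration on R y = r, in groups of `group` users, relaxation factor omega.
omega, y = 1.3, np.zeros(K)
for sweep in range(30):
    for g in range(0, K, group):
        idx = slice(g, g + group)
        residual = r[idx] - R[idx, :] @ y
        y[idx] += omega * np.linalg.solve(R[idx, idx], residual)

print("max deviation from decorrelator:", float(np.abs(y - decorrelator).max()))
```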

  15. A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems

    Directory of Open Access Journals (Sweden)

    Benyoucef M

    2007-01-01

    Full Text Available We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses the spreading codes directly instead of the cross-correlation matrix and thus does not require the calculation of the cross-correlation matrix (which requires 2NK² floating point operations (flops), where N is the processing gain and K is the number of users), which significantly reduces the overall computational complexity. Thus it is suitable for long-code CDMA systems such as IS-95 and UMTS where the cross-correlation matrix is changing every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.

  16. Dengue virus genomic variation associated with mosquito adaptation defines the pattern of viral non-coding RNAs and fitness in human cells.

    Directory of Open Access Journals (Sweden)

    Claudia V Filomatori

    2017-03-01

    Full Text Available The Flavivirus genus includes a large number of medically relevant pathogens that cycle between humans and arthropods. This host alternation imposes a selective pressure on the viral population. Here, we found that dengue virus, the most important viral human pathogen transmitted by insects, evolved a mechanism to differentially regulate the production of viral non-coding RNAs in mosquitos and humans, with a significant impact on viral fitness in each host. Flavivirus infections accumulate non-coding RNAs derived from the viral 3'UTRs (known as sfRNAs), which are relevant in viral pathogenesis and immune evasion. We found that dengue virus host adaptation leads to the accumulation of different species of sfRNAs in vertebrate and invertebrate cells. This process does not depend on differences in the host machinery, but it was found to be dependent on the selection of specific mutations in the viral 3'UTR. Dissecting the viral population and studying phenotypes of cloned variants, the molecular determinants for the switch in the sfRNA pattern during host change were mapped to a single RNA structure. Point mutations selected in mosquito cells were sufficient to change the pattern of sfRNAs, induce higher type I interferon responses and reduce viral fitness in human cells, explaining the rapid clearance of certain viral variants after host change. In addition, using epidemic and pre-epidemic Zika viruses, similar patterns of sfRNAs were observed in mosquito and human infected cells, but they were different from those observed during dengue virus infections, indicating that distinct selective pressures act on the 3'UTR of these closely related viruses. In summary, we present a novel mechanism by which dengue virus evolved an RNA structure that is under strong selective pressure in the two hosts, as a regulator of non-coding RNA accumulation and viral fitness. This work provides new ideas about the impact of host adaptation on the variability and evolution of

  17. Two-Level Adaptive Algebraic Multigrid for a Sequence of Problems with Slowly Varying Random Coefficients [Adaptive Algebraic Multigrid for Sequence of Problems with Slowly Varying Random Coefficients]

    Energy Technology Data Exchange (ETDEWEB)

    Kalchev, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ketelsen, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vassilevski, P. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-11-07

    Our paper proposes an adaptive strategy for reusing a previously constructed coarse space by algebraic multigrid to construct a two-level solver for a problem with nearby characteristics. Furthermore, a main target application is the solution of the linear problems that appear throughout a sequence of Markov chain Monte Carlo simulations of subsurface flow with uncertain permeability field. We demonstrate the efficacy of the method with extensive set of numerical experiments.
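
    A minimal two-level cycle has the same skeleton whatever the coarse space: pre-smooth, restrict the residual, solve the coarse problem, prolong the correction, post-smooth. The sketch below uses a 1-D Poisson-type matrix, weighted Jacobi smoothing, and a fixed piecewise-constant coarse space reused across a sequence of nearby problems; it is illustrative only, not the paper's adaptive AMG construction.

```python
import numpy as np

def poisson1d(n, k=1.0):
    """1-D Poisson-like SPD matrix with a coefficient k (the slowly varying parameter)."""
    A = np.zeros((n, n))
    np.fill_diagonal(A, 2.0 * k)
    np.fill_diagonal(A[1:], -k)        # subdiagonal
    np.fill_diagonal(A[:, 1:], -k)     # superdiagonal
    return A

def two_level_solve(A, b, P, iters=50, omega=2.0 / 3.0):
    """Two-level cycle: weighted-Jacobi smoothing plus coarse correction with P."""
    x = np.zeros_like(b)
    Dinv = 1.0 / np.diag(A)
    Ac = P.T @ A @ P                       # coarse operator built from the reused coarse space P
    for _ in range(iters):
        x += omega * Dinv * (b - A @ x)    # pre-smooth
        rc = P.T @ (b - A @ x)             # restrict residual
        x += P @ np.linalg.solve(Ac, rc)   # coarse correction
        x += omega * Dinv * (b - A @ x)    # post-smooth
    return x

n, nc = 64, 16
P = np.kron(np.eye(nc), np.ones((n // nc, 1)))   # piecewise-constant coarse space

# Reuse the same coarse space P for a sequence of nearby problems (k drifts slowly).
for k in (1.0, 1.05, 1.1):
    A = poisson1d(n, k)
    b = np.ones(n)
    x = two_level_solve(A, b, P)
    print(f"k={k:4.2f}  residual norm: {np.linalg.norm(b - A @ x):.2e}")
```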

  18. An Affine Combination of Adaptive Filters for Channels with Different Sparsity Levels

    Directory of Open Access Journals (Sweden)

    M. Butsenko

    2016-06-01

    Full Text Available In this paper we present an affine combination strategy for two adaptive filters. One filter is designed to handle sparse impulse responses and the other one performs better if impulse response is dispersive. Filter outputs are combined using an adaptive mixing parameter and the resulting output shows a better performance than each of the combining filters separately. We also demonstrate that affine combination results in faster convergence than a convex combination of two adaptive filters.
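
    A compact sketch of the idea: run one dispersive-oriented and one sparse-oriented LMS-type filter in parallel and combine their outputs with a mixing parameter that is itself adapted by gradient descent on the combined error (left unconstrained here, so the combination is affine rather than convex). The step sizes and the toy echo-path channel are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(5)
L, n_samples = 32, 4000
h = np.zeros(L); h[4] = 1.0; h[11] = -0.5          # a sparse echo-path-like channel

x = rng.standard_normal(n_samples)
d = np.convolve(x, h)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

w_nlms = np.zeros(L)          # plain NLMS: better suited to dispersive responses
w_pnlms = np.zeros(L)         # simplified proportionate NLMS: better suited to sparse responses
lam, mu, mu_lam, eps = 0.5, 0.5, 0.05, 1e-6

for i in range(L, n_samples):
    u = x[i - L + 1:i + 1][::-1]                   # regressor (most recent sample first)
    y1, y2 = w_nlms @ u, w_pnlms @ u
    y = lam * y1 + (1 - lam) * y2                  # affine combination of the two outputs
    e = d[i] - y

    e1 = d[i] - y1                                 # component-wise errors for the updates
    e2 = d[i] - y2
    w_nlms += mu * e1 * u / (u @ u + eps)
    g = np.abs(w_pnlms) + 1e-3                     # proportionate step-size gains
    g /= g.sum()
    w_pnlms += mu * e2 * (g * u) / (u @ (g * u) + eps)

    lam += mu_lam * e * (y1 - y2)                  # adapt the mixing parameter (unconstrained, i.e. affine)

w_comb = lam * w_nlms + (1 - lam) * w_pnlms
print("mixing parameter:", round(float(lam), 2))
print("misalignment (dB):",
      round(10 * np.log10(np.sum((w_comb - h) ** 2) / np.sum(h ** 2)), 1))
```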

  19. [Urbanization mechanisms in bird species: population systems transformations or adaptations at the individual level?].

    Science.gov (United States)

    Fridman, V S; Eremkin, G S; Zakharova-Kubareva, N Iu

    2008-01-01

    The present research deals with urbanization of wild bird and mammal species. Forms and mechanisms of population steadiness in the urban landscape have been examined. The urbanization process turned out to be a directed change of the population system forming de novo in the urbolandscape leading to a sustainable organization peculiar for the particular environment. The population organization of different types in urbolandscape is found to provide its stability under conditions of directed and fast changes accompanied with instability and heterogenous structure of habitats. It is shown that the same type of population organization meets the corresponding demands among different species settling in the urban environment. Its features are "openness" and "flowage" of the groups, far order of settlement levels and other units of population system, constant movements of the individuals between the groups as a respond to the signals of urboenvironment significant changes. The "urban" variant of the population system organization turns out to be opposite to that of the same species in the non-urban habitats. After formation of the urban types by the species and successful developing of the town, the urban population becomes separated from the maternal local population and begins to exist independently in the urban landscape. The variety of adaptation aberrations in ecology, behavior, and mode of life of urban birds is the population system stability function in the urban landscape and is not a results of individual selection. It is shown that the urbanization process of the species goes firstly on the population level being the system structure transformation developed by the species towards the most stable state in the town (city) territory. Only after the appearance of stable urban population, the urban individuals show the rapid growth of different changes in ecology, behavior, mode of life that was traditionally described by naturalists as species adaptation to the

  20. Multi-level policies and adaptive social networks – a conceptual modeling study for maintaining a polycentric governance system

    Directory of Open Access Journals (Sweden)

    Jean-Denis Mathias

    2017-03-01

    Full Text Available Information and collaboration patterns embedded in social networks play key roles in multilevel and polycentric modes of governance. However, modeling the dynamics of such social networks in multilevel settings has been seldom addressed in the literature. Here we use an adaptive social network model to elaborate the interplay between a central and a local government in order to maintain a polycentric governance. More specifically, our analysis explores in what ways specific policy choices made by a central agent affect the features of an emerging social network composed of local organizations and local users. Using two types of stylized policies, adaptive co-management and adaptive one-level management, we focus on the benefits of multi-level adaptive cooperation for network management. Our analysis uses viability theory to explore and to quantify the ability of these policies to achieve specific network properties. Viability theory gives the family of policies that enables maintaining the polycentric governance unlike optimal control that gives a unique blueprint. We found that the viability of the policies can change dramatically depending on the goals and features of the social network. For some social networks, we also found a very large difference between the viability of the adaptive one-level management and adaptive co-management policies. However, results also show that adaptive co-management doesn’t always provide benefits. Hence, we argue that applying viability theory to governance networks can help policy design by analyzing the trade-off between the costs of adaptive co-management and the benefits associated with its ability to maintain desirable social network properties in a polycentric governance framework.

  1. Adaptation of penelope Monte Carlo code system to the absorbed dose metrology: characterization of high energy photon beams and calculations of reference dosimeter correction factors; Adaptation du code Monte Carlo penelope pour la metrologie de la dose absorbee: caracterisation des faisceaux de photons X de haute energie et calcul de facteurs de correction de dosimetres de reference

    Energy Technology Data Exchange (ETDEWEB)

    Mazurier, J

    1999-05-28

    This thesis was performed in the framework of setting up the national reference for absorbed dose in water for the high energy photon beams provided by the SATURNE-43 medical accelerator of the BNM-LPRI (acronym for National Bureau of Metrology and Primary standard laboratory of ionising radiation). The aim of this work was to develop and validate different user codes, based on the PENELOPE Monte Carlo code system, to determine the photon beam characteristics and to calculate the correction factors of reference dosimeters such as Fricke dosimeters and the graphite calorimeter. In the first step, the developed user codes made it possible to study the influence of the different components constituting the irradiation head. Variance reduction techniques were used to reduce the calculation time. The phase space was calculated for 6, 12 and 25 MV at the output surface of the accelerator head, and then used to calculate energy spectra and dose distributions in the reference water phantom. The results obtained were compared with experimental measurements. The second step was devoted to developing a user code for calculating the correction factors associated with both the BNM-LPRI graphite and Fricke dosimeters, using a correlated sampling method starting from the energy spectra obtained in the first step. The calculated correction factors were then compared with experimental results and with calculated results obtained with the EGS4 Monte Carlo code system. The good agreement between experimental and calculated results validates the simulations performed with the PENELOPE code system. (author)

  2. Using Direct Policy Search to Identify Robust Strategies in Adapting to Uncertain Sea Level Rise and Storm Surge

    Science.gov (United States)

    Garner, G. G.; Keller, K.

    2017-12-01

    Sea-level rise poses considerable risks to coastal communities, ecosystems, and infrastructure. Decision makers are faced with deeply uncertain sea-level projections when designing a strategy for coastal adaptation. The traditional methods have provided tremendous insight into this decision problem, but are often silent on tradeoffs as well as the effects of tail-area events and of potential future learning. Here we reformulate a simple sea-level rise adaptation model to address these concerns. We show that Direct Policy Search yields improved solution quality, with respect to Pareto-dominance in the objectives, over the traditional approach under uncertain sea-level rise projections and storm surge. Additionally, the new formulation produces high quality solutions with lower computational demand than the traditional approach. Our results illustrate the utility of multi-objective adaptive formulations for the example of coastal adaptation and the value of information provided by observations, and point to wider-ranging applications in climate change adaptation decision problems.
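
    A schematic of direct policy search in this setting, assuming a made-up feedback rule (raise the protection height by a fixed step whenever the observed sea level comes within a trigger distance of the crest) whose two parameters are tuned by simulation against cost and reliability objectives under uncertain sea-level rise; the dynamics and numbers are placeholders, not the study's model.

      import random

      def policy(params, freeboard_deficit):
          """Feedback rule: heighten by 'step' when the sea is within 'trigger' of the crest."""
          trigger, step = params
          return step if freeboard_deficit > trigger else 0.0

      def evaluate(params, n_scenarios=200, horizon=100, seed=0):
          """Return (mean heightening cost, mean number of flood years) over uncertain scenarios."""
          rng = random.Random(seed)
          costs = floods = 0.0
          for _ in range(n_scenarios):
              rate = rng.uniform(0.002, 0.015)        # uncertain sea-level rise rate (m/yr)
              sea, dike = 0.0, 0.5
              for _ in range(horizon):
                  sea += rate + rng.gauss(0.0, 0.02)  # rise plus storm-surge noise
                  raise_by = policy(params, sea - dike)
                  dike += raise_by
                  costs += 10.0 * raise_by
                  floods += 1.0 if sea > dike else 0.0
          return costs / n_scenarios, floods / n_scenarios

      # Crude scalarized random search; a real study would use a multi-objective
      # evolutionary algorithm and keep the whole Pareto-approximate set.
      candidates = [(random.uniform(-0.5, 0.5), random.uniform(0.0, 0.5)) for _ in range(300)]
      best = min(candidates, key=lambda p: sum(evaluate(p)))
      print("best (trigger, step):", best, "objectives:", evaluate(best))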

  3. Exploring students’ adaptive reasoning skills and van Hiele levels of geometric thinking: a case study in geometry

    Science.gov (United States)

    Rizki, H. T. N.; Frentika, D.; Wijaya, A.

    2018-03-01

    This study aims to explore junior high school students’ adaptive reasoning and Van Hiele levels of geometric thinking. The present study was a quasi-experiment with a non-equivalent control group design. The participants were 34 seventh graders and 35 eighth graders in the experiment classes and 34 seventh graders and 34 eighth graders in the control classes. The students in the experiment classes learned geometry using the Knisley mathematical learning model. The data were analyzed quantitatively by using inferential statistics. The results of the data analysis show an improvement of adaptive reasoning skills in both grade seven and grade eight. An improvement was also found for the Van Hiele level of geometric thinking. These results indicate the positive impact of the Knisley learning model on students’ adaptive reasoning skills and Van Hiele level of geometric thinking.

  4. Social Anxiety and Social Adaptation among Adolescents at Three Age Levels

    Science.gov (United States)

    Peleg, Ora

    2012-01-01

    The aim of the study was to examine the relationship between social anxiety and social adaptation among adolescents. This is the first study to research these parameters among three age groups: early, middle and late adolescence. On the whole, a negative relation was found between social anxiety and social adaptation. Specifically, for adolescents…

  5. Adapting the complexity level of a serious game to the proficiency of players

    NARCIS (Netherlands)

    van Oostendorp, H.; van der Spek, E.D.; Linssen, J.M.

    2014-01-01

    As games are continuously assessing the player, this assessment can be used to adapt the complexity of a game to the proficiency of the player in real time. We performed an experiment to examine the role of dynamic adaptation. In one condition, participants played a version of our serious game for

  6. Adapting the Complexity Level of a Serious Game to the Proficiency of Players

    NARCIS (Netherlands)

    van Oostendorp, Herre; van der Spek, Erik D; Linssen, Johannes Maria

    2014-01-01

    As games are continuously assessing the player, this assessment can be used to adapt the complexity of a game to the proficiency of the player in real time. We performed an experiment to examine the role of dynamic adaptation. In one condition, participants played a version of our serious game for

  7. Retinal adaptation to changing glycemic levels in a rat model of type 2 diabetes

    DEFF Research Database (Denmark)

    Johnson, Leif E; Larsen, Michael; Perez, Maria-Thereza

    2013-01-01

    PURPOSE: Glucose concentrations are elevated in retinal cells in undiagnosed and in undertreated diabetes. Studies of diabetic patients suggest that retinal function adapts, to some extent, to this increased supply of glucose. The aim of the present study was to examine such adaptation in a model...

  8. Adaptation to climate change at local level in Europe: An overview

    NARCIS (Netherlands)

    Aguiar, F.C.; Bentz, J.; Silva, J.M.N.; Fonseca, A.L.; Swart, R.J.; Santos, F.D.; Penha-Lopes, Gil

    2018-01-01

    Europe’s climate change vulnerability pushes for initiatives such as the European Adaptation Strategy and the associated Covenant of Mayors for Climate and Energy. What are the triggers and barriers, for which sectors and for which risks and how is adaptation funded? This paper examines 147 Local

  9. Resilience of Infrastructure Systems to Sea-Level Rise in Coastal Areas: Impacts, Adaptation Measures, and Implementation Challenges

    Directory of Open Access Journals (Sweden)

    Beatriz Azevedo de Almeida

    2016-11-01

    Full Text Available Expansive areas of low elevation in many densely populated coastal regions are at elevated risk of storm surges and flooding due to torrential precipitation as a result of sea level rise. These phenomena could have catastrophic impacts on coastal communities and result in the destruction of critical infrastructure, disruption of economic activities and salt water contamination of the water supply. The objective of the study presented in this paper was to identify various impacts of sea level rise on civil infrastructures in coastal areas and examine the adaptation measures suggested in the existing literature. To this end, a systematic review of the existing literature was conducted in order to identify a repository of studies addressing sea level rise impacts and adaptation measures in the context of infrastructure systems. The study focused on three infrastructure sectors: water and wastewater, energy, and road transportation. The collected information was then analyzed in order to identify different categories of sea level rise impacts and corresponding adaptation measures. The findings of the study are threefold: (1) the major categories of sea level rise impacts on different infrastructure systems; (2) measures for protection, accommodation, and retreat in response to sea level rise impacts; and (3) challenges related to implementing adaptation measures.

  10. Enhanced attention amplifies face adaptation.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby

    2011-08-15

    Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Nitrogen Metabolism in Adaptation of Photosynthesis to Water Stress in Rice Grown under Different Nitrogen Levels

    Directory of Open Access Journals (Sweden)

    Chu Zhong

    2017-06-01

    Full Text Available To investigate the role of nitrogen (N) metabolism in the adaptation of photosynthesis to water stress in rice, a hydroponic experiment supplying low N (0.72 mM), moderate N (2.86 mM), and high N (7.15 mM), followed by 150 g⋅L-1 PEG-6000 induced water stress, was conducted in a rainout shelter. Water stress induced stomatal limitation to photosynthesis at low N, but no significant effect was observed at moderate and high N. Non-photochemical quenching was higher at moderate and high N. In contrast, relative excessive energy at the PSII level (EXC) declined with increasing N level. Malondialdehyde and hydrogen peroxide (H2O2) contents paralleled EXC. Water stress decreased catalase and ascorbate peroxidase activities at low N, resulting in increased H2O2 content and more severe membrane lipid peroxidation, whereas the activities of antioxidative enzymes were increased at high N. In accordance with photosynthetic rate and antioxidative enzymes, water stress decreased the activities of key enzymes involved in N metabolism, such as glutamate synthase and glutamate dehydrogenase, and of the photorespiratory key enzyme glycolate oxidase at low N. Concurrently, water stress increased nitrate content significantly at low N, but decreased nitrate content at moderate and high N. Contrary to nitrate, water stress increased proline content at moderate and high N. Our results suggest that N metabolism is associated with the tolerance of photosynthesis to water stress in rice via effects on CO2 diffusion, antioxidant capacity, and osmotic adjustment.

  12. The effects of country-level population policy for enhancing adaptation to climate change

    Science.gov (United States)

    Gunasekara, N. K.; Kazama, S.; Yamazaki, D.; Oki, T.

    2013-11-01

    The effectiveness of population policy in reducing the combined impacts of population change and climate change on water resources is explored. One no-policy scenario and two scenarios with population policy assumptions are employed in combination with water availability under the SRES scenarios A1b, B1 and A2 for the impact analysis. The population data used are from the World Bank. The river discharges per grid of horizontal resolution 0.5° are obtained from the Total Runoff Integrating Pathways (TRIP) of the University of Tokyo, Japan. Unlike the population scenarios utilized in the SRES emission scenarios and the newest representative concentration pathways, the scenarios employed in this research are based, even after 2050, on country-level rather than regional-level growth assumptions. Our analysis implies that the heterogeneous pattern of population changes across the world is the dominant driver of water stress, irrespective of future greenhouse gas emissions, with highest impacts occurring in the already water-stressed low latitudes. In 2100, Africa, Middle East and parts of Asia are under extreme water stress under all scenarios. The sensitivity analysis reveals that a small reduction in populations over the region could relieve a large number of people from high water stress, while a further increase in population from the assumed levels (SC1) might not increase the number of people under high water stress considerably. Most of the population increase towards 2100 occurs in the already water-stressed lower latitudes. Therefore, population reduction policies are recommended for this region as a method of adaptation to the future water stress conditions. Population reduction policies will facilitate more control over their future development pathways, even if these countries were not able to contribute significantly to greenhouse gas (GHG) emission cuts due to economic constraints. However, for the European region, the population living in water

  13. Information preserving coding for multispectral data

    Science.gov (United States)

    Duan, J. R.; Wintz, P. A.

    1973-01-01

    A general formulation of the data compression system is presented. A method of instantaneous expansion of quantization levels by reserving two codewords in the codebook to perform a folding over in quantization is implemented for error free coding of data with incomplete knowledge of the probability density function. Results for simple DPCM with folding and an adaptive transform coding technique followed by a DPCM technique are compared using ERTS-1 data.
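
    The fold-over idea can be sketched for error-free DPCM coding of integer samples: a small codebook covers prediction errors in a fixed range, and two reserved codewords extend the range on the fly whenever an error falls outside it. This illustrates the general principle only, not the coder of the paper.

      FOLD_UP, FOLD_DOWN = "F+", "F-"        # the two reserved codewords
      M = 7                                   # inner codewords cover errors in [-M, +M]
      SPAN = 2 * M + 1

      def encode(samples):
          symbols, pred = [], 0
          for x in samples:
              e = x - pred
              while e > M:                    # fold the error back into range, signalling
                  symbols.append(FOLD_UP)     # each fold with a reserved codeword
                  e -= SPAN
              while e < -M:
                  symbols.append(FOLD_DOWN)
                  e += SPAN
              symbols.append(e)               # a real coder would entropy-code this index
              pred = x                        # previous-sample prediction
          return symbols

      def decode(symbols):
          samples, pred, folds = [], 0, 0
          for s in symbols:
              if s == FOLD_UP:
                  folds += 1
              elif s == FOLD_DOWN:
                  folds -= 1
              else:
                  x = pred + s + folds * SPAN
                  samples.append(x)
                  pred, folds = x, 0
          return samples

      data = [10, 12, 40, 39, 5, 6]
      assert decode(encode(data)) == data     # lossless round trip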

  14. Reservoir characterisation by a binary level set method and adaptive multiscale estimation

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Lars Kristian

    2006-01-15

    The main focus of this work is on estimation of the absolute permeability as the solution of an inverse problem. We have considered both a single-phase and a two-phase flow model. Two novel approaches have been introduced and tested numerically for solving the inverse problems. The first approach is a multi-scale zonation technique, which is treated in Paper A. The purpose of the work in this paper is to find a coarse-scale solution based on production data from wells. In the suggested approach, the robustness of an already developed method, the adaptive multi-scale estimation (AME), has been improved by utilising information from several candidate solutions generated by a stochastic optimizer. The new approach also suggests a way of combining a stochastic and a gradient search method, which in general is a problematic issue. The second approach is a piecewise constant level set approach and is applied in Papers B, C, D and E. Paper B considers the stationary single-phase problem, while Papers C, D and E use a two-phase flow model. In the two-phase flow problem we have utilised information both from production data in wells and from spatially distributed data gathered from seismic surveys. Due to the higher content of information provided by the spatially distributed data, we search for solutions on a slightly finer scale than one typically does with only production data included. The applied level set method is suitable for reconstruction of fields with a supposedly known facies-type of solution. That is, the solution should be close to piecewise constant. This information is utilised through a strong restriction of the number of constant levels in the estimate. On the other hand, the flexibility in the geometries of the zones is much larger for this method than in a typical zonation approach, for example the multi-scale approach applied in Paper A. In all these papers, the numerical studies are done on synthetic data sets. An advantage of synthetic data studies is that the true

  15. Evaluating geographic imputation approaches for zip code level data: an application to a study of pediatric diabetes

    Directory of Open Access Journals (Sweden)

    Puett Robin C

    2009-10-01

    Full Text Available Background There is increasing interest in the study of place effects on health, facilitated in part by geographic information systems. Incomplete or missing address information reduces geocoding success. Several geographic imputation methods have been suggested to overcome this limitation. Accuracy evaluation of these methods can be focused at the level of individuals and at higher group levels (e.g., spatial distribution). Methods We evaluated the accuracy of eight geo-imputation methods for address allocation from ZIP codes to census tracts at the individual and group level. The spatial apportioning approaches underlying the imputation methods included four fixed (deterministic) and four random (stochastic) allocation methods using land area, total population, population under age 20, and race/ethnicity as weighting factors. Data included more than 2,000 geocoded cases of diabetes mellitus among youth aged 0-19 in four U.S. regions. The imputed distribution of cases across tracts was compared to the true distribution using a chi-squared statistic. Results At the individual level, population-weighted (total or under age 20) fixed allocation showed the greatest level of accuracy, with correct census tract assignments averaging 30.01% across all regions, followed by the race/ethnicity-weighted random method (23.83%). The true distribution of cases across census tracts was that 58.2% of tracts exhibited no cases, 26.2% had one case, 9.5% had two cases, and less than 3% had three or more. This distribution was best captured by random allocation methods, with no significant differences (p-value > 0.90). However, significant differences in distributions based on fixed allocation methods were found. Conclusion Fixed imputation methods seemed to yield the greatest accuracy at the individual level, suggesting use for studies on area-level environmental exposures. Fixed methods result in artificial clusters in single census tracts. For studies
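
    The difference between fixed and random allocation can be sketched as follows: each ZIP code maps to a set of candidate census tracts with weights (for example tract population), and a case is assigned either deterministically to the highest-weight tract or stochastically in proportion to the weights. The tracts and weights below are hypothetical.

      import random

      # Hypothetical weights: tract -> population for the tracts overlapping one ZIP code
      tract_weights = {"tract_A": 5200, "tract_B": 1300, "tract_C": 800}

      def fixed_allocation(weights):
          """Deterministic: always assign to the tract with the largest weight."""
          return max(weights, key=weights.get)

      def random_allocation(weights, rng):
          """Stochastic: sample a tract with probability proportional to its weight."""
          tracts = list(weights)
          return rng.choices(tracts, weights=[weights[t] for t in tracts], k=1)[0]

      rng = random.Random(42)
      fixed = [fixed_allocation(tract_weights) for _ in range(100)]
      rand = [random_allocation(tract_weights, rng) for _ in range(100)]
      # Fixed allocation piles every case into one tract (an artificial cluster), while
      # random allocation spreads cases roughly in proportion to the weights.
      print({t: rand.count(t) for t in tract_weights})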

  16. Cytogenetic adaptive response of mouse bone marrow cells to low level HTO

    International Nuclear Information System (INIS)

    Chen Deqing; Zhang Zhaoyang; Zhou Xiangyan

    1993-01-01

    Mice were abdominally injected with an adaptive dose of 3.7 × 10² to 3.7 × 10⁵ Bq/gbw HTO, and then exposed to a challenge dose of 1.5 Gy of ⁶⁰Co γ-rays. In bone marrow cells that received both the adaptive and challenge doses, the chromatid breaks are lower than expected on the basis of additivity of the effects of the individual treatments. The adaptive response induced with 3.7 × 10³ Bq/gbw HTO is the most remarkable, but at 3.7 × 10⁵ Bq/gbw the adaptive response seems to disappear. The adaptive response can be observed by exposure to 1.5 Gy γ-rays from 1 to 5 days after injection of 3.7 × 10³ Bq/gbw HTO; the effect is most pronounced at the 5th day, when chromatid breaks are reduced to 50% of the expected value, but by the 7th day they equal the expected value. The frequency of chromatid breaks is gradually reduced with time after the challenge dose; the maximum index of adaptive response is 0.50 and appears at 24 hr after the challenge dose

  17. Blood oxygenation level dependent signal and neuronal adaptation to optogenetic and sensory stimulation in somatosensory cortex in awake animals.

    Science.gov (United States)

    Aksenov, Daniil P; Li, Limin; Miller, Michael J; Wyrwicz, Alice M

    2016-11-01

    The adaptation of neuronal responses to stimulation, in which a peak transient response is followed by a sustained plateau, has been well studied. The blood oxygenation level dependent (BOLD) functional magnetic resonance imaging (fMRI) signal has also been shown to exhibit adaptation on a longer time scale. However, some regions such as the visual and auditory cortices exhibit significant BOLD adaptation, whereas others, such as the whisker barrel cortex, may not adapt. In the sensory cortex a combination of thalamic inputs and intracortical activity drives hemodynamic changes, although the relative contributions of these components are not entirely understood. The aim of this study is to assess the role of thalamic inputs vs. intracortical processing in shaping BOLD adaptation during stimulation in the somatosensory cortex. Using simultaneous fMRI and electrophysiology in awake rabbits, we measured BOLD, local field potentials (LFPs), and single- and multi-unit activity in the cortex during whisker and optogenetic stimulation. This design allowed us to compare BOLD and hemodynamic responses during activation of the normal thalamocortical sensory pathway (i.e., both inputs and intracortical activity) vs. the direct optical activation of intracortical circuitry alone. Our findings show that whereas LFP and multi-unit (MUA) responses adapted, neither optogenetic nor sensory stimulation produced significant BOLD adaptation. We observed for both paradigms a variety of excitatory and inhibitory single unit responses. We conclude that sensory feed-forward thalamic inputs are not primarily responsible for shaping BOLD adaptation to stimuli, but the single-unit results point to a role in this behaviour for specific excitatory and inhibitory neuronal sub-populations, which may not correlate with aggregate neuronal activity. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Interest Level in 2-Year-Olds with Autism Spectrum Disorder Predicts Rate of Verbal, Nonverbal, and Adaptive Skill Acquisition

    Science.gov (United States)

    Klintwall, Lars; Macari, Suzanne; Eikeseth, Svein; Chawarska, Katarzyna

    2015-01-01

    Recent studies have suggested that skill acquisition rates for children with autism spectrum disorders receiving early interventions can be predicted by child motivation. We examined whether level of interest during an Autism Diagnostic Observation Schedule assessment at 2 years predicts subsequent rates of verbal, nonverbal, and adaptive skill…

  19. Creation of nuclear power stations for export. Adapting a reference power station from the technical level of a national programme

    International Nuclear Information System (INIS)

    Marcaillou, J.; Haond, H.; Py, J.P.

    1977-01-01

    consideration are codes and norms, and the participation of local industry. In the various adaptation studies, a fundamental guiding principle is sought, whose goal is to modify the reference project as little as possible. The client will thus obtain the benefits of series production in the quality of the studies and in the manufacture of the materials and installations. (author)

  20. Examining the short term effects of emotion under an Adaptation Level Theory model of tinnitus perception.

    Science.gov (United States)

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2017-03-01

    Existing evidence suggests a strong relationship between tinnitus and emotion. The objective of this study was to examine the effects of short-term emotional changes along valence and arousal dimensions on tinnitus outcomes. Emotional stimuli were presented in two different modalities: auditory and visual. The authors hypothesized that (1) negative valence (unpleasant) stimuli and/or high arousal stimuli will lead to greater tinnitus loudness and annoyance than positive valence and/or low arousal stimuli, and (2) auditory emotional stimuli, which are in the same modality as the tinnitus, will exhibit a greater effect on tinnitus outcome measures than visual stimuli. Auditory and visual emotive stimuli were administered to 22 participants (12 females and 10 males) with chronic tinnitus, recruited via email invitations sent out to the University of Auckland Tinnitus Research Volunteer Database. Emotional stimuli used were taken from the International Affective Digital Sounds - Version 2 (IADS-2) and the International Affective Picture System (IAPS) (Bradley and Lang, 2007a, 2007b). The Emotion Regulation Questionnaire (Gross and John, 2003) was administered alongside subjective ratings of tinnitus loudness and annoyance, and psychoacoustic sensation level matches to external sounds. Males had significantly different emotional regulation scores than females. Negative valence emotional auditory stimuli led to higher tinnitus loudness ratings in males and females and higher annoyance ratings in males only; loudness matches of tinnitus remained unchanged. The visual stimuli did not have an effect on tinnitus ratings. The results are discussed relative to the Adaptation Level Theory Model of Tinnitus. The results indicate that the negative valence dimension of emotion is associated with increased tinnitus magnitude judgements, and gender effects may also be present, but only when the emotional stimulus is in the auditory modality. Sounds with emotional associations may be

  1. Interest level in 2-year-olds with autism spectrum disorder predicts rate of verbal, nonverbal, and adaptive skill acquisition

    OpenAIRE

    Klintwall, Lars; Macari, Suzanne; Eikeseth, Svein; Chawarska, Katarzyna

    2014-01-01

    Recent studies have suggested that skill acquisition rates for children with autism spectrum disorders receiving early interventions can be predicted by child motivation. We examined whether level of interest during an Autism Diagnostic Observation Schedule assessment at 2 years predicts subsequent rates of verbal, nonverbal, and adaptive skill acquisition to the age of 3 years. A total of 70 toddlers with autism spectrum disorder, mean age of 21.9 months, were scored using Interest Level Sco...

  2. Improved Transient Performance of a Fuzzy Modified Model Reference Adaptive Controller for an Interacting Coupled Tank System Using Real-Coded Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Asan Mohideen Khansadurai

    2014-01-01

    Full Text Available The main objective of the paper is to design a model reference adaptive controller (MRAC with improved transient performance. A modification to the standard direct MRAC called fuzzy modified MRAC (FMRAC is used in the paper. The FMRAC uses a proportional control based Mamdani-type fuzzy logic controller (MFLC to improve the transient performance of a direct MRAC. The paper proposes the application of real-coded genetic algorithm (RGA to tune the membership function parameters of the proposed FMRAC offline so that the transient performance of the FMRAC is improved further. In this study, a GA based modified MRAC (GAMMRAC, an FMRAC, and a GA based FMRAC (GAFMRAC are designed for a coupled tank setup in a hybrid tank process and their transient performances are compared. The results show that the proposed GAFMRAC gives a better transient performance than the GAMMRAC or the FMRAC. It is concluded that the proposed controller can be used to obtain very good transient performance for the control of nonlinear processes.
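
    A minimal real-coded GA of the kind used to tune the membership-function parameters offline: individuals are real-valued vectors recombined by arithmetic (blend) crossover and perturbed by Gaussian mutation. The fitness function is a placeholder for the closed-loop transient-performance index obtained by simulating the FMRAC-controlled coupled-tank model, which is not reproduced here.

      import random

      def fitness(params):
          """Placeholder: in the study this would simulate the controlled process and
          return a transient-performance index (e.g. an error integral) to minimize."""
          return sum((p - 0.5) ** 2 for p in params)

      def real_coded_ga(dim, bounds=(0.0, 1.0), pop_size=30, generations=100,
                        alpha=0.5, sigma=0.05, seed=0):
          rng = random.Random(seed)
          lo, hi = bounds
          pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
          for _ in range(generations):
              parents = sorted(pop, key=fitness)[: pop_size // 2]   # truncation selection
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = rng.sample(parents, 2)
                  child = []
                  for x, y in zip(a, b):
                      c = alpha * x + (1 - alpha) * y               # arithmetic (blend) crossover
                      c += rng.gauss(0.0, sigma)                    # Gaussian mutation
                      child.append(min(hi, max(lo, c)))
                  children.append(child)
              pop = parents + children
          return min(pop, key=fitness)

      best = real_coded_ga(dim=6)    # e.g. six membership-function parameters
      print("tuned parameters:", [round(p, 3) for p in best])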

  3. ertCPN: The adaptations of the coloured Petri-Net theory for real-time embedded system modeling and automatic code generation

    Directory of Open Access Journals (Sweden)

    Wattanapong Kurdthongmee

    2003-05-01

    Full Text Available A real-time system is a computer system that monitors or controls an external environment. The system must meet various timing and other constraints that are imposed on it by the real-time behaviour of the external world. One of the differences between real-time and conventional software is that a real-time program must be both logically and temporally correct. To successfully design and implement a real-time system, some analysis is typically done to assure that requirements or designs are consistent and that they satisfy certain desirable properties that may not be immediately obvious from the specification. Executable specifications, prototypes and simulation are particularly useful in real-time systems for debugging specifications. In this paper, we propose adaptations to the coloured Petri-net theory to ease the modeling, simulation and code generation process for an embedded, microcontroller-based, real-time system. The benefits of the proposed approach are demonstrated by use of our prototype software tool called ENVisAge (an Extended Coloured Petri-Net Based Visual Application Generator Tool).

  4. HIV-1 Adaptation to Antigen Processing Results in Population-Level Immune Evasion and Affects Subtype Diversification

    DEFF Research Database (Denmark)

    Tenzer, Stefan; Crawford, Hayley; Pymm, Phillip

    2014-01-01

    these regions encode epitopes presented by ~30 more common HLA variants. By combining epitope processing and computational analyses of the two HIV subtypes responsible for ~60% of worldwide infections, we identified a hitherto unrecognized adaptation to the antigen-processing machinery through substitutions...... of intrapatient adaptations, is predictable, facilitates viral subtype diversification, and increases global HIV diversity. Because low epitope abundance is associated with infrequent and weak T cell responses, this most likely results in both population-level immune evasion and inadequate responses in most...

  5. Determinants of farmers’ adaptation to climate change: A micro level analysis in Ghana

    Directory of Open Access Journals (Sweden)

    Francis Ndamani

    2016-06-01

    Full Text Available This study analyzed socio-economic factors that influence farmers’ adaptation to climate change in agriculture. Perceptions regarding long-term changes in climate variables and the rate of occurrence of weather extremes were also investigated. Additionally, farmers’ perceived barriers to the use of adaptation practices were identified and ranked. A total of 100 farm households were randomly selected from four communities in the Lawra district of Ghana, and data were collected through semi-structured questionnaires, focus group discussions and field observations. A logistic regression model and a weighted average index were used to analyze the data. The results showed that 87% of respondents perceived a decrease in rainfall amount, while 82% perceived an increase in temperature over the past 10 years. Results of the weighted average index indicate that dry spells and drought have a higher annual rate of occurrence than floods. Empirical results of the logistic regression model showed that education, household size, annual household income, access to information, credit and membership of farmer-based organizations are the most important factors that influence farmers’ adaptation to climate change. The main constraints on adaptation include unpredictability of weather, high farm input costs, and lack of access to timely weather information and water resources. The policy implication of this study is that governments should mainstream the barriers to, and determinants of, adaptation practices into climate change related projects and programs.
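
    The logistic regression step can be sketched as below for a binary "adapted / did not adapt" outcome and a few illustrative covariates; the synthetic data and coefficients are placeholders, not the survey results of the paper.

      import numpy as np

      def fit_logistic(X, y, lr=0.1, epochs=2000):
          """Plain gradient-descent fit of P(adapt = 1) = sigmoid(X @ w + b)."""
          n, d = X.shape
          w, b = np.zeros(d), 0.0
          for _ in range(epochs):
              p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
              w -= lr * (X.T @ (p - y)) / n
              b -= lr * np.mean(p - y)
          return w, b

      # Synthetic illustration: columns = years of education, household size, access to credit (0/1)
      rng = np.random.default_rng(0)
      X = np.column_stack([rng.normal(6, 3, 500), rng.integers(1, 10, 500), rng.integers(0, 2, 500)])
      true_w = np.array([0.3, -0.1, 1.0])
      y = (1.0 / (1.0 + np.exp(-(X @ true_w - 2.0))) > rng.random(500)).astype(float)
      w, b = fit_logistic((X - X.mean(0)) / X.std(0), y)   # standardize covariates before fitting
      print("standardized coefficients:", np.round(w, 2), "intercept:", round(b, 2))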

  6. Adaptive local refinement and multi-level methods for simulating multiphasic flows

    International Nuclear Information System (INIS)

    Minjeaud, Sebastian

    2010-01-01

    This thesis describes some numerical and mathematical aspects of incompressible multiphase flow simulations with a diffuse-interface Cahn-Hilliard / Navier-Stokes model (interfaces have a small but positive thickness). The space discretization is performed with a Galerkin formulation and the finite element method. The presence of different scales in the system (interfaces have a very small thickness compared to the characteristic lengths of the domain) suggests the use of a local adaptive refinement method. The algorithm that is introduced handles implicitly the non-conformities of the generated meshes so as to produce conforming finite element approximation spaces. It consists in refining basis functions instead of cells. The refinement of a basis function is made possible by the conceptual existence of a nested sequence of uniformly refined grids, from which 'parent-child' relationships are deduced, linking the basis functions of two consecutive refinement levels. Moreover, it is shown how this method can be exploited to build multigrid preconditioners. From a composite finite element approximation space, it is indeed possible to rebuild, by 'coarsening', a sequence of auxiliary nested spaces which fits into the abstract multigrid framework. Concerning the time discretization, the study begins with the Cahn-Hilliard system. A semi-implicit scheme is proposed to remedy the convergence failures of the Newton method used to solve this (non-linear) system. It guarantees the decrease of the discrete free energy, ensuring the stability of the scheme. The existence and convergence of discrete solutions towards the weak solution of the system are shown. The study continues by providing an unconditionally stable time discretization of the complete Cahn-Hilliard / Navier-Stokes model. An important point is that this discretization does not strongly couple the Cahn-Hilliard and Navier-Stokes systems, allowing the two systems to be solved independently

  7. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary consisted of 11 files, one for the organ code and the others for the pathology codes. The organ code was obtained by typing the organ name or the code number itself, with the upper- and lower-level codes of the selected entry displayed simultaneously on the screen. According to the first digit of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of data fields. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code could be achieved by the same program, and the program could be incorporated into other data processing programs. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, this program can be used for automation of routine work in the department of radiology

  8. Validation and application of HELP code used for design and review of cover of low and intermediate level radioactive waste disposal in near-surface facilities

    International Nuclear Information System (INIS)

    Fan Zhiwen; Gu Cunli; Zhang Jinsheng; Liu Xiuzhen

    1996-01-01

    The authors describe the validation and application of the HELP code, used by the United States Environmental Protection Agency for the design and review of covers for low and intermediate level radioactive waste disposal in near-surface facilities. The HELP code was validated using data from a field aerated-zone moisture movement test by the China Institute for Radiation Protection. The results show that the HELP code simulation is reasonable. The effects of surface layer thickness and surface treatment on moisture distribution in a cover were simulated with the HELP code under the conditions of south-west China. The simulation results demonstrated that the surface planting of a cover plays a very important role in moisture distribution in the cover, and special attention should be paid to it in cover design. In humid areas, radioactive waste disposal safety assessments should take full account of the function of chemical barriers. It was recommended that engineering economics be added to future cover research so as to achieve optimization of cover design

  9. Adaptation level as the basic health status characteristic: possibilities of its assessment and forecasting of desadaptation violations

    Directory of Open Access Journals (Sweden)

    Vysochyna I.L.

    2015-09-01

    Full Text Available On the basis of a comprehensive survey with integrative assessment of health status (medical history data, physical examination, anthropometry) and a battery of psychological tests (Eysenck; Shmishek's Personality Inventory, teen version; tapping test by E.P. Ilyin; children's questionnaire of neuroses; test for rapid assessment of health, activity and mood; anxiety diagnosis by Spielberg-Khanin; Luscher test; color relations test), the level of adaptation was defined in 236 children from orphanages aged 4 to 18 years. The manifestations of maladjustment were registered both at the psychological level (neuroticism, high anxiety, decreased performance, activity and psychological endurance, sleep disturbance, presence of accentuation and neurotic disorders) and at the somatic level (recurrent acute respiratory infections, poor physical development, exacerbation of chronic foci of infection and a burdened biological history); this supports the conclusion of a low level of health status among children in orphanages. The author has developed mathematical models for adaptation assessment and prediction of maladaptation, which made it possible to identify children at risk of developing adaptation disorders and children with maladjustment; correction programs are designed according to the level and severity of the maladaptive disorders.

  10. Dynamic Server-Based KML Code Generator Method for Level-of-Detail Traversal of Geospatial Data

    Science.gov (United States)

    Baxes, Gregory; Mixon, Brian; Linger, TIm

    2013-01-01

    Servers supporting Web-based geospatial client applications such as Google Earth and NASA World Wind must listen to data requests, access appropriate stored data, and compile a data response to the requesting client application. This process occurs repeatedly to support multiple client requests and application instances. Newer Web-based geospatial clients also provide user-interactive functionality that is dependent on fast and efficient server responses. With massively large datasets, server-client interaction can become severely impeded because the server must determine the best way to assemble data to meet the client application's request. In client applications such as Google Earth, the user interactively wanders through the data using visually guided panning and zooming actions. With these actions, the client application is continually issuing data requests to the server without knowledge of the server's data structure or extraction/assembly paradigm. A method for efficiently controlling the networked access of a Web-based geospatial browser to server-based datasets, in particular massively sized datasets, has been developed. The method specifically uses the Keyhole Markup Language (KML), an Open Geospatial Consortium (OGC) standard used by Google Earth and other KML-compliant geospatial client applications. The innovation is based on establishing a dynamic cascading KML strategy that is initiated by a KML launch file provided by a data server host to a Google Earth or similar KML-compliant geospatial client application user. Upon execution, the launch KML code issues a request for image data covering an initial geographic region. The server responds with the requested data along with subsequent dynamically generated KML code that directs the client application to make follow-on requests for higher level of detail (LOD) imagery to replace the initial imagery as the user navigates into the dataset. The approach provides an efficient data traversal path and mechanism that can be
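
    The cascading strategy can be sketched with a generator that returns, together with a tile's imagery, NetworkLink elements whose Region and Lod settings make the client request the higher-detail child tiles only when the user zooms far enough into that region. The element names follow the public KML 2.2 schema; the tile URLs, bounding box and LOD threshold are made up for illustration.

      def child_network_link(href, north, south, east, west, min_lod_pixels=256):
          """KML fragment telling the client to fetch 'href' once this region
          occupies at least min_lod_pixels on screen."""
          return f"""
          <NetworkLink>
            <Region>
              <LatLonAltBox>
                <north>{north}</north><south>{south}</south>
                <east>{east}</east><west>{west}</west>
              </LatLonAltBox>
              <Lod><minLodPixels>{min_lod_pixels}</minLodPixels><maxLodPixels>-1</maxLodPixels></Lod>
            </Region>
            <Link><href>{href}</href><viewRefreshMode>onRegion</viewRefreshMode></Link>
          </NetworkLink>"""

      def tile_document(image_href, bbox, child_hrefs):
          """bbox = (north, south, east, west); child_hrefs name the four quadrant tiles
          (NW, NE, SW, SE) in a hypothetical server layout."""
          north, south, east, west = bbox
          mid_lat, mid_lon = (north + south) / 2, (east + west) / 2
          quads = [(north, mid_lat, mid_lon, west), (north, mid_lat, east, mid_lon),
                   (mid_lat, south, mid_lon, west), (mid_lat, south, east, mid_lon)]
          links = "".join(child_network_link(h, *q) for h, q in zip(child_hrefs, quads))
          return (f'<?xml version="1.0" encoding="UTF-8"?>\n'
                  f'<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
                  f'<GroundOverlay><Icon><href>{image_href}</href></Icon>'
                  f'<LatLonBox><north>{north}</north><south>{south}</south>'
                  f'<east>{east}</east><west>{west}</west></LatLonBox></GroundOverlay>'
                  f'{links}</Document></kml>')

      # A coarse tile whose KML cascades to four hypothetical quadrant tiles.
      print(tile_document("tiles/0/0.png", (45.0, 44.0, -102.0, -103.0),
                          [f"tiles/1/{i}.kml" for i in range(4)]))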

  11. Specification of a test problem for HYDROCOIN [Hydrologic Code Intercomparison] Level 3 Case 2: Sensitivity analysis for deep disposal in partially saturated, fractured tuff

    International Nuclear Information System (INIS)

    Prindle, R.W.

    1987-08-01

    The international Hydrologic Code Intercomparison Project (HYDROCOIN) was formed to evaluate hydrogeologic models and computer codes and their use in performance assessment for high-level radioactive waste repositories. Three principal activities in the HYDROCOIN Project are Level 1, verification and benchmarking of hydrologic codes; Level 2, validation of hydrologic models; and Level 3, sensitivity and uncertainty analyses of the models and codes. This report presents a test case defined for the HYDROCOIN Level 3 activity to explore the feasibility of applying various sensitivity-analysis methodologies to a highly nonlinear model of isothermal, partially saturated flow through fractured tuff, and to develop modeling approaches to implement the methodologies for sensitivity analysis. These analyses involve an idealized representation of a repository sited above the water table in a layered sequence of welded and nonwelded, fractured, volcanic tuffs. The analyses suggested here include one-dimensional, steady flow; one-dimensional, nonsteady flow; and two-dimensional, steady flow. Performance measures to be used to evaluate model sensitivities are also defined; the measures are related to regulatory criteria for containment of high-level radioactive waste. 14 refs., 5 figs., 4 tabs

  12. Adaptation to the Impacts of Sea Level Rise in the Nile Delta Coastal ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    This project aims to demonstrate the value of stakeholder participation in evaluating the trade-offs between adaptation options in the stretch between Ras El Bar and Gamasa on the northern coast of Egypt. Researchers will carry out environmental assessments, investigate the socioeconomic and institutional aspects of ...

  13. Assessing adaptive management options to cope with climate change at the farm level

    NARCIS (Netherlands)

    Rötter, R.P.; Lehtonen, H.; Kahiluoto, J.; Helin, J.; Palosuo, T.; Salo, T.; Pavlova, Y.; Wolf, J.; Carter, T.R.; Ewert, F.

    2010-01-01

    In recent years, considerable achievements have been made by several European climate research groups in gaining a better understanding of and in developing methodologies and tools for integrated, multiscale analyses of how to adapt agricultural systems to climate change. Efforts are under way to

  14. COREDAR: COmmunicating Risk of sea level rise and Engaging stakeholDers in framing community based Adaptation stRategies

    Science.gov (United States)

    Amsad Ibrahim Khan, S. K.; Chen, R. S.; de Sherbinin, A. M.; Andimuthu, R.; Kandasamy, P.

    2015-12-01

    Accelerated sea-level rise (SLR) is a major long-term outcome of climate change, leading to increased inundation of low-lying areas. In particular, global cities located on or near the coast are often situated in low-lying areas, and these locations put them at greater risk from SLR. Localized flooding will profoundly impact vulnerable communities located in high-risk urban areas. Building community resilience and adapting to SLR is increasingly a high priority for cities. At the same time, Article 6 of the United Nations Framework Convention on Climate Change addresses the importance of climate change communication and of engaging stakeholders in the decision-making process. Importantly, Community Based Adaptation (CBA) experience emphasizes that it is important to understand a community's unique perceptions of its adaptive capacities in order to identify useful solutions, and that scientific and technical information on anticipated coastal climate impacts needs to be translated into a suitable language and format that allows people to participate in adaptation planning. To address this challenge, this study puts forth three research questions through the lens of urban community engagement in SLR adaptation: (1) What community engagement in addressing SLR, if any, is occurring in urban areas? (2) What information do communities need, and how does it need to be communicated, in order for them to be better prepared and have a greater sense of agency? and (3) How can government agencies from the city to the federal level facilitate community engagement and action? To answer these questions, this study has developed the framework "COREDAR" (COmmunicating Risk of sea level rise and Engaging stakeholDers in framing community based Adaptation StRategies) to communicate and transfer complex climate data and information, such as projected SLR under different IPCC AR5 scenarios, the predicted impacts of SLR, and vulnerability prioritization, to concerned stakeholders and local communities

  15. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  16. Selection of a computer code for Hanford low-level waste engineered-system performance assessment. Revision 1

    International Nuclear Information System (INIS)

    McGrail, B.P.; Bacon, D.H.

    1998-02-01

    Planned performance assessments for the proposed disposal of low-activity waste (LAW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. The available computer codes with suitable capabilities at the time Revision 0 of this document was prepared were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LAW glass corrosion and the mobility of radionuclides. This analysis was repeated in this report but updated to include additional processes that have been found to be important since Revision 0 was issued and to include additional codes that have been released. The highest ranked computer code was found to be the STORM code developed at PNNL for the US Department of Energy for evaluation of arid land disposal sites

  17. Local adaptation at the transcriptome level in brown trout: Evidence from early life history temperature genomic reaction norms

    DEFF Research Database (Denmark)

    Meier, Kristian; Hansen, Michael Møller; Normandeau, Eric

    2014-01-01

    Local adaptation and its underlying molecular basis has long been a key focus in evolutionary biology. There has recently been increased interest in the evolutionary role of plasticity and the molecular mechanisms underlying local adaptation. Using transcriptome analysis, we assessed differences....... These included genes involved in immune- and stress response. We observed less plasticity in the resident as compared to the anadromous populations, possibly reflecting that the degree of environmental heterogeneity encountered by individuals throughout their life cycle will select for variable level...... of phenotypic plasticity at the transcriptome level. Our study demonstrates the usefulness of transcriptome approaches to identify genes with different temperature reaction norms. The responses observed suggest that populations may vary in their susceptibility to climate change....

  18. An accurate anisotropic adaptation method for solving the level set advection equation

    International Nuclear Information System (INIS)

    Bui, C.; Dapogny, C.; Frey, P.

    2012-01-01

    In the present paper, a mesh adaptation process for solving the advection equation on a fully unstructured computational mesh is introduced, with a particular interest in the case where the solution implicitly describes an evolving surface. This process mainly relies on a numerical scheme based on the method of characteristics. Although of low order, this scheme lends itself to a thorough theoretical analysis. It gives rise to an anisotropic error estimate which enjoys a very natural interpretation in terms of the Hausdorff distance between the exact and approximated surfaces. The computational mesh is then adapted according to the metric supplied by this estimate. The whole process achieves good accuracy as far as the interface resolution is concerned. Some numerical features are discussed, and several classical examples are presented and commented upon in two and three dimensions. (authors)
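
    The underlying scheme, the method of characteristics, is easy to illustrate in one space dimension: the new level set value at each grid node is obtained by tracing the characteristic backwards over one time step and interpolating the old field at its foot. The anisotropic estimate and unstructured-mesh adaptation of the paper are of course not captured by this sketch.

      import numpy as np

      def advect_characteristics(phi, u, dx, dt):
          """One step of d(phi)/dt + u d(phi)/dx = 0 by backward characteristic tracing
          and linear interpolation of the old field."""
          x = np.arange(len(phi)) * dx
          feet = x - u * dt                    # foot of the characteristic for each node
          return np.interp(feet, x, phi)

      # Signed distance to an interface initially at x = 0.3, transported at speed u = 1.
      n, u = 200, 1.0
      dx = 1.0 / n
      phi = np.arange(n) * dx - 0.3
      dt = 0.5 * dx / abs(u)
      for _ in range(100):
          phi = advect_characteristics(phi, u, dx, dt)
      # The zero level set should have moved by roughly u * 100 * dt = 0.25.
      print("interface position:", round(float(np.interp(0.0, phi, np.arange(n) * dx)), 3))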

  19. Analysis and Design of Adaptive OCDMA Passive Optical Networks

    Science.gov (United States)

    Hadi, Mohammad; Pakravan, Mohammad Reza

    2017-07-01

    OCDMA systems can support multiple classes of service by differentiating code parameters, power level and diversity order. In this paper, we analyze BER performance of a multi-class 1D/2D OCDMA system and propose a new approximation method that can be used to generate accurate estimation of system BER using a simple mathematical form. The proposed approximation provides insight into proper system level analysis, system level design and sensitivity of system performance to the factors such as code parameters, power level and diversity order. Considering code design, code cardinality and system performance constraints, two design problems are defined and their optimal solutions are provided. We then propose an adaptive OCDMA-PON that adaptively shares unused resources of inactive users among active ones to improve upstream system performance. Using the approximated BER expression and defined design problems, two adaptive code allocation algorithms for the adaptive OCDMA-PON are presented and their performances are evaluated by simulation. Simulation results show that the adaptive code allocation algorithms can increase average transmission rate or decrease average optical power consumption of ONUs for dynamic traffic patterns. According to the simulation results, for an adaptive OCDMA-PON with BER value of 1e-7 and user activity probability of 0.5, transmission rate (optical power consumption) can be increased (decreased) by a factor of 2.25 (0.27) compared to fixed code assignment.
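
    The sharing principle behind the adaptive allocation can be sketched as follows: every ONU owns one code, and the codes of currently inactive ONUs are redistributed round-robin among the active ones, which can then transmit at a multiple of the base rate (or, equivalently, back off their optical power). This is a schematic of the idea only, not of the two allocation algorithms proposed in the paper.

      def adaptive_code_allocation(activity, base_rate=1.0):
          """activity: dict ONU -> bool (currently transmitting). Returns the codes and
          resulting rate assigned to each active ONU."""
          active = [onu for onu, on in activity.items() if on]
          idle_codes = [onu for onu, on in activity.items() if not on]  # codes freed by idle ONUs
          allocation = {onu: [onu] for onu in active}                   # each ONU keeps its own code
          for i, code in enumerate(idle_codes):
              if active:
                  allocation[active[i % len(active)]].append(code)      # round-robin reassignment
          return {onu: {"codes": codes, "rate": base_rate * len(codes)}
                  for onu, codes in allocation.items()}

      # Example: 8 ONUs with activity probability around 0.5.
      activity = {f"ONU{i}": (i % 2 == 0) for i in range(8)}
      for onu, share in adaptive_code_allocation(activity).items():
          print(onu, share)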

  20. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual describing simulation procedures, input data preparation, output and example test cases

  1. Bi-level positive pressure ventilation and adaptive servo ventilation in patients with heart failure and Cheyne-Stokes respiration.

    Science.gov (United States)

    Fietze, Ingo; Blau, Alexander; Glos, Martin; Theres, Heinz; Baumann, Gert; Penzel, Thomas

    2008-08-01

    Nocturnal positive pressure ventilation (PPV) has been shown to be effective in patients with impaired left ventricular ejection fraction (LVEF) and Cheyne-Stokes respiration (CSR). We investigated the effect of bi-level PPV and adaptive servo ventilation on LVEF, CSR, and quantitative sleep quality. Thirty-seven patients (New York Heart Association [NYHA] II-III) with impaired LVEF and CSR were investigated by electrocardiography (ECG), echocardiography and polysomnography. The CSR index (CSRI) was 32.3+/-16.2/h. Patients were randomly treated with bi-level PPV using the standard spontaneous/timed (S/T) mode or with the adaptive servo ventilation mode (AutoSetCS). After 6 weeks, 30 patients underwent control investigations with ECG, echocardiography, and polysomnography. The CSRI decreased significantly to 13.6+/-13.4/h. LVEF increased significantly after 6 weeks of ventilation (from 25.1+/-8.5 to 28.8+/-9.8%). There were differences between bi-level PPV and adaptive servo ventilation: the CSRI decreased more in the AutoSetCS group while the LVEF increased more in the bi-level PPV group. Administration of PPV can successfully attenuate CSA. Reduced CSA may be associated with improved LVEF; however, this may depend on the mode of PPV. The change in LVEF is evident even in the absence of significant changes in blood pressure.

  2. The Effect of Target Language and Code-Switching on the Grammatical Performance and Perceptions of Elementary-Level College French Students

    Science.gov (United States)

    Viakinnou-Brinson, Lucie; Herron, Carol; Cole, Steven P.; Haight, Carrie

    2012-01-01

    Grammar instruction is at the center of the target language (TL) and code-switching debate. Discussion revolves around whether grammar should be taught in the TL or using the TL and the native language (L1). This study investigated the effects of French-only grammar instruction and French/English grammar instruction on elementary-level students'…

  3. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which is formed by the mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models and control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  4. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general...... 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given....... The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt
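
    The context-conditioning part of such a coder can be sketched without the arithmetic-coding back end: each pixel is predicted from a causal template of neighbours, its probability is estimated adaptively from the counts seen so far in that context, and the ideal code length -log2 p is accumulated. The four-pixel template and the add-1/2 (Krichevsky-Trofimov style) estimator below are generic choices, not the PHMM of the paper.

      import math
      from collections import defaultdict

      def ideal_code_length(image):
          """image: list of rows of 0/1 pixels. Returns the ideal adaptive code length in bits."""
          counts = defaultdict(lambda: [0.5, 0.5])       # add-1/2 initial counts per context
          bits, h, w = 0.0, len(image), len(image[0])
          def px(r, c):                                  # causal neighbourhood, 0 outside the image
              return image[r][c] if 0 <= r < h and 0 <= c < w else 0
          for r in range(h):
              for c in range(w):
                  ctx = (px(r, c - 1), px(r - 1, c - 1), px(r - 1, c), px(r - 1, c + 1))
                  n0, n1 = counts[ctx]
                  p = (n1 if image[r][c] else n0) / (n0 + n1)
                  bits += -math.log2(p)
                  counts[ctx][image[r][c]] += 1          # adapt the context statistics
          return bits

      img = [[1 if (r // 4 + c // 4) % 2 == 0 else 0 for c in range(32)] for r in range(32)]
      print("adaptive bits:", round(ideal_code_length(img), 1), "raw bits:", 32 * 32)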

  5. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow rate in parallel channels, coupled or not by conduction across plates, is computed for imposed pressure-drop or flow-rate conditions, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors)

  6. Adaptation to Climate Change in Risk and Vulnerability Analysis on a Municipal Level, a basis for further work

    International Nuclear Information System (INIS)

    Mossberg Sonnek, Karin; Lindberg, Anna; Lindgren, Johan

    2007-12-01

    The aim of Risk and Vulnerability Analysis (RVA) at local authority level in Sweden is to increase the capacity of local authorities to handle crises and to reduce vulnerability in the community. RVA processes could be an appropriate starting point for discussions on how the community is influenced by climate change and how its effects could be reduced using various adjustment measures. In the report we present four methods: ROSA, MVA, IBERO and the Car Dun AB method. These have all been developed to support Swedish local authority RVA processes. We also present five international frameworks that have been developed by the organisations UNDP, USAID, UKCIP, C-CIARN and CSIRO to help decision-makers and stakeholders to adapt to climate change. Together, these descriptions form a foundation for continuing the work being done within the project Climatools, in which tools are being produced to be used by local authorities in adapting to climate change. In the report, we also discuss the concepts 'risk', 'vulnerability' and 'adaptation' and how analysis of adaptation to climate change has changed in recent years.

  7. Development of a dose assessment computer code for the NPP severe accident at intermediate level - Korean case

    International Nuclear Information System (INIS)

    Cheong, J.H.; Lee, K.J.; Cho, H.Y.; Lim, J.H.

    1993-01-01

    A real-time dose assessment computer code named RADCON (RADiological CONsequence analysis) has been developed. An approximation method describing the distribution of radionuclides in a puff was proposed and implemented in the code. This method is expected to reduce the time required to calculate the cloud shine (external dose from the radioactive plume). RADCON can simulate an NPP emergency situation by considering complex topography and continuous washout phenomena, and provides functions to support effective emergency planning. To verify the code results, RADCON has been compared with RASCAL, which was developed for the U.S. NRC by ORNL, for eight hypothetical accident scenarios. Sensitivity analysis was also performed for the important input parameters. (2 tabs., 3 figs.)
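
    The record does not give RADCON's own puff approximation, so the sketch below shows a standard Gaussian puff concentration formula (with ground reflection, and assuming equal along-wind and cross-wind spread) as a generic stand-in for the kind of distribution such a method approximates; all parameter values are illustrative.

        import numpy as np

        def puff_concentration(x, y, z, t, Q=1.0, u=3.0, H=50.0,
                               sigma_y=25.0, sigma_z=12.0):
            """Air concentration [Bq/m^3] at (x, y, z) a time t after an instantaneous release
            of Q [Bq] at height H [m], advected downwind at speed u [m/s].
            Assumes sigma_x == sigma_y; ground reflection is included via an image source."""
            norm = Q / ((2.0 * np.pi) ** 1.5 * sigma_y ** 2 * sigma_z)
            along = np.exp(-0.5 * ((x - u * t) / sigma_y) ** 2)
            across = np.exp(-0.5 * (y / sigma_y) ** 2)
            vertical = (np.exp(-0.5 * ((z - H) / sigma_z) ** 2)
                        + np.exp(-0.5 * ((z + H) / sigma_z) ** 2))
            return norm * along * across * vertical

        print(puff_concentration(x=300.0, y=0.0, z=1.5, t=100.0))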

  8. Cytogenetic monitoring, radiosensitivity study and adaptive response of workers exposed to low level ionizing radiation

    International Nuclear Information System (INIS)

    Peitl Junior, Paulo

    1996-01-01

    The objectives of the present study were: To determine the frequencies of chromosome aberrations in lymphocytes from individuals belonging to professionally exposed groups, under normal conditions; to determine the possible differences in radiosensitivity between the lymphocytes of technicians and controls after in vitro irradiation with gamma rays during the G 1 phase of the cell cycle (radiosensitivity study), and to examine the influence of in vivo and in vitro pre-exposure to low doses of radiation on the frequency of chromosome aberrations induced in vitro by high doses (study of the adaptive response) in a group of technicians (T) compared to controls (C). (author)

  9. S values at voxels level for 188Re and 90Y calculated with the MCNP-4C code

    International Nuclear Information System (INIS)

    Coca Perez, Marco Antonio; Torres Aroche, Leonel Alberto; Cornejo, Nestor; Martin Hernandez, Guido

    2003-01-01

    The main objective of this work was to estimate the voxel S values for 188 Re in cubic geometry, using the MCNP-4C code for the simulation of radiation transport and energy deposition. The mean absorbed dose to target voxels per radioactive decay in a source voxel was estimated and reported for 188 Re and 90 Y. A comparison of the voxel S values computed with the MCNP code with the data reported in MIRD Pamphlet 17 for 90 Y was performed in order to evaluate our results
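
    The quantity estimated in such voxel-level dosimetry work is conventionally the MIRD-style voxel S value; the expression below uses the standard notation and is not quoted from this record. For a source voxel 0 and a target voxel k,

        S(k \leftarrow 0) = \frac{1}{m_k} \sum_i E_i \, Y_i \, \phi_i(k \leftarrow 0)

    where E_i and Y_i are the energy and yield per decay of the i-th emission, \phi_i is the absorbed fraction scored by the Monte Carlo transport (here MCNP-4C), and m_k is the mass of the target voxel.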

  10. Used data bases to adapt the technical code of the construction (CTE) to the real climatology of Galicia; Bases de datos utilizadas para adaptar el codigo tecnico de la edificacion (CTE) a la climatologia real de Galicia

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez, M.; Izquierdo, P.; Pose, M.; Prado, M. T.; Santos, J.

    2008-07-01

    The paper describes the databases used in the research on the variables solar radiation, ambient air temperature, ambient air relative humidity, and river surface water temperature, which was carried out to analyze and adapt the Spanish Technical Building Code (TBC) to the real climate of Galicia. The databases are those of the meteorological and environmental stations of organizations with extensive networks in Galicia, together with images from the Meteosat-6 satellite. (Author)

  11. An adaptive framework to differentiate receiving water quality impacts on a multi-scale level.

    Science.gov (United States)

    Blumensaat, F; Tränckner, J; Helm, B; Kroll, S; Dirckx, G; Krebs, P

    2013-01-01

    The paradigm shift in recent years towards sustainable and coherent water resources management on a river basin scale has changed the subject of investigations to a multi-scale problem representing a great challenge for all actors participating in the management process. In this regard, planning engineers often face an inherent conflict to provide reliable decision support for complex questions with a minimum of effort. This trend inevitably increases the risk of basing decisions upon uncertain and unverified conclusions. This paper proposes an adaptive framework for integral planning that combines several concepts (flow balancing, water quality monitoring, process modelling, multi-objective assessment) to systematically evaluate management strategies for water quality improvement. As key element, an S/P matrix is introduced to structure the differentiation of relevant 'pressures' in affected regions, i.e. 'spatial units', which helps in handling complexity. The framework is applied to a small, but typical, catchment in Flanders, Belgium. The application to the real-life case shows: (1) the proposed approach is adaptive, covers problems of different spatial and temporal scale, efficiently reduces complexity and finally leads to a transparent solution; and (2) water quality and emission-based performance evaluation must be done jointly, as an emission-based performance improvement does not necessarily lead to an improved water quality status, and an assessment solely focusing on water quality criteria may mask non-compliance with emission-based standards. Recommendations derived from the theoretical analysis have been put into practice.

  12. Adaptation improves face trustworthiness discrimination

    Directory of Open Access Journals (Sweden)

    Bruce D Keefe

    2013-06-01

    Full Text Available Adaptation to facial characteristics, such as gender and viewpoint, has been shown to both bias our perception of faces and improve facial discrimination. In this study, we examined whether adapting to two levels of face trustworthiness improved sensitivity around the adapted level. Facial trustworthiness was manipulated by morphing between trustworthy and untrustworthy prototypes, each generated by morphing eight trustworthy and eight untrustworthy faces respectively. In the first experiment, just-noticeable differences (JNDs were calculated for an untrustworthy face after participants adapted to an untrustworthy face, a trustworthy face, or did not adapt. In the second experiment, the three conditions were identical, except that JNDs were calculated for a trustworthy face. In the third experiment we examined whether adapting to an untrustworthy male face improved discrimination to an untrustworthy female face. In all experiments, participants completed a two-interval forced-choice adaptive staircase procedure, in which they judged which face was more untrustworthy. JNDs were derived from a psychometric function fitted to the data. Adaptation improved sensitivity to faces conveying the same level of trustworthiness when compared to no adaptation. When adapting to and discriminating around a different level of face trustworthiness there was no improvement in sensitivity and JNDs were equivalent to those in the no adaptation condition. The improvement in sensitivity was found to occur even when adapting to a face with different gender and identity. These results suggest that adaptation to facial trustworthiness can selectively enhance mechanisms underlying the coding of facial trustworthiness to improve perceptual sensitivity. These findings have implications for the role of our visual experience in the decisions we make about the trustworthiness of other individuals.

  13. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Volume 2. Special test cases

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1985-08-01

    This document was written for the National Low-Level Waste Management Program to provide guidance for managers and site operators who need to select ground-water transport codes for assessing shallow-land burial site performance. The guidance given in this report also serves the needs of applications-oriented users who work under the direction of a manager or site operator. The guidelines are published in two volumes designed to support the needs of users having different technical backgrounds. An executive summary, published separately, gives managers and site operators an overview of the main guideline report. Volume 1, titled ''Guideline Approach,'' consists of Chapters 1 through 5 and a glossary. Chapters 2 through 5 provide the more detailed discussions about the code selection approach. This volume, Volume 2, consists of four appendices reporting on the technical evaluation test cases designed to help verify the accuracy of ground-water transport codes. 20 refs

  14. An Adaptive Neuro-Fuzzy Inference System for Sea Level Prediction Considering Tide-Generating Forces and Oceanic Thermal Expansion

    Directory of Open Access Journals (Sweden)

    Li-Ching Lin Hsien-Kuo Chang

    2008-01-01

    Full Text Available The paper presents an adaptive neuro-fuzzy inference system for predicting sea level, considering tide-generating forces and oceanic thermal expansion and assuming a model of sea level dependence on sea surface temperature. The proposed model, named TGFT-FN (Tide-Generating Forces considering sea surface Temperature and Fuzzy Neuro-network system), is applied to predict tides at five tide gauge sites located in Taiwan and has a root-mean-square error of about 7.3 - 15.0 cm. The TGFT-FN model is superior in sea level prediction to the previous TGF-NN model developed by Chang and Lin (2006), which considers the tide-generating forces only. The TGFT-FN model is employed to train and predict the sea level of Hua-Lien station, and is also appropriate for the same prediction at the tide gauge sites next to Hua-Lien station.

  15. The influence of state-level policy environments on the activation of the Medicaid SBIRT reimbursement codes.

    Science.gov (United States)

    Hinde, Jesse; Bray, Jeremy; Kaiser, David; Mallonee, Erin

    2017-02-01

    To examine how institutional constraints, comprising federal actions and states' substance abuse policy environments, influence states' decisions to activate Medicaid reimbursement codes for screening and brief intervention for risky substance use in the United States. A discrete-time duration model was used to estimate the effect of institutional constraints on the likelihood of activating the Medicaid reimbursement codes. Primary constraints included federal Screening, Brief Intervention and Referral to Treatment (SBIRT) grant funding, substance abuse priority, economic climate, political climate and interstate diffusion. Study data came from publicly available secondary data sources. Federal SBIRT grant funding did not affect significantly the likelihood of activation (P = 0.628). A $1 increase in per-capita block grant funding was associated with a 10-percentage point reduction in the likelihood of activation (P = 0.003) and a $1 increase in per-capita state substance use disorder expenditures was associated with a 2-percentage point increase in the likelihood of activation (P = 0.004). States with enacted parity laws (P = 0.016) and a Democratic-controlled state government were also more likely to activate the codes. In the United States, the determinants of state activation of Medicaid Screening, Brief Intervention and Referral to Treatment (SBIRT) reimbursement codes are complex, and include more than financial considerations. Federal block grant funding is a strong disincentive to activating the SBIRT reimbursement codes, while more direct federal SBIRT grant funding has no detectable effects. © 2017 Society for the Study of Addiction.

  16. Understanding extreme sea levels for broad-scale coastal impact and adaptation analysis

    NARCIS (Netherlands)

    Wahl, T.; Haigh, I.D.; Nicholls, R.J.; Arns, A.; Dangendorf, S.; Hinkel, J.; Slangen, A.B.A.

    2017-01-01

    One of the main consequences of mean sea level rise (SLR) on human settlements is an increase in flood risk due to an increase in the intensity and frequency of extreme sea levels (ESL). While substantial research efforts are directed towards quantifying projections and uncertainties of future

  17. Cross-Cultural Competency Adaptability of Dental Hygiene Educators in Entry Level Dental Hygiene Programs

    Science.gov (United States)

    Engeswick, Lynnette Marie

    2011-01-01

    This study was conducted to discover the extent dental hygiene educators in 25 entry-level dental hygiene programs from the Upper Midwest demonstrate Emotional Resilience, Flexibility and Openness, Perceptual Acuity, and Personal Autonomy as they relate to their level of education and multicultural experiences. An additional purpose was to examine…

  18. The effect of cognitive load on adaptation to differences in steering wheel force feedback level

    NARCIS (Netherlands)

    Anand, S.; Terken, J.; Hogema, J.

    2013-01-01

    In an earlier study it was found that drivers can adjust quickly to different force feedback levels on the steering wheel, even for such extreme levels as zero feedback. It was hypothesized that, due to lack of cognitive load, participants could easily and quickly learn how to deal with extreme

  19. Forecasting Water Level Fluctuations of Urmieh Lake Using Gene Expression Programming and Adaptive Neuro-Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Sepideh Karimi

    2012-06-01

    Full Text Available Forecasting lake level at various prediction intervals is an essential issue in such industrial applications as navigation, water resource planning and catchment management. In the present study, two data driven techniques, namely Gene Expression Programming and Adaptive Neuro-Fuzzy Inference System, were applied for predicting daily lake levels for three prediction intervals. Daily water-level data from Urmieh Lake in Northwestern Iran were used to train, test and validate the used techniques. Three statistical indexes, coefficient of determination, root mean square error and variance accounted for were used to assess the performance of the used techniques. Technique inter-comparisons demonstrated that the GEP surpassed the ANFIS model at each of the prediction intervals. A traditional auto regressive moving average model was also applied to the same data sets; the obtained results were compared with those of the data driven approaches demonstrating superiority of the data driven models to ARMA.
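
    The three assessment statistics named in the abstract can be computed as below for any pair of observed/predicted level series; the definitions used here (R² as 1 − SSE/SST, VAF from the residual variance) are common conventions rather than necessarily those of the paper, and the sample values are invented.

        import numpy as np

        def evaluate(observed, predicted):
            """Coefficient of determination, RMSE and variance accounted for (in %)."""
            observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
            residual = observed - predicted
            rmse = np.sqrt(np.mean(residual ** 2))
            r2 = 1.0 - np.sum(residual ** 2) / np.sum((observed - observed.mean()) ** 2)
            vaf = 100.0 * (1.0 - np.var(residual) / np.var(observed))
            return {"R2": r2, "RMSE": rmse, "VAF %": vaf}

        obs = np.array([1273.2, 1273.1, 1273.3, 1273.0, 1272.9])   # illustrative daily levels [m]
        pred = np.array([1273.1, 1273.2, 1273.2, 1273.1, 1272.8])
        print(evaluate(obs, pred))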

  20. Responding to Sea Level Rise: Does Short-Term Risk Reduction Inhibit Successful Long-Term Adaptation?

    Science.gov (United States)

    Keeler, A. G.; McNamara, D. E.; Irish, J. L.

    2018-04-01

    Most existing coastal climate-adaptation planning processes, and the research supporting them, tightly focus on how to use land use planning, policy tools, and infrastructure spending to reduce risks from rising seas and changing storm conditions. While central to community response to sea level rise, we argue that the exclusive nature of this focus biases against and delays decisions to take more discontinuous, yet proactive, actions to adapt—for example, relocation and aggressive individual protection investments. Public policies should anticipate real estate market responses to risk reduction to avoid large costs—social and financial—when and if sea level rise and other climate-related factors elevate the risks to such high levels that discontinuous responses become the least bad alternative.

  1. Local adaptation at the transcriptome level in brown trout: evidence from early life history temperature genomic reaction norms.

    Directory of Open Access Journals (Sweden)

    Kristian Meier

    Full Text Available Local adaptation and its underlying molecular basis has long been a key focus in evolutionary biology. There has recently been increased interest in the evolutionary role of plasticity and the molecular mechanisms underlying local adaptation. Using transcriptome analysis, we assessed differences in gene expression profiles for three brown trout (Salmo trutta populations, one resident and two anadromous, experiencing different temperature regimes in the wild. The study was based on an F2 generation raised in a common garden setting. A previous study of the F1 generation revealed different reaction norms and significantly higher QST than FST among populations for two early life-history traits. In the present study we investigated if genomic reaction norm patterns were also present at the transcriptome level. Eggs from the three populations were incubated at two temperatures (5 and 8 degrees C representing conditions encountered in the local environments. Global gene expression for fry at the stage of first feeding was analysed using a 32k cDNA microarray. The results revealed differences in gene expression between populations and temperatures and population × temperature interactions, the latter indicating locally adapted reaction norms. Moreover, the reaction norms paralleled those observed previously at early life-history traits. We identified 90 cDNA clones among the genes with an interaction effect that were differently expressed between the ecologically divergent populations. These included genes involved in immune- and stress response. We observed less plasticity in the resident as compared to the anadromous populations, possibly reflecting that the degree of environmental heterogeneity encountered by individuals throughout their life cycle will select for variable level of phenotypic plasticity at the transcriptome level. Our study demonstrates the usefulness of transcriptome approaches to identify genes with different temperature reaction

  2. Saturated Adaptive Output-Feedback Power-Level Control for Modular High Temperature Gas-Cooled Reactors

    Directory of Open Access Journals (Sweden)

    Zhe Dong

    2014-11-01

    Full Text Available Small modular reactors (SMRs) are nuclear fission reactors with electrical output powers of less than 300 MWe. Due to its inherent safety features, the modular high temperature gas-cooled reactor (MHTGR) has been seen as one of the best candidates for building SMR-based nuclear plants with a high safety level and economically competitive power. Power-level control is crucial in providing grid-appropriation for all types of SMRs. Usually, nonlinearity, parameter uncertainty and control input saturation are present in the SMR-based plant dynamics. Motivated by this, a novel saturated adaptive output-feedback power-level control of the MHTGR is proposed in this paper. This newly-built control law has the virtues of having a relatively neat form, of being strongly adaptive to parameter uncertainty and of being able to compensate for control input saturation; these are obtained by constructing Lyapunov functions based upon the shifted-ectropies of neutron kinetics and reactor thermal-hydraulics, by giving an online tuning algorithm for the controller parameters, and by proposing a control input saturation compensator, respectively. It is proved theoretically that input-to-state stability (ISS) can be guaranteed for the corresponding closed-loop system. In order to verify the theoretical results, this new control strategy is then applied to the large-range power maneuvering control of the MHTGR of the HTR-PM plant. Numerical simulation results show not only the relationship between regulating performance and the control input saturation bound but also the feasibility of applying this saturated adaptive control law in practice.
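
    As a generic illustration of compensating for control input saturation (not the paper's Lyapunov-based MHTGR law), the sketch below shows a PI power controller with a back-calculation anti-windup term; the gains, limits and sampling time are arbitrary assumptions.

        def saturate(u, u_min=-1.0, u_max=1.0):
            return min(max(u, u_min), u_max)

        class SaturatedPI:
            """PI controller with back-calculation anti-windup (illustrative gains)."""
            def __init__(self, kp=2.0, ki=0.5, kaw=1.0, dt=0.1):
                self.kp, self.ki, self.kaw, self.dt = kp, ki, kaw, dt
                self.integral = 0.0

            def step(self, error):
                u_unsat = self.kp * error + self.ki * self.integral
                u = saturate(u_unsat)
                # bleed the integrator whenever the actuator saturates
                self.integral += self.dt * (error + self.kaw * (u - u_unsat))
                return u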

  3. System level mechanisms of adaptation, learning, memory formation and evolvability: the role of chaperone and other networks.

    Science.gov (United States)

    Gyurko, David M; Soti, Csaba; Stetak, Attila; Csermely, Peter

    2014-05-01

    During the last decade, network approaches became a powerful tool to describe protein structure and dynamics. Here, we first describe the protein structure networks of molecular chaperones, then characterize chaperone-containing sub-networks of interactomes called chaperone-networks or chaperomes. We review the role of molecular chaperones in short-term adaptation of cellular networks in response to stress, and in long-term adaptation, discussing their putative functions in the regulation of evolvability. We provide a general overview of possible network mechanisms of adaptation, learning and memory formation. We propose that changes of network rigidity play a key role in learning and memory formation processes. Flexible network topology provides a 'learning-competent' state. Here, networks may have much less pronounced modular boundaries than locally rigid, highly modular networks, where the learnt information has already been consolidated in a memory formation process. Since modular boundaries are efficient filters of information, in the 'learning-competent' state information filtering may be much weaker than after memory formation. This mechanism restricts high information transfer to the 'learning-competent' state. After memory formation, modular boundary-induced segregation and information filtering protect the stored information. The flexible networks of young organisms are generally in a 'learning-competent' state. On the contrary, locally rigid networks of old organisms have lost their 'learning-competent' state, but store and protect their learnt information efficiently. We anticipate that the above mechanism may operate at the level of both protein-protein interaction and neuronal networks.

  4. A Simplified Technique for Implant-Abutment Level Impression after Soft Tissue Adaptation around Provisional Restoration

    Science.gov (United States)

    Kutkut, Ahmad; Abu-Hammad, Osama; Frazer, Robert

    2016-01-01

    Impression techniques for implant restorations can be implant level or abutment level impressions with open tray or closed tray techniques. Conventional implant-abutment level impression techniques are predictable for maximizing esthetic outcomes. Restoration of the implant traditionally requires the use of the metal or plastic impression copings, analogs, and laboratory components. Simplifying the dental implant restoration by reducing armamentarium through incorporating conventional techniques used daily for crowns and bridges will allow more general dentists to restore implants in their practices. The demonstrated technique is useful when modifications to implant abutments are required to correct the angulation of malpositioned implants. This technique utilizes conventional crown and bridge impression techniques. As an added benefit, it reduces costs by utilizing techniques used daily for crowns and bridges. The aim of this report is to describe a simplified conventional impression technique for custom abutments and modified prefabricated solid abutments for definitive restorations. PMID:29563457

  5. A Simplified Technique for Implant-Abutment Level Impression after Soft Tissue Adaptation around Provisional Restoration

    Directory of Open Access Journals (Sweden)

    Ahmad Kutkut

    2016-05-01

    Full Text Available Impression techniques for implant restorations can be implant level or abutment level impressions with open tray or closed tray techniques. Conventional implant-abutment level impression techniques are predictable for maximizing esthetic outcomes. Restoration of the implant traditionally requires the use of the metal or plastic impression copings, analogs, and laboratory components. Simplifying the dental implant restoration by reducing armamentarium through incorporating conventional techniques used daily for crowns and bridges will allow more general dentists to restore implants in their practices. The demonstrated technique is useful when modifications to implant abutments are required to correct the angulation of malpositioned implants. This technique utilizes conventional crown and bridge impression techniques. As an added benefit, it reduces costs by utilizing techniques used daily for crowns and bridges. The aim of this report is to describe a simplified conventional impression technique for custom abutments and modified prefabricated solid abutments for definitive restorations.

  6. Snowshoe hare multi-level habitat use in a fire-adapted ecosystem

    Science.gov (United States)

    Gigliotti, Laura C.; Jones, Benjamin C.; Lovallo, Matthew J.; Diefenbach, Duane R.

    2018-01-01

    Prescribed burning has the potential to improve habitat for species that depend on pyric ecosystems or other early successional vegetation types. For species that occupy diverse plant communities over the extent of their range, response to disturbances such as fire might vary based on post-disturbance vegetation dynamics among plant communities. Although responses of snowshoe hares (Lepus americanus) to fire have been studied in conifer-dominated forests in northern parts of the species’ range, there is a lack of information on snowshoe hare habitat use in fire-dependent communities in southern parts of their range. We used global positioning system (GPS) and very high frequency (VHF) radio-collars to monitor the habitat use of 32 snowshoe hares in a scrub-oak (Quercus ilicifolia)-pitch pine (Pinus rigida) barrens complex in northeastern Pennsylvania where prescribed fire has been used for habitat restoration. The area contained stands that underwent prescribed burning 1–6 years prior to our study. Also, we investigated fine-scale determinants of habitat use within stands. We found that regardless of season, hares did not select for areas that had been burned within 6 years prior. Hares primarily used stands of older scrub oak, conifer, or hardwoods, which contained dense understory vegetation and canopy cover. Hare habitat use also was positively associated with stand edges. Our results suggest that hares do not respond to prescribed burning of scrub oak in the short-term. In addition, by focusing on structural determinants of habitat use, rather than broad-scale characteristics such as stand type, management strategies for snowshoe hares can be adapted over the extent of their range despite the multitude of different land cover types across which the species occurs. 

  7. Farm level adaptation to climate change : the case of farmer’s in the ethiopian highlands

    NARCIS (Netherlands)

    Gidey Gebrehiwot, T.; Gidey, T.G.; van der Veen, A.

    2013-01-01

    In Ethiopia, climate change and associated risks are expected to have serious consequences for agriculture and food security. This in turn will seriously impact on the welfare of the people, particularly the rural farmers whose main livelihood depends on rain-fed agriculture. The level of impacts

  8. Reaction and Adaptation to the Birth of a Child: A Couple-Level Analysis

    Science.gov (United States)

    Dyrdal, Gunvor Marie; Lucas, Richard E.

    2013-01-01

    The present study explored how life satisfaction changes before and after childbirth among first-time parents from a nationally representative, longitudinal study of Germans. Life satisfaction increased before pregnancy to a peak just after birth and then returned to the baseline level within 2 years postpartum. The 2 members of the same couple…

  9. High-level face shape adaptation depends on visual awareness : Evidence from continuous flash suppression

    NARCIS (Netherlands)

    Stein, T.; Sterzer, P.

    When incompatible images are presented to the two eyes, one image dominates awareness while the other is rendered invisible by interocular suppression. It has remained unclear whether complex visual information can reach high-level processing stages in the ventral visual pathway during such

  10. Survey of existing literature in the field of shock-absorbing materials with a view to subsequent adaptation of plastic deformation codes. Phase 1

    International Nuclear Information System (INIS)

    Draulans, J.; Fabry, J.P.; Lafontaine, I.; Richel, H.; Guyette, M.

    1985-01-01

    Shock-absorbing materials and structures can be used as part of the transport container structure or of the truck equipment. An extensive survey of the literature has provided much information. An investigation was made to define the experimental procedures necessary to measure the missing material properties. Three codes had been selected: EURDYN, MARC-CDC and SAMCEF. For code evaluation, a schematic container model has been considered to serve as a benchmark for the evaluation of plastic deformation. For the shock calculation, the container falls from a height of 9 meters along the direction of its cylinder axis onto an unyielding flat surface. The EURDYN computer code has been selected first as it is especially designed to handle dynamic problems, particularly plastic ones. Indeed, EURDYN uses an explicit integration scheme versus time, which makes it quite efficient for short deformation processes such as absorber collapses. The SAMCEF computer code could not readily calculate the benchmark, so a visco-plastic flow model was added to it. The MARC computer code was supposed to be a candidate to run the shock calculation, but extensive computing time and engineering effort would have been required, so it was replaced by the PLEXUS code. The results obtained using the SAMCEF programme confirm those obtained with EURDYN. The PLEXUS results are in between. The proposed benchmark calculation is at the border of the capabilities of the most advanced computer codes for plastic-dynamic calculations. Indeed, a complex energy absorption process seems to take place in a narrow region, moving in time, where very large shape inversions occur. That requires an accurate modelling of the system in the deformed regions and a skilful choice of the numerical parameters of the computer run. The three tested codes gave qualitatively consistent results and confirm some scarce experimental results.

  11. Are the EU Member States Ready for the New Union Customs Code: Emerging Legal Issues On the National Level

    Directory of Open Access Journals (Sweden)

    Valantiejus Gediminas

    2017-06-01

    Full Text Available In 2016, the European Union launched a new and ambitious project for the future regulation of international trade in the European Union and the rules of its taxation: on 1 May 2016, the new Union Customs Code (UCC) entered into force. It revokes the old Community Customs Code (CCC), which had been applied since 1992, and, passed in the form of an EU regulation, sets brand-new rules for the application of the Common Customs Tariff and the calculation of customs duties (tariffs) in all the EU Member States. It is oriented to the creation of a paperless environment for the formalisation of international trade operations (full electronic declaration of customs procedures) and to ensuring a more uniform administration of customs duties by the tax and customs authorities of the Member States in the European Union. Therefore, the article raises and seeks to answer the problematic question of whether the Member States of the European Union themselves are ready to implement these ambitious goals and whether the actual practice of the Member States supports that (considering the practice of the Republic of Lithuania). The research, which is based on the analysis of case law in the Republic of Lithuania (a case study of recent tax disputes between taxpayers and customs authorities that arose immediately before and after the entry into force of the UCC), leads to the conclusion that many problematic areas that may negatively impact the functioning of the new Customs Code remain and must be improved, including the adoption of new legislative solutions.

  12. Hybrid Strategies for Link Adaptation Exploiting Several Degrees of Freedom in OFDM Based Broadband Wireless Systems

    DEFF Research Database (Denmark)

    Das, Suvra S.; Rahman, Muhammad Imadur; Wang, Yuanye

    2007-01-01

    In orthogonal frequency division multiplexing (OFDM) systems, there are several degrees of freedom in the time and frequency domain, such as sub-band size, forward error control coding (FEC) rate, modulation order, power level, modulation adaptation interval, coding rate adaptation interval and powe... of the link parameters based on the channel conditions would lead to highly complex systems with high overhead. Hybrid strategies to vary the adaptation rates to trade off achievable efficiency and complexity are presented in this work....
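
    A minimal sketch of one such degree of freedom is per-sub-band adaptation of modulation and coding based on SNR thresholds; the MCS table and thresholds below are invented for illustration and are not from the cited work.

        MCS_TABLE = [          # (min SNR [dB], modulation, code rate, information bits per symbol)
            (22.0, "64QAM", 3/4, 4.5),
            (16.0, "16QAM", 3/4, 3.0),
            (10.0, "QPSK",  3/4, 1.5),
            ( 4.0, "QPSK",  1/2, 1.0),
        ]

        def adapt_subbands(snr_per_subband_db):
            """Pick, for each sub-band, the highest-rate entry whose SNR threshold is met."""
            plan = []
            for snr in snr_per_subband_db:
                chosen = ("no transmission", 0.0, 0.0)
                for threshold, modulation, rate, bits in MCS_TABLE:
                    if snr >= threshold:
                        chosen = (modulation, rate, bits)
                        break
                plan.append(chosen)
            return plan

        print(adapt_subbands([25.3, 12.7, 3.1, 18.4]))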

  13. Functional and biochemical adaptations of elite level futsal players from Brazil along a training season

    Directory of Open Access Journals (Sweden)

    Rômulo Pillon Barcelos

    2017-01-01

    Conclusions: The tapering strategy was successful, considering that players presented lower levels of muscle damage, inflammation and oxidative stress markers before T2, which preceded the main championship of the year. These results are of great relevance, considering that the team won the FIFA®-Intercontinental-Futsal-Cup, which took place at T2. Thus, it seems that routine biochemical markers may be useful as a means of training control in this population.

  14. FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation

    OpenAIRE

    Hoffman, Judy; Wang, Dequan; Yu, Fisher; Darrell, Trevor

    2016-01-01

    Fully convolutional models for dense prediction have proven successful for a wide range of visual tasks. Such models perform well in a supervised setting, but performance can be surprisingly poor under domain shifts that appear mild to a human observer. For example, training on one city and testing on another in a different geographic region and/or weather condition may result in significantly degraded performance due to pixel-level distribution shift. In this paper, we introduce the first do...

  15. Hypothesis Tests for Bernoulli Experiments: Ordering the Sample Space by Bayes Factors and Using Adaptive Significance Levels for Decisions

    Directory of Open Access Journals (Sweden)

    Carlos A. de B. Pereira

    2017-12-01

    Full Text Available The main objective of this paper is to find the relation between the adaptive significance level presented here and the sample size. We statisticians know of the inconsistency, or paradox, in the current classical tests of significance that are based on p-value statistics compared to the canonical significance levels (10%, 5%, and 1%): “Raise the sample to reject the null hypothesis” is the recommendation of some ill-advised scientists! This paper will show that it is possible to eliminate this problem of significance tests. We present here the beginning of a larger research project. The intention is to extend its use to more complex applications such as survival analysis, reliability tests, and other areas. The main tools used here are the Bayes factor and the extended Neyman–Pearson Lemma.
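
    For a Bernoulli experiment, the Bayes factor used to order the sample space can be computed as below for a point null against a Beta prior under the alternative; the prior, theta0 and data are illustrative assumptions, not values from the paper.

        import numpy as np
        from scipy.special import betaln

        def bayes_factor_01(successes, trials, theta0=0.5, a=1.0, b=1.0):
            """BF_01 = P(data | H0: theta = theta0) / P(data | H1: theta ~ Beta(a, b))."""
            log_m0 = successes * np.log(theta0) + (trials - successes) * np.log(1.0 - theta0)
            log_m1 = betaln(a + successes, b + trials - successes) - betaln(a, b)
            return float(np.exp(log_m0 - log_m1))      # the binomial coefficient cancels in the ratio

        print(bayes_factor_01(successes=61, trials=100))   # evidence about theta = 0.5, invented data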

  16. Development and verification of a leningrad NPP unit 1 living PSA model in the INL SAPHIRE code format for prompt operational safety level monitoring

    International Nuclear Information System (INIS)

    Bronislav, Vinnikov

    2007-01-01

    The first part of the paper presents the results of work that was carried out in complete conformity with the Technical Assignment developed by the Leningrad Nuclear Power Plant. The initial scientific and technical information, contained in the In-Depth Safety Assessment Reports, was given to the author of the work. This information included graphical Fault Trees of Safety Systems and Auxiliary Technical Systems, Event Trees for the necessary number of Initial Events, and also information about the failure probabilities of basic components of the nuclear unit. On the basis of this information, fed into the US Idaho National Laboratory (INL) SAPHIRE code, we have developed an electronic version of the Data Base for failure probabilities of the components of the technical systems. We have then developed electronic versions of the necessary Fault Trees and of the necessary Event Trees. Finally, we have carried out the linkage of the Event Trees. This work has resulted in the Living PSA (LPSA - Living Probabilistic Safety Assessment) Model of Leningrad NPP Unit 1. The LPSA model is completely adapted to be consistent with the US INL SAPHIRE Risk Monitor. The second part of the paper presents an analysis of the consequences of fires in various places of Leningrad NPP Unit 1. The computations were carried out with the help of the LPSA model developed in the SAPHIRE code format. On the basis of the computations, the order of priority for implementation of fire prevention measures was established. (author)
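
    The quantification step behind such a fault-tree/event-tree model reduces, in its simplest form, to summing minimal-cut-set probabilities under the rare-event approximation. The cut sets and basic-event probabilities below are invented for illustration and are unrelated to the Leningrad NPP model.

        basic_events = {                       # per-demand failure probabilities (illustrative)
            "DG_A_FAILS": 1.0e-2,
            "DG_B_FAILS": 1.0e-2,
            "CCF_DG_AB": 5.0e-4,
        }
        minimal_cut_sets = [["DG_A_FAILS", "DG_B_FAILS"], ["CCF_DG_AB"]]

        def rare_event_top_probability(cut_sets, probabilities):
            """Rare-event approximation: sum of the minimal-cut-set probabilities."""
            total = 0.0
            for cut_set in cut_sets:
                p = 1.0
                for event in cut_set:
                    p *= probabilities[event]
                total += p
            return total

        print(f"top event probability ~ {rare_event_top_probability(minimal_cut_sets, basic_events):.1e}")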

  17. Rural Livelihoods, Climate Change and Micro-Level Adaptive Capacity in the Greater Mekong Subregion

    DEFF Research Database (Denmark)

    Jiao, Xi

    The Greater Mekong Subregion (GMS) is one of the fastest developing regions in the world, experiencing significant economic, environmental and social transformations. There is an increasing demand for policy relevant and decision support information at micro level. This PhD research contributes...... and Laos, two of the poorest countries in the GMS. Structured household surveys and participatory focus group discussions were the primary data collection methods. The findings provide new, additional and much needed quantitative information in the region, and several policy implications for rural...

  18. Between “design” and “bricolage”: Genetic networks, levels of selection, and adaptive evolution

    Science.gov (United States)

    Wilkins, Adam S.

    2007-01-01

    The extent to which “developmental constraints” in complex organisms restrict evolutionary directions remains contentious. Yet, other forms of internal constraint, which have received less attention, may also exist. It will be argued here that a set of partial constraints below the level of phenotypes, those involving genes and molecules, influences and channels the set of possible evolutionary trajectories. At the top-most organizational level there are the genetic network modules, whose operations directly underlie complex morphological traits. The properties of these network modules, however, have themselves been set by the evolutionary history of the component genes and their interactions. Characterization of the components, structures, and operational dynamics of specific genetic networks should lead to a better understanding not only of the morphological traits they underlie but of the biases that influence the directions of evolutionary change. Furthermore, such knowledge may permit assessment of the relative degrees of probability of short evolutionary trajectories, those on the microevolutionary scale. In effect, a “network perspective” may help transform evolutionary biology into a scientific enterprise with greater predictive capability than it has hitherto possessed. PMID:17494754

  19. Between "design" and "bricolage": genetic networks, levels of selection, and adaptive evolution.

    Science.gov (United States)

    Wilkins, Adam S

    2007-05-15

    The extent to which "developmental constraints" in complex organisms restrict evolutionary directions remains contentious. Yet, other forms of internal constraint, which have received less attention, may also exist. It will be argued here that a set of partial constraints below the level of phenotypes, those involving genes and molecules, influences and channels the set of possible evolutionary trajectories. At the top-most organizational level there are the genetic network modules, whose operations directly underlie complex morphological traits. The properties of these network modules, however, have themselves been set by the evolutionary history of the component genes and their interactions. Characterization of the components, structures, and operational dynamics of specific genetic networks should lead to a better understanding not only of the morphological traits they underlie but of the biases that influence the directions of evolutionary change. Furthermore, such knowledge may permit assessment of the relative degrees of probability of short evolutionary trajectories, those on the microevolutionary scale. In effect, a "network perspective" may help transform evolutionary biology into a scientific enterprise with greater predictive capability than it has hitherto possessed.

  20. Evaluation of image quality and radiation dose by adaptive statistical iterative reconstruction technique level for chest CT examination.

    Science.gov (United States)

    Hong, Sun Suk; Lee, Jong-Woong; Seo, Jeong Beom; Jung, Jae-Eun; Choi, Jiwon; Kweon, Dae Cheol

    2013-12-01

    The purpose of this research is to determine the adaptive statistical iterative reconstruction (ASIR) level that enables optimal image quality and dose reduction in the chest computed tomography (CT) protocol with ASIR. A chest phantom was scanned at ASIR levels of 0-50 %, and the noise power spectrum (NPS), the signal and noise, and the distortion measures peak signal-to-noise ratio (PSNR) and root-mean-square error (RMSE) were determined. In addition, the objectivity of the experiment was checked using the American College of Radiology (ACR) phantom. Moreover, on a qualitative basis, the resolution, latitude and degree of distortion of five lesions of the chest phantom were evaluated and the results compiled statistically. The NPS value decreased as the frequency increased. The lowest noise and deviation were obtained at the 20 % ASIR level, with a mean of 126.15 ± 22.21. For the distortion measures, the signal-to-noise ratio and PSNR were highest at the 20 % ASIR level, at 31.0 and 41.52, whereas the maximum absolute error and RMSE showed the lowest values, 11.2 and 16. In the ACR phantom study, all ASIR levels were within the acceptance limits of the guidelines. The 20 % ASIR level also performed best in the qualitative evaluation of the five lesions of the chest phantom, with a resolution score of 4.3, latitude of 3.47 and degree of distortion of 4.25. The 20 % ASIR level proved to be the best in all experiments: noise, distortion evaluation using ImageJ and qualitative evaluation of the five lesions of a chest phantom. Therefore, optimal images as well as a reduced radiation dose can be obtained when a 20 % ASIR level is applied in thoracic CT.
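
    The two distortion measures quoted above can be computed as follows between a reference slice and a reconstruction; the synthetic arrays and the 8-bit peak value are assumptions for illustration, not the study's actual data.

        import numpy as np

        def rmse(reference, test):
            return float(np.sqrt(np.mean((reference.astype(float) - test.astype(float)) ** 2)))

        def psnr(reference, test, max_value=255.0):
            error = rmse(reference, test)
            return float("inf") if error == 0 else 20.0 * np.log10(max_value / error)

        rng = np.random.default_rng(0)
        reference = rng.integers(0, 256, size=(128, 128))                     # stand-in for a CT slice
        test = np.clip(reference + rng.normal(0, 5, size=reference.shape), 0, 255)
        print(f"RMSE = {rmse(reference, test):.1f}, PSNR = {psnr(reference, test):.1f} dB")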

  1. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Volume 1. Guideline approach

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, C.S.; Cole, C.R.

    1985-05-01

    This document was written for the National Low-Level Waste Management Program to provide guidance for managers and site operators who need to select ground-water transport codes for assessing shallow-land burial site performance. The guidance given in this report also serves the needs of applications-oriented users who work under the direction of a manager or site operator. The guidelines are published in two volumes designed to support the needs of users having different technical backgrounds. An executive summary, published separately, gives managers and site operators an overview of the main guideline report. This volume includes specific recommendations for decision-making managers and site operators on how to use these guidelines. The more detailed discussions about the code selection approach are provided. 242 refs., 6 figs.

  2. The contribution to immediate serial recall of rehearsal, search speed, access to lexical memory, and phonological coding: an investigation at the construct level.

    Science.gov (United States)

    Tehan, Gerald; Fogarty, Gerard; Ryan, Katherine

    2004-07-01

    Rehearsal speed has traditionally been seen to be the prime determinant of individual differences in memory span. Recent studies, in the main using young children as the participant population, have suggested other contributors to span performance. In the present research, we used structural equation modeling to explore, at the construct level, individual differences in immediate serial recall with respect to rehearsal, search, phonological coding, and speed of access to lexical memory. We replicated standard short-term phenomena; we showed that the variables that influence children's span performance influence adult performance in the same way; and we showed that speed of access to lexical memory and facility with phonological codes appear to be more potent sources of individual differences in immediate memory than is either rehearsal speed or search factors.

  3. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Volume 1. Guideline approach

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1985-05-01

    This document was written for the National Low-Level Waste Management Program to provide guidance for managers and site operators who need to select ground-water transport codes for assessing shallow-land burial site performance. The guidance given in this report also serves the needs of applications-oriented users who work under the direction of a manager or site operator. The guidelines are published in two volumes designed to support the needs of users having different technical backgrounds. An executive summary, published separately, gives managers and site operators an overview of the main guideline report. This volume includes specific recommendations for decision-making managers and site operators on how to use these guidelines. The more detailed discussions about the code selection approach are provided. 242 refs., 6 figs

  4. Adapting Judicial Supervision to the Risk Level of Drug Offenders: Discharge and 6-month Outcomes from a Prospective Matching Study

    Science.gov (United States)

    Marlowe, Douglas B.; Festinger, David S.; Dugosh, Karen L.; Lee, Patricia A.; Benasutti, Kathleen M.

    2007-01-01

    This article reports recent findings from a program of experimental research examining the effects of adapting judicial supervision to the risk level of drug-abusing offenders. Prior studies revealed that high-risk participants with (1) antisocial personality disorder or (2) a history of drug abuse treatment performed significantly better in drug court when they were scheduled to attend frequent, bi-weekly judicial status hearings in court. Low-risk participants performed equivalently regardless of the schedule of court hearings. The current study prospectively matched misdemeanor drug court clients to the optimal schedule of court hearings based upon an assessment of their risk status, and compared outcomes to those of clients randomly assigned to the standard schedule of court hearings. Results confirmed that high-risk participants graduated at a higher rate, provided more drug-negative urine specimens at 6 months post-admission, and reported significantly less drug use and alcohol intoxication at 6 months post-admission when they were matched to bi-weekly hearings as compared to the usual schedule of hearings. These findings yield practical information for enhancing the efficacy and cost-efficiency of drug court services. Directions for future research on adaptive programming for drug offenders are discussed. PMID:17071020

  5. Skeletal Adaptations to Different Levels of Eccentric Resistance Following Eight Weeks of Training

    Science.gov (United States)

    English, Kirk L.; Loehr, James A.; Lee, Stuart M. C.; Maddocks, Mary J.; Laughlin, Mitzi S.; Hagan, R. Donald

    2007-01-01

    Coupled concentric-eccentric resistive exercise maintains bone mineral density (BMD) during bed rest and aging. PURPOSE: We hypothesized that 8 wks of lower body resistive exercise training with higher ratios of eccentric to concentric loading would enhance hip and lumbar BMD. METHODS: Forty untrained male volunteers (34.9+/-7.0 yrs, 80.9+/-9.8 kg, 178.2+/-7.1 cm; mean+/-SD) were matched for leg press (LP) 1-Repetition Maximum (1-RM) strength and randomly assigned to one of 5 training groups. Concentric load (% 1-RM) was constant across groups, but each group trained with different levels of eccentric load (0, 33, 66, 100, or 138% of concentric) for all training sessions. Subjects performed a periodized supine LP and heel raise (HR) training program 3 d wk-1 for 8 wks using a modified Agaton Fitness System (Agaton Fitness AB, Boden, Sweden). Hip and lumbar BMD (g/sq cm) was measured in triplicate pre- and post-training using DXA (Hologic Discovery). Pre- and post-training means were compared using the appropriate ANOVA and Tukey's post hoc tests. Within-group pre- to post-training BMD was compared using paired t-tests with a Bonferroni adjustment. RESULTS: There was a main effect of training on L1, L2, L3, L4, total lumbar, and greater trochanter BMD, but there were no differences between groups. CONCLUSION: Eight wks of lower body resistive exercise increased greater trochanter and lumbar BMD. Inability to detect group differences may have been influenced by a potentially osteogenic vibration associated with device operation in the 0, 33, and 66% groups.

  6. Adaptation of computer code ALMOD 3.4 for safety analyses of Westinghouse type NPPs and calculation of main feedwater loss

    International Nuclear Information System (INIS)

    Kordis, I.; Jerele, A.; Brajak, F.

    1986-01-01

    The paper presents the theoretical foundations of the ALMOD 3.4 code and the modifications made in order to adjust the model to a Westinghouse type NPP. Test cases verifying the functioning of the added modules were run, and a loss of main feedwater (FW) transient at nominal power was analysed. (author)

  7. Modeled Sea Level Rise Impacts on Coastal Ecosystems at Six Major Estuaries on Florida's Gulf Coast: Implications for Adaptation Planning.

    Science.gov (United States)

    Geselbracht, Laura L; Freeman, Kathleen; Birch, Anne P; Brenner, Jorge; Gordon, Doria R

    2015-01-01

    The Sea Level Affecting Marshes Model (SLAMM) was applied at six major estuaries along Florida's Gulf Coast (Pensacola Bay, St. Andrews/Choctawhatchee Bays, Apalachicola Bay, Southern Big Bend, Tampa Bay and Charlotte Harbor) to provide quantitative and spatial information on how coastal ecosystems may change with sea level rise (SLR) and to identify how this information can be used to inform adaptation planning. High-resolution LiDAR-derived elevation data were utilized under three SLR scenarios, 0.7 m, 1 m and 2 m through the year 2100, and uncertainty analyses were conducted on selected input parameters at three sites. Results indicate that the extent, spatial orientation and relative composition of coastal ecosystems at the study areas may substantially change with SLR. Under the 1 m SLR scenario, total predicted impacts for all study areas indicate that coastal forest (-69,308 ha; -18%), undeveloped dry land (-28,444 ha; -2%) and tidal flat (-25,556 ha; -47%) will likely face the greatest loss in cover by the year 2100. The largest potential gains in cover were predicted for saltmarsh (+32,922 ha; +88%), transitional saltmarsh (+23,645 ha; na) and mangrove forest (+12,583 ha; +40%). The Charlotte Harbor and Tampa Bay study areas were predicted to experience the greatest net loss in coastal wetlands. The uncertainty analyses revealed low to moderate changes in results when some numerical SLAMM input parameters were varied, highlighting the value of collecting long-term sedimentation, accretion and erosion data to improve SLAMM precision. The changes predicted by SLAMM will affect the exposure of adjacent human communities to coastal hazards and ecosystem functions, potentially resulting in impacts to property values, infrastructure investment and insurance rates. The results and process presented here can be used as a guide for communities vulnerable to SLR to identify and prioritize adaptation strategies that slow and/or accommodate the changes underway.

  8. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important for understanding the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  9. Improvement of level-1 PSA computer code package - Modeling and analysis for dynamic reliability of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang Hoon; Baek, Sang Yeup; Shin, In Sup; Moon, Shin Myung; Moon, Jae Phil; Koo, Hoon Young; Kim, Ju Shin [Seoul National University, Seoul (Korea, Republic of); Hong, Jung Sik [Seoul National Polytechnology University, Seoul (Korea, Republic of); Lim, Tae Jin [Soongsil University, Seoul (Korea, Republic of)

    1996-08-01

    The objective of this project is to develop a methodology for the dynamic reliability analysis of NPPs. The first year's research was focused on developing a procedure for analyzing failure data of running components and a simulator for estimating the reliability of series-parallel structures. The second year's research was concentrated on estimating the lifetime distribution and PM effect of a component from its failure data in various cases, and the lifetime distribution of a system with a particular structure. Computer codes for performing these jobs were also developed. The objective of the third year's research is to develop models for analyzing special failure types (CCFs, standby redundant structures) that were not considered in the first two years, and to complete a methodology of dynamic reliability analysis for nuclear power plants. The analysis of failure data of components, and the related research supporting the simulator, must be performed first to provide proper input to the simulator. Thus this research is divided into three major parts. 1. Analysis of the time-dependent life distribution and the PM effect. 2. Development of a simulator for system reliability analysis. 3. Related research supporting the simulator: accelerated simulation, an analytic approach using PH-type distributions, and analysis of dynamic repair effects. 154 refs., 5 tabs., 87 figs. (author)
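
    A minimal sketch of the kind of simulator described: Monte Carlo estimation of the reliability of a series-parallel structure from component lifetime distributions. The structure (two parallel pumps in series with a valve) and the Weibull parameters are invented for illustration, not taken from the project.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_system_lifetimes(n_samples=100_000):
            """Sample system lifetimes for two parallel pumps in series with one valve."""
            pump_a = rng.weibull(1.5, n_samples) * 8000.0    # hours (shape, scale assumed)
            pump_b = rng.weibull(1.5, n_samples) * 8000.0
            valve = rng.weibull(2.0, n_samples) * 20000.0
            pumps = np.maximum(pump_a, pump_b)               # parallel block survives until the last pump fails
            return np.minimum(pumps, valve)                  # series with the valve: first failure ends the system

        lifetimes = simulate_system_lifetimes()
        mission_time = 5000.0                                # hours
        print(f"R({mission_time:.0f} h) ~ {np.mean(lifetimes > mission_time):.3f}")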

  10. Cellular Automata as an Example for Advanced Beginners’ Level Coding Exercises in a MOOC on Test Driven Development

    Directory of Open Access Journals (Sweden)

    Thomas Staubitz

    2017-06-01

    Full Text Available Programming tasks are an important part of teaching computer programming, as they help students develop essential programming skills and techniques through practice. The design of educational problems plays a crucial role in the extent to which experiential knowledge is imparted to the learner, both in terms of quality and quantity. Badly designed tasks have been known to put students off practicing programming. Hence, there is a need for carefully designed problems. Cellular Automata programming lends itself as a very suitable candidate among problems designed for programming practice. In this paper, we describe how various types of problems can be designed using concepts from Cellular Automata and discuss the features which make them good practice problems with regard to instructional pedagogy. We also present a case study on a Cellular Automata programming exercise used in a MOOC on Test Driven Development using JUnit, and discuss the automated evaluation of code submissions and the feedback about the reception of this exercise by participants in this course. Finally, we suggest two ideas to facilitate an easier approach to creating such programming exercises.
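
    As a flavour of such an exercise (sketched here in Python rather than the course's Java/JUnit setting), one update step of an elementary cellular automaton can be written as a small pure function that is easy to unit-test in a test-driven workflow; the rule number and initial state below are arbitrary.

        def ca_step(cells, rule=110):
            """Apply one step of the elementary CA `rule` to a list of 0/1 cells,
            using periodic boundary conditions."""
            n = len(cells)
            rule_bits = [(rule >> i) & 1 for i in range(8)]   # output for neighbourhood value 0..7
            return [rule_bits[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
                    for i in range(n)]

        state = [0] * 15 + [1] + [0] * 15
        for _ in range(5):
            print("".join(".#"[c] for c in state))
            state = ca_step(state)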

  11. Unequal Error Protected JPEG 2000 Broadcast Scheme with Progressive Fountain Codes

    OpenAIRE

    Chen, Zhao; Xu, Mai; Yin, Luiguo; Lu, Jianhua

    2012-01-01

    This paper proposes a novel scheme, based on progressive fountain codes, for broadcasting JPEG 2000 multimedia. In such a broadcast scheme, progressive resolution levels of images/video have been unequally protected when transmitted using the proposed progressive fountain codes. With progressive fountain codes applied in the broadcast scheme, the resolutions of images (JPEG 2000) or videos (MJPEG 2000) received by different users can be automatically adaptive to their channel qualities, i.e. ...
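
    The paper's progressive construction and its JPEG 2000 integration are not detailed in the abstract; the following toy sketch only illustrates the general rateless idea behind fountain codes, where each transmitted symbol is the XOR of a randomly chosen subset of source blocks and receivers simply collect as many symbols as their channel allows. The uniform degree choice and the block contents are assumptions for illustration.

        # Toy LT-style fountain encoder (uniform degree choice assumed; real fountain
        # codes use a soliton-type degree distribution). Illustrative only.
        import random

        def xor_blocks(blocks):
            out = bytearray(len(blocks[0]))
            for b in blocks:
                for i, byte in enumerate(b):
                    out[i] ^= byte
            return bytes(out)

        def encode_symbol(source_blocks, rng):
            """Produce one encoded symbol: (chosen block indices, XOR of those blocks)."""
            degree = rng.randint(1, len(source_blocks))
            idx = rng.sample(range(len(source_blocks)), degree)
            return idx, xor_blocks([source_blocks[i] for i in idx])

        rng = random.Random(0)
        data = [b"res0", b"res1", b"res2", b"res3"]   # e.g. progressive resolution layers
        stream = [encode_symbol(data, rng) for _ in range(8)]
        print(stream[0])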

  12. Adapting the notion of natural (geological) barrier for final disposal of low- and intermediate-level radioactive wastes in Romania

    International Nuclear Information System (INIS)

    Durdun, I.; Marunteanu, C.; Andrei, V.

    2001-01-01

    According to the Minimum Disturbances Design (MDD) notion by Carl-Olof Morfeldt of Mineconsult, Sweden, any site selection, design and construction of a low- and intermediate-level radioactive waste repository should be based on a thorough knowledge of the geological environment, so that the implantation of the disposal facility induces no significant harmful consequences. This work presents the way in which the Romanian program of radioactive waste management was implemented for the disposal of low- and intermediate-level radioactive wastes from Cernavoda NPP. Based on geological selection criteria of lithologic, petrographic, tectonic, seismologic, hydrologic and geo-technical nature, 37 sites were analyzed, of which 2 were retained and finally one, the Saligny site, was chosen as the closest to Cernavoda NPP. Public acceptance and transport connections were also taken into consideration. The SUTRA, SWMS-2D and CHAIN-2D codes were applied to analyze the safety and the geological barrier effects. The barrier consists of red clay, a smectitic mineralogical compound. The computation showed that in the Saligny vault the maximal tritium extension is kept inside due to the red clay barrier. Geotechnical engineering works were conducted to improve the properties of the upper loess layer, which resulted in lowering its sensitivity to moistening and erosion

  13. Psacoin level S intercomparison: An International code intercomparison exercise on a hypothetical safety assessment case study for radioactive waste disposal systems

    International Nuclear Information System (INIS)

    1993-06-01

    This report documents the Level S exercise of the Probabilistic System Assessment Group (PSAG). Level S is the fifth in a series of Probabilistic Code Intercomparison (PSACOIN) exercises designed to contribute to the verification of probabilistic codes and methodologies that may be used in assessing the safety of radioactive waste disposal systems and concepts. The focus of the Level S exercise lies on sensitivity analysis. Given a common data set of model output and input values the participants were asked to identify both the underlying model's most important parameters (deterministic sensitivity analysis) and the link between the distributions of the input and output values (distribution sensitivity analysis). Agreement was generally found where it was expected and the exercise has achieved its objectives in acting as a focus for testing and discussing sensitivity analysis issues. Among the outstanding issues that have been identified are: (i) that techniques for distribution sensitivity analysis are needed that avoid the problem of statistical noise; (ii) that further investigations are warranted on the most appropriate way of handling large numbers of effectively zero results generated by Monte Carlo sampling; and (iii) that methods need to be developed for demonstrating that the results of sensitivity analysis are indeed correct
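
    The Level S exercise prescribed its own common data set and procedures, which are not reproduced in the abstract; as a hedged illustration of what a deterministic sensitivity screening can look like, the sketch below ranks sampled inputs by Spearman rank correlation against a toy model output. The toy model and parameter names are assumptions, not part of the exercise.

        # Sketch of rank-correlation screening of input importance, assuming a
        # Monte Carlo sample of inputs and the corresponding model outputs.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        x = rng.uniform(size=(1000, 3))                              # three sampled parameters
        y = 5 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 0.1, 1000)   # toy model output

        for j in range(x.shape[1]):
            rho, _ = spearmanr(x[:, j], y)
            print(f"parameter {j}: Spearman rho = {rho:+.2f}")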

  14. Adaptive Lighting

    DEFF Research Database (Denmark)

    Petersen, Kjell Yngve; Søndergaard, Karin; Kongshaug, Jesper

    2015-01-01

    Adaptive Lighting Adaptive lighting is based on a partial automation of the possibilities to adjust the colour tone and brightness levels of light in order to adapt to people’s needs and desires. IT support is key to the technical developments that afford adaptive control systems. The possibilities...... offered by adaptive lighting control are created by the ways that the system components, the network and data flow can be coordinated through software so that the dynamic variations are controlled in ways that meaningfully adapt according to people’s situations and design intentions. This book discusses...... differently into an architectural body. We also examine what might occur when light is dynamic and able to change colour, intensity and direction, and when it is adaptive and can be brought into interaction with its surroundings. In short, what happens to an architectural space when artificial lighting ceases...

  15. Subband coding of digital audio signals without loss of quality

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.; Breeuwer, Marcel; van de Waal, Robbert

    1989-01-01

    A subband coding system for high quality digital audio signals is described. To achieve low bit rates at a high quality level, it exploits the simultaneous masking effect of the human ear. It is shown how this effect can be used in an adaptive bit-allocation scheme. The proposed approach has been
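
    The abstract only states that the masking effect drives an adaptive bit-allocation scheme; the following is a minimal greedy-allocation sketch under the common assumption that per-subband signal powers and masking thresholds are already supplied by a psychoacoustic model and that each extra bit lowers quantization noise by roughly 6 dB. It illustrates the idea, not the paper's actual scheme.

        # Greedy adaptive bit allocation: repeatedly give a bit to the subband with
        # the largest noise-to-mask ratio. Thresholds and powers are illustrative.
        def allocate_bits(signal_db, mask_db, total_bits, step_db=6.0):
            n = len(signal_db)
            bits = [0] * n
            for _ in range(total_bits):
                # Estimated quantization noise: signal power minus ~6 dB per allocated bit.
                nmr = [signal_db[k] - step_db * bits[k] - mask_db[k] for k in range(n)]
                k_max = max(range(n), key=lambda k: nmr[k])
                if nmr[k_max] <= 0:            # every subband already below its mask
                    break
                bits[k_max] += 1
            return bits

        print(allocate_bits([60, 45, 30, 20], [30, 28, 25, 24], total_bits=12))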

  16. SU-E-E-07: An Adaptable Approach for Education On Medical Physics at Undergraduate and Postgraduate Levels

    International Nuclear Information System (INIS)

    Miller-Clemente, R; Mendez-Perez, L

    2015-01-01

    Purpose: To contribute to the professional profile of future medical physicists, technologists and physicians, and to implement an adaptable educational strategy at both undergraduate and postgraduate levels. Methods: The Medical Physics Block of Electives (MPBE) was designed and adapted to the Program of B.S. in Physics. The conferences and practical activities were developed with participatory methods, in interdisciplinary collaboration with research institutions and hospitals engaged in projects of Research, Development and Innovation (RDI). Scientific education was implemented by means of critical analysis of scientific papers and seminars in which students debated solutions for real research problems faced by medical physicists. This approach included courses for graduates not associated with educational programs of Medical Physics (MP). Results: The implementation of the MPBE began in September 2014, with the electives of Radiation MP and Introduction to Nuclear Magnetic Resonance. Second-year students received an Introduction to MP. This initiative was validated by the departmental Methodological Workshop, which promoted the full implementation of the MPBE. Both postgraduate and undergraduate trainees participated in practical sessions with our DICOM viewer system, a local prototype for photoplethysmography and a home-made interface for ROC analysis, built with MATLAB. All these tools were designed and constructed in previous RDI projects. The collaborative supervision by the University’s researchers together with clinical medical physicists will make it possible to overcome the limitations of residency in hospitals, to reduce the workload of clinical supervisors and to develop appropriate educational activities. Conclusion: We demonstrated the feasibility of adaptable educational strategies, considering available resources. This provides an innovative path for prospective medical physicists, technologists and radiation oncologists. This strategy can be implemented in several regions

  17. SU-E-E-07: An Adaptable Approach for Education On Medical Physics at Undergraduate and Postgraduate Levels

    Energy Technology Data Exchange (ETDEWEB)

    Miller-Clemente, R [Centro de Biofisica Medica, Santiago De Cuba, Santiago de Cuba (Cuba); Universidad de Oriente, Santiago De Cuba, Santiago de Cuba (Cuba); Mendez-Perez, L [Universidad de Oriente, Santiago De Cuba, Santiago de Cuba (Cuba)

    2015-06-15

    Purpose: To contribute to the professional profile of future medical physicists, technologists and physicians, and to implement an adaptable educational strategy at both undergraduate and postgraduate levels. Methods: The Medical Physics Block of Electives (MPBE) was designed and adapted to the Program of B.S. in Physics. The conferences and practical activities were developed with participatory methods, in interdisciplinary collaboration with research institutions and hospitals engaged in projects of Research, Development and Innovation (RDI). Scientific education was implemented by means of critical analysis of scientific papers and seminars in which students debated solutions for real research problems faced by medical physicists. This approach included courses for graduates not associated with educational programs of Medical Physics (MP). Results: The implementation of the MPBE began in September 2014, with the electives of Radiation MP and Introduction to Nuclear Magnetic Resonance. Second-year students received an Introduction to MP. This initiative was validated by the departmental Methodological Workshop, which promoted the full implementation of the MPBE. Both postgraduate and undergraduate trainees participated in practical sessions with our DICOM viewer system, a local prototype for photoplethysmography and a home-made interface for ROC analysis, built with MATLAB. All these tools were designed and constructed in previous RDI projects. The collaborative supervision by the University’s researchers together with clinical medical physicists will make it possible to overcome the limitations of residency in hospitals, to reduce the workload of clinical supervisors and to develop appropriate educational activities. Conclusion: We demonstrated the feasibility of adaptable educational strategies, considering available resources. This provides an innovative path for prospective medical physicists, technologists and radiation oncologists. This strategy can be implemented in several regions

  18. Design of a Two-level Adaptive Multi-Agent System for Malaria Vectors driven by an ontology

    Directory of Open Access Journals (Sweden)

    Etang Josiane

    2007-07-01

    Full Text Available Abstract Background The understanding of heterogeneities in disease transmission dynamics as far as malaria vectors are concerned is a big challenge. Many studies tackling this problem do not find exact models to explain malaria vector propagation. Methods To solve the problem we define an Adaptive Multi-Agent System (AMAS), which is elastic and organized on two levels. This AMAS is a dynamic system in which the two levels are linked by an ontology, allowing it to function both as a reduced system and as an extended system. At the primary level, the AMAS comprises organization agents; at the secondary level, it consists of analysis agents. Its entry point, a User Interface Agent, can reproduce itself because it is given a minimum of background knowledge, and it learns appropriate "behavior" from the user in the presence of ambiguous queries and from other agents of the AMAS in other situations. Results Some of the outputs of our system are tables and diagrams showing factors such as entomological parameters of malaria transmission, percentages of malaria transmission per malaria vector, and the entomological inoculation rate. Many other parameters can be produced by the system depending on the input data. Conclusion Our approach is an intelligent one that differs from the statistical approaches sometimes used in the field; it aligns itself with distributed artificial intelligence. In terms of the fight against malaria, our system offers the opportunity to reduce the effort of human resources, who are not obliged to cover the entire territory when conducting surveys. Secondly, the AMAS can determine the presence or absence of malaria vectors even when specific data have not been collected in the geographical area. Unlike a statistical technique, the projection of the results in the field can sometimes be more general.

  19. Timeline of changes in adaptive physiological responses, at the level of energy expenditure, with progressive weight loss.

    Science.gov (United States)

    Nymo, Siren; Coutinho, Silvia R; Torgersen, Linn-Christin H; Bomo, Ola J; Haugvaldstad, Ingrid; Truby, Helen; Kulseng, Bård; Martins, Catia

    2018-05-07

    Diet-induced weight loss (WL) is associated with reduced resting and non-resting energy expenditure (EE), driven not only by changes in body composition but also potentially by adaptive thermogenesis (AT). When exactly this happens, during progressive WL, remains unknown. The aim of this study was to determine the timeline of changes in RMR and exercise-induced EE (EIEE), stemming from changes in body composition v. the presence of AT, during WL with a very-low-energy diet (VLED). In all, thirty-one adults (eighteen men) with obesity (BMI: 37 (sem 4·5) kg/m2; age: 43 (sem 10) years) underwent 8 weeks of a VLED, followed by 4 weeks of weight maintenance. Body weight and composition, RMR, net EIEE (10, 25 and 50 W) and AT (for RMR (ATRMR) and EIEE (ATEIEE)) were measured at baseline, day 3 (2 (sem 1) % WL), after 5 and 10 % WL and at weeks 9 (16 (sem 2) %) and 13 (16 (sem 1) %). RMR and fat mass were significantly reduced for the first time at 5 % WL (12 (sem 8) d) (P<0·01 and P<0·001, respectively) and EIEE at 10 % WL (32 (sem 8) d), for all levels of power (P<0·05), and sustained up to week 13. ATRMR was transiently present at 10 % WL (-460 (sem 690) kJ/d, P<0·01). A fall in RMR should be anticipated at ≥5 % WL and a reduction in EIEE at ≥10 % WL. Transient ATRMR can be expected at 10 % WL. These physiological adaptations may make progressive WL difficult and will probably contribute to relapse.

  20. Monolithic quasi-sliding-mode controller for SIDO buck converter with a self-adaptive free-wheeling current level

    International Nuclear Information System (INIS)

    Wu Xiaobo; Liu Qing; Zhao Menglian; Chen Mingyang

    2013-01-01

    An analog implementation of a novel fixed-frequency quasi-sliding-mode controller for single-inductor dual-output (SIDO) buck converter in pseudo-continuous conduction mode (PCCM) with a self-adaptive freewheeling current level (SFCL) is presented. Both small and large signal variations around the operation point are considered to achieve better transient response so as to reduce the cross-regulation of this SIDO buck converter. Moreover, an internal integral loop is added to suppress the steady-state regulation error introduced by conventional PWM-based sliding mode controllers. Instead of keeping it as a constant value, the free-wheeling current level varies according to the load condition to maintain high power efficiency and less cross-regulation at the same time. To verify the feasibility of the proposed controller, an SIDO buck converter with two regulated output voltages, 1.8 V and 3.3 V, is designed and fabricated in HEJIAN 0.35 μm CMOS process. Simulation and experiment results show that the transient time of this SIDO buck converter drops to 10 μs while the cross-regulation is reduced to 0.057 mV/mA, when its first load changes from 50 to 100 mA. (semiconductor integrated circuits)

  1. Monolithic quasi-sliding-mode controller for SIDO buck converter with a self-adaptive free-wheeling current level

    Science.gov (United States)

    Xiaobo, Wu; Qing, Liu; Menglian, Zhao; Mingyang, Chen

    2013-01-01

    An analog implementation of a novel fixed-frequency quasi-sliding-mode controller for single-inductor dual-output (SIDO) buck converter in pseudo-continuous conduction mode (PCCM) with a self-adaptive freewheeling current level (SFCL) is presented. Both small and large signal variations around the operation point are considered to achieve better transient response so as to reduce the cross-regulation of this SIDO buck converter. Moreover, an internal integral loop is added to suppress the steady-state regulation error introduced by conventional PWM-based sliding mode controllers. Instead of keeping it as a constant value, the free-wheeling current level varies according to the load condition to maintain high power efficiency and less cross-regulation at the same time. To verify the feasibility of the proposed controller, an SIDO buck converter with two regulated output voltages, 1.8 V and 3.3 V, is designed and fabricated in HEJIAN 0.35 μm CMOS process. Simulation and experiment results show that the transient time of this SIDO buck converter drops to 10 μs while the cross-regulation is reduced to 0.057 mV/mA, when its first load changes from 50 to 100 mA.

  2. Efficacy of a multidisciplinary fibromyalgia treatment adapted for women with low educational levels: a randomized controlled trial.

    Science.gov (United States)

    Castel, Antoni; Fontova, Ramon; Montull, Salvador; Periñán, Rocío; Poveda, Maria José; Miralles, Iris; Cascón-Pereira, Rosalia; Hernández, Pilar; Aragonés, Natalia; Salvat, Isabel; Castro, Sonia; Monterde, Sonia; Padrol, Anna; Sala, José; Añez, Cristóbal; Rull, Maria

    2013-03-01

    Multidisciplinary treatments of fibromyalgia (FM) have demonstrated efficacy. Nevertheless, they have been criticized for not maintaining their benefits and for not being studied for specific populations. Our objectives were to determine the efficacy of a multidisciplinary treatment for FM adapted for patients with low educational levels and to determine the maintenance of its therapeutic benefits during a long-term followup period. Inclusion criteria consisted of female sex, a diagnosis of FM (using American College of Rheumatology criteria), age between 18 and 60 years, and between 3 and 8 years of schooling. Patients were randomly assigned to 1 of the 2 treatment conditions: conventional pharmacologic treatment or multidisciplinary treatment. Outcome measures were functionality, sleep disturbances, pain intensity, catastrophizing, and psychological distress. Analysis was by intent-to-treat and missing data were replaced following the baseline observation carried forward method. One hundred fifty-five participants were recruited. No statistically significant differences regarding pretreatment measures were found between the 2 experimental groups. Overall statistics comparison showed a significant difference between the 2 groups in all of the variables studied. A multidisciplinary treatment adapted for patients with low educational levels is effective in reducing key symptoms of FM. Some improvements were maintained 1 year after completing the multidisciplinary treatment. Copyright © 2013 by the American College of Rheumatology.

  3. Cross-sectional association between ZIP code-level gentrification and homelessness among a large community-based sample of people who inject drugs in 19 US cities.

    Science.gov (United States)

    Linton, Sabriya L; Cooper, Hannah Lf; Kelley, Mary E; Karnes, Conny C; Ross, Zev; Wolfe, Mary E; Friedman, Samuel R; Jarlais, Don Des; Semaan, Salaam; Tempalski, Barbara; Sionean, Catlainn; DiNenno, Elizabeth; Wejnert, Cyprian; Paz-Bailey, Gabriela

    2017-06-20

    Housing instability has been associated with poor health outcomes among people who inject drugs (PWID). This study investigates the associations of local-level housing and economic conditions with homelessness among a large sample of PWID, which is an underexplored topic to date. PWID in this cross-sectional study were recruited from 19 large cities in the USA as part of National HIV Behavioral Surveillance. PWID provided self-reported information on demographics, behaviours and life events. Homelessness was defined as residing on the street, in a shelter, in a single room occupancy hotel, or in a car or temporarily residing with friends or relatives any time in the past year. Data on county-level rental housing unaffordability and demand for assisted housing units, and ZIP code-level gentrification (eg, index of percent increases in non-Hispanic white residents, household income, gross rent from 1990 to 2009) and economic deprivation were collected from the US Census Bureau and Department of Housing and Urban Development. Multilevel models evaluated the associations of local economic and housing characteristics with homelessness. Sixty percent (5394/8992) of the participants reported homelessness in the past year. The multivariable model demonstrated that PWID living in ZIP codes with higher levels of gentrification had higher odds of homelessness in the past year (gentrification: adjusted OR=1.11, 95% CI=1.04 to 1.17). Additional research is needed to determine the mechanisms through which gentrification increases homelessness among PWID to develop appropriate community-level interventions. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  4. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms [v2; ref status: indexed, http://f1000r.es/1td

    OpenAIRE

    Maxinder S Kanwal; Avinash S Ramesh; Lauren A Huang

    2013-01-01

    Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence and methods, including pattern recognition (e.g. neural networks) and optimization techniques (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that ...

  5. An Agent-Based Model for Zip-Code Level Diffusion of Electric Vehicles and Electricity Consumption in New York City

    Directory of Open Access Journals (Sweden)

    Azadeh Ahkamiraad

    2018-03-01

    Full Text Available Current power grids in many countries are not fully prepared for high electric vehicle (EV) penetration, and there is evidence that the construction of additional grid capacity is constantly outpaced by EV diffusion. If this situation continues, then it will compromise grid reliability and cause problems such as system overload, voltage and frequency fluctuations, and power losses. This is especially true for densely populated areas where the grid capacity is already strained with existing old infrastructure. The objective of this research is to identify the zip-code level electricity consumption that is associated with large-scale EV adoption in New York City, one of the most densely populated areas in the United States (U.S.). We fuse the Fisher and Pry diffusion model and Rogers model within the agent-based simulation to forecast zip-code level EV diffusion and the required energy capacity to satisfy the charging demand. The research outcomes will assist policy makers and grid operators in making better planning decisions on the locations and timing of investments during the transition to smarter grids and greener transportation.
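
    The agent-based fusion with the Rogers model is not spelled out in the abstract; as a hedged sketch of just the Fisher and Pry ingredient, the logistic substitution curve f(t)/(1-f(t)) = exp(2*alpha*(t - t0)) can be computed as below. The parameter values are made up for illustration and are not the paper's calibrated values.

        # Minimal Fisher-Pry substitution curve: fraction of the market captured
        # by the new technology (here, EVs) as a function of time. Illustrative
        # parameters only.
        import math

        def fisher_pry(t, alpha=0.3, t0=2025):
            """f(t) = 1 / (1 + exp(-2*alpha*(t - t0)))."""
            return 1.0 / (1.0 + math.exp(-2.0 * alpha * (t - t0)))

        for year in range(2020, 2041, 5):
            print(year, round(fisher_pry(year), 3))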

  6. The help of simulation codes in designing waste assay systems using neutron measurement methods: Application to the alpha low level waste assay system PROMETHEE 6

    Energy Technology Data Exchange (ETDEWEB)

    Mariani, A.; Passard, C.; Jallu, F. E-mail: fanny.jallu@cea.fr; Toubon, H

    2003-11-01

    The design of a specific nuclear assay system for a dedicated application begins with a phase of development, which relies on information from the literature or on knowledge resulting from experience, and on specific experimental verifications. The latter ones may require experimental devices which can be restricting in terms of deadline, cost and safety. One way generally chosen to bypass these difficulties is to use simulation codes to study particular aspects. This paper deals with the potentialities offered by the simulation in the case of a passive-active neutron (PAN) assay system for alpha low level waste characterization; this system has been carried out at the Nuclear Measurements Development Laboratory of the French Atomic Energy Commission. Due to the high number of parameters to be taken into account for its development, this is a particularly sophisticated example. Since the PAN assay system, called PROMETHEE (prompt epithermal and thermal interrogation experiment), must have a detection efficiency of more than 20% and preserve a high level of modularity for various applications, an improved version has been studied using the MCNP4 (Monte Carlo N-Particle) transport code. Parameters such as the dimensions of the assay system, of the cavity and of the detection blocks, and the thicknesses of the nuclear materials of neutronic interest have been optimised. Therefore, the number of necessary experiments was reduced.

  7. The help of simulation codes in designing waste assay systems using neutron measurement methods: Application to the alpha low level waste assay system PROMETHEE 6

    International Nuclear Information System (INIS)

    Mariani, A.; Passard, C.; Jallu, F.; Toubon, H.

    2003-01-01

    The design of a specific nuclear assay system for a dedicated application begins with a phase of development, which relies on information from the literature or on knowledge resulting from experience, and on specific experimental verifications. The latter ones may require experimental devices which can be restricting in terms of deadline, cost and safety. One way generally chosen to bypass these difficulties is to use simulation codes to study particular aspects. This paper deals with the potentialities offered by the simulation in the case of a passive-active neutron (PAN) assay system for alpha low level waste characterization; this system has been carried out at the Nuclear Measurements Development Laboratory of the French Atomic Energy Commission. Due to the high number of parameters to be taken into account for its development, this is a particularly sophisticated example. Since the PAN assay system, called PROMETHEE (prompt epithermal and thermal interrogation experiment), must have a detection efficiency of more than 20% and preserve a high level of modularity for various applications, an improved version has been studied using the MCNP4 (Monte Carlo N-Particle) transport code. Parameters such as the dimensions of the assay system, of the cavity and of the detection blocks, and the thicknesses of the nuclear materials of neutronic interest have been optimised. Therefore, the number of necessary experiments was reduced

  8. Heteroresistance at the single-cell level: adapting to antibiotic stress through a population-based strategy and growth-controlled interphenotypic coordination.

    Science.gov (United States)

    Wang, Xiaorong; Kang, Yu; Luo, Chunxiong; Zhao, Tong; Liu, Lin; Jiang, Xiangdan; Fu, Rongrong; An, Shuchang; Chen, Jichao; Jiang, Ning; Ren, Lufeng; Wang, Qi; Baillie, J Kenneth; Gao, Zhancheng; Yu, Jun

    2014-02-11

    Heteroresistance refers to phenotypic heterogeneity of microbial clonal populations under antibiotic stress, and it has been thought to be an allocation of a subset of "resistant" cells for surviving in higher concentrations of antibiotic. The assumption fits the so-called bet-hedging strategy, where a bacterial population "hedges" its "bet" on different phenotypes to be selected by unpredicted environment stresses. To test this hypothesis, we constructed a heteroresistance model by introducing a blaCTX-M-14 gene (coding for a cephalosporin hydrolase) into a sensitive Escherichia coli strain. We confirmed heteroresistance in this clone and that a subset of the cells expressed more hydrolase and formed more colonies in the presence of ceftriaxone (exhibited stronger "resistance"). However, subsequent single-cell-level investigation by using a microfluidic device showed that a subset of cells with a distinguishable phenotype of slowed growth and intensified hydrolase expression emerged, and they were not positively selected but increased their proportion in the population with ascending antibiotic concentrations. Therefore, heteroresistance--the gradually decreased colony-forming capability in the presence of antibiotic--was a result of a decreased growth rate rather than of selection for resistant cells. Using a mock strain without the resistance gene, we further demonstrated the existence of two nested growth-centric feedback loops that control the expression of the hydrolase and maximize population growth in various antibiotic concentrations. In conclusion, phenotypic heterogeneity is a population-based strategy beneficial for bacterial survival and propagation through task allocation and interphenotypic collaboration, and the growth rate provides a critical control for the expression of stress-related genes and an essential mechanism in responding to environmental stresses. Heteroresistance is essentially phenotypic heterogeneity, where a population

  9. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated with the example of the WIMSD code, which belongs to the most popular tools for reactor calculations. Most of the approaches discussed here can easily be modified for any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  10. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
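
    GoldSim's actual DLL calling convention is not shown in the record; the sketch below only illustrates the same coupling pattern in Python: accept a list of inputs, write an input file, run the external application, and read its outputs back. The file names, format and executable name are hypothetical.

        # Sketch of the wrapper pattern described above (hypothetical file names
        # and executable; not the DLLExternalCode API itself).
        import subprocess
        from pathlib import Path

        def run_external_code(inputs, workdir="run", exe="external_model"):
            work = Path(workdir)
            work.mkdir(exist_ok=True)
            # 1. Write the inputs in whatever format the external code expects.
            (work / "model.in").write_text("\n".join(str(v) for v in inputs))
            # 2. Run the external application and wait for it to finish.
            subprocess.run([exe, "model.in", "model.out"], cwd=work, check=True)
            # 3. Read back the outputs the external code produced.
            return [float(line) for line in (work / "model.out").read_text().split()]

        # Example (requires a real executable named "external_model"):
        # outputs = run_external_code([1.0, 2.5, 0.3])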

  11. Time course of dynamic range adaptation in the auditory nerve

    Science.gov (United States)

    Wang, Grace I.; Dean, Isabel; Delgutte, Bertrand

    2012-01-01

    Auditory adaptation to sound-level statistics occurs as early as in the auditory nerve (AN), the first stage of neural auditory processing. In addition to firing rate adaptation characterized by a rate decrement dependent on previous spike activity, AN fibers show dynamic range adaptation, which is characterized by a shift of the rate-level function or dynamic range toward the most frequently occurring levels in a dynamic stimulus, thereby improving the precision of coding of the most common sound levels (Wen B, Wang GI, Dean I, Delgutte B. J Neurosci 29: 13797–13808, 2009). We investigated the time course of dynamic range adaptation by recording from AN fibers with a stimulus in which the sound levels periodically switch from one nonuniform level distribution to another (Dean I, Robinson BL, Harper NS, McAlpine D. J Neurosci 28: 6430–6438, 2008). Dynamic range adaptation occurred rapidly, but its exact time course was difficult to determine directly from the data because of the concomitant firing rate adaptation. To characterize the time course of dynamic range adaptation without the confound of firing rate adaptation, we developed a phenomenological “dual adaptation” model that accounts for both forms of AN adaptation. When fitted to the data, the model predicts that dynamic range adaptation occurs as rapidly as firing rate adaptation, over 100–400 ms, and the time constants of the two forms of adaptation are correlated. These findings suggest that adaptive processing in the auditory periphery in response to changes in mean sound level occurs rapidly enough to have significant impact on the coding of natural sounds. PMID:22457465

  12. Ozone levels in the Empty Quarter of Saudi Arabia--application of adaptive neuro-fuzzy model.

    Science.gov (United States)

    Rahman, Syed Masiur; Khondaker, A N; Khan, Rouf Ahmad

    2013-05-01

    In arid regions, primary pollutants may contribute to the increase of ozone levels and cause negative effects on biotic health. This study investigates the use of adaptive neuro-fuzzy inference system (ANFIS) for ozone prediction. The initial fuzzy inference system is developed by using fuzzy C-means (FCM) and subtractive clustering (SC) algorithms, which determines the important rules, increases generalization capability of the fuzzy inference system, reduces computational needs, and ensures speedy model development. The study area is located in the Empty Quarter of Saudi Arabia, which is considered as a source of huge potential for oil and gas field development. The developed clustering algorithm-based ANFIS model used meteorological data and derived meteorological data, along with NO and NO₂ concentrations and their transformations, as inputs. The root mean square error and Willmott's index of agreement of the FCM- and SC-based ANFIS models are 3.5 ppbv and 0.99, and 8.9 ppbv and 0.95, respectively. Based on the analysis of the performance measures and regression error characteristic curves, it is concluded that the FCM-based ANFIS model outperforms the SC-based ANFIS model.
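
    The ANFIS training itself is beyond a short example; the sketch below only shows a compact fuzzy C-means step of the kind used to place the initial clusters (and hence the initial fuzzy rules) before ANFIS tuning. The data, cluster count and fuzzifier value are assumptions for illustration, not the study's settings.

        # Compact fuzzy C-means sketch (NumPy only). Illustrative of the clustering
        # stage used to seed an initial fuzzy inference system.
        import numpy as np

        def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            n = X.shape[0]
            U = rng.dirichlet(np.ones(c), size=n)            # membership matrix (n x c)
            for _ in range(iters):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / (d ** (2.0 / (m - 1.0)))           # inverse-distance memberships
                U /= U.sum(axis=1, keepdims=True)            # rows sum to one
            return centers, U

        X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + [4, 4]])
        centers, U = fuzzy_c_means(X, c=2)
        print(centers)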

  13. Massive weight loss decreases corticosteroid-binding globulin levels and increases free cortisol in healthy obese patients: an adaptive phenomenon?

    Science.gov (United States)

    Manco, Melania; Fernández-Real, José M; Valera-Mora, Maria E; Déchaud, Henri; Nanni, Giuseppe; Tondolo, Vincenzo; Calvani, Menotti; Castagneto, Marco; Pugeat, Michel; Mingrone, Geltrude

    2007-06-01

    Obesity, insulin resistance, and weight loss have been associated with changes in the hypothalamic-pituitary-adrenal (HPA) axis. So far, no conclusive data relating to this association are available. In this study, we aim to investigate the effects of massive weight loss on cortisol suppressibility, cortisol-binding globulin (CBG), and free cortisol index (FCI) in formerly obese women. Ten glucose-normotolerant, fertile, obese women (BMI >40 kg/m2, aged 38.66 +/- 13.35 years) were studied before and 2 years after biliopancreatic diversion (BPD) when stable weight was achieved and were compared with age-matched healthy volunteers. Cortisol suppression was evaluated by a 4-mg intravenous dexamethasone suppression test (DEX-ST). FCI was calculated as the cortisol-to-CBG ratio. Insulin sensitivity was measured by an euglycemic-hyperinsulinemic clamp, and insulin secretion was measured by a C-peptide deconvolution method. No difference was found in cortisol suppression after DEX-ST before or after weight loss. A decrease in ACTH was significantly greater in control subjects than in obese (P = 0.05) and postobese women. In postobese subjects, an increase of free cortisol was associated with a simultaneous decrease in CBG levels, which might be an adaptive phenomenon relating to environmental changes. This topic, not addressed before, adds new insight into the complex mechanisms linking HPA activity to obesity.

  14. High-Level Design Space and Flexibility Exploration for Adaptive, Energy-Efficient WCDMA Channel Estimation Architectures

    Directory of Open Access Journals (Sweden)

    Zoltán Endre Rákossy

    2012-01-01

    Full Text Available Due to the fast changing wireless communication standards coupled with strict performance constraints, the demand for flexible yet high-performance architectures is increasing. To tackle the flexibility requirement, software-defined radio (SDR) is emerging as an obvious solution, where the underlying hardware implementation is tuned via software layers to the varied standards depending on power-performance and quality requirements leading to adaptable, cognitive radio. In this paper, we conduct a case study for representatives of two complexity classes of WCDMA channel estimation algorithms and explore the effect of flexibility on energy efficiency using different implementation options. Furthermore, we propose new design guidelines for both highly specialized architectures and highly flexible architectures using high-level synthesis, to enable the required performance and flexibility to support multiple applications. Our experiments with various design points show that the resulting architectures meet the performance constraints of WCDMA and a wide range of options are offered for tuning such architectures depending on power/performance/area constraints of SDR.

  15. Changes in sleep quality and levels of psychological distress during the adaptation to university: The role of childhood adversity.

    Science.gov (United States)

    John-Henderson, Neha A; Williams, Sarah E; Brindle, Ryan C; Ginty, Annie T

    2018-05-25

    Stress-related sleep disturbances are common, and poor sleep quality can negatively affect health. Previous work indicates that early-life adversity is associated with compromised sleep quality later in life, but it is unknown whether it predicts greater declines in sleep quality during stressful life transitions. We propose and test a conceptual model whereby individuals who reported experiencing greater levels of child maltreatment would experience greater psychological distress during a stressful life transition, which in turn would contribute to greater declines in sleep quality, relative to their quality of sleep before the stressful transition. Controlling for potential confounding variables (e.g., age, gender), structural equation modelling demonstrated that psychological distress experienced during a stressful transition (i.e., beginning life at university) mediated the relationship between childhood emotional neglect and changes in sleep quality. The hypothesized model demonstrated a good overall fit to the data, χ2(15) = 17.69, p = .279, CFI = .99, TLI = .97, SRMR = .04, RMSEA = .04 (90% CI), with a significant path to changes in sleep quality (β = .31) during the stressful transition. Future research should aim to understand the specific stressors in the university environment that are most challenging to individuals who faced early-life emotional maltreatment. These findings will help inform interventions to facilitate adaptation to a new environment and improve sleep quality for these university students. © 2018 The British Psychological Society.

  16. Relation of knowledge and level of education to the rationality of self-medication on childhood diarrhea on the Code River banks in Jogoyudan, Jetis, Yogyakarta

    Science.gov (United States)

    Dania, H.; Ihsan, M. N.

    2017-11-01

    Self-medication is used as an alternative to reduce the severity of diarrhea. Optimal treatment can be achieved by increasing the rationality of self-medication for diarrhea, which requires good knowledge about self-medication, itself influenced by the level of education. The aim of this study was to determine the relationship of knowledge and education level to the rationality of self-medication for childhood diarrhea around the Code River in Jogoyudan, Jetis, Yogyakarta. The study used a cross-sectional analytical observational design. The subjects were mothers with children aged 2-11 years who had experienced diarrhea and had been self-medicated. Questionnaires were used to assess the rationality of the parents' self-medication of children's diarrhea; the respondents were asked to report the indication, correct drug, dose, dosing interval and duration of drug administration. Data were analyzed using chi-square. Of 40 respondents, 14 (35%) performed rational self-medication of children's diarrhea and 26 (65%) did not. The bivariate tests obtained a chi-square value of 9.808 (> 3.841) with a p value of 0.002 (< 0.05), and a p value of 0.000 (< 0.05) for the relationship between knowledge and rationality of self-medication. The conclusion of this study is that there is a correlation between knowledge and level of education and the rationality of self-medication for childhood diarrhea on the Code River banks in Jogoyudan, Jetis, Yogyakarta.

  17. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms [v2; ref status: indexed, http://f1000r.es/1td

    Directory of Open Access Journals (Sweden)

    Maxinder S Kanwal

    2013-11-01

    Full Text Available Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence and methods, including pattern recognition (e.g. neural networks) and optimization techniques (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima, but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates.
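
    The exact pseudoderivative formula is in the paper, not the abstract; the sketch below only illustrates the general idea of adapting the mutation rate from the change in best fitness across successive generations, raising it when progress stalls so the search can escape local optima. The adjustment constants and bounds are illustrative assumptions.

        # Adaptive mutation rate driven by the fitness difference between
        # successive generations (illustrative constants, maximization assumed).
        def adapt_mutation_rate(rate, best_prev, best_curr,
                                low=0.01, high=0.5, up=1.5, down=0.9):
            """Increase the rate when fitness stops improving, decrease it otherwise."""
            improvement = best_curr - best_prev
            if improvement <= 1e-9:            # stalled: likely stuck in a local optimum
                rate = min(high, rate * up)
            else:                              # improving: anneal the rate back down
                rate = max(low, rate * down)
            return rate

        rate = 0.05
        history = [1.0, 1.2, 1.2, 1.2, 1.4]    # toy best-fitness trajectory
        for prev, curr in zip(history, history[1:]):
            rate = adapt_mutation_rate(rate, prev, curr)
            print(round(rate, 3))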

  18. Public Health and Climate Change Adaptation at the Federal Level: One Agency’s Response to Executive Order 13514

    Science.gov (United States)

    Schramm, Paul J.; Luber, George

    2014-01-01

    Climate change will likely have adverse human health effects that require federal agency involvement in adaptation activities. In 2009, President Obama issued Executive Order 13514, Federal Leadership in Environmental, Energy, and Economic Performance. The order required federal agencies to develop and implement climate change adaptation plans. The Centers for Disease Control and Prevention (CDC), as part of a larger Department of Health and Human Services response to climate change, is developing such plans. We provide background on Executive Orders, outline tenets of climate change adaptation, discuss public health adaptation planning at both the Department of Health and Human Services and the CDC, and outline possible future CDC efforts. We also consider how these activities may be better integrated with other adaptation activities that manage emerging health threats posed by climate change. PMID:24432931

  19. Liberamente tratto da... Storie, codici, tragitti, mediazioni tra letteratura e cinema Loosely adapted from… Stories, codes, travels, mediations between literature and cinema

    Directory of Open Access Journals (Sweden)

    Donata Meneghelli

    2012-12-01

    Full Text Available

    Starting from a survey of the most recent critical literature on the topic, the essay proposes a critical reflection on adaptation, on the borders that delimit this practice and, in particular, on the specific questions raised by the cinematic adaptation of literary texts, within the wider field of relationships between literature and cinema. These questions concern first of all the adaptation of canonical texts, the 'great' texts of the Western tradition: it is above all in this context that the notion of fidelity is called into play. 'Fidelity' is an apparently neutral term which in fact always conceals an implicit hierarchy and a defensive attitude towards a supposed axiological superiority of literature as a 'high' form, endowed with a dignity and a creative independence that cinema, as a popular and mass art, would lack.

    Against this implicit axiology, denounced well in advance by André Bazin, the essay aims to show the complexity and richness of the paths between literature and cinema. Taking as an example a little-known film, the one Mauro Bolognini made in 1962 from the novel Senilità, the reflection concludes by exploring some of the many implications that can be traced in the notion, proposed by Linda Hutcheon, of the 'unstable identity of a story'.

    Starting from a survey of the most recent studies on the topic, the essay proposes a critical reflection on adaptation and the (possible borders of this cultural practice. It particularly focuses on the issues raised by cinematic adaptation of literary texts, a phenomenon which needs to be contextualized in the larger field of relationships between literature and cinema. Such issues, however, do not emerge in the same way for any text: they are especially urgent when we tackle the cinematic adaptation of

  20. Adaptive Management for Decision Making at the Program and Project Levels of the Missouri River Recovery Program

    Energy Technology Data Exchange (ETDEWEB)

    Thom, Ronald M.; Anderson, Michael G.; Tyre, Drew; Fleming, Craig A.

    2009-02-28

    The paper, “Adaptive Management: Background for Stakeholders in the Missouri River Recovery Program,” introduced the concept of adaptive management (AM), its principles and how they relate to one another, how AM is applied, and challenges for its implementation. This companion paper describes how the AM principles were applied to specific management actions within the Missouri River Recovery Program to facilitate understanding, decision-making, and stakeholder engagement. For context, we begin with a brief synopsis of the Missouri River Recovery Program (MRRP) and the strategy for implementing adaptive management (AM) within the program; we finish with an example of AM in action within Phase I of the MRRP.