WorldWideScience

Sample records for maximum coding level

  1. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

    Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite-state Markov process modelling. Performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
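
    A minimal sketch of the conditional-ML ingredient named above, iterative least squares with projection (ILSP): alternately re-estimate the channel by least squares given the current symbol estimates, then re-estimate the symbols and project them onto the finite alphabet. The linear model Y = HS + noise, the BPSK alphabet, and the random initialisation are illustrative assumptions, not the paper's exact system model.

    ```python
    # ILSP sketch for blind joint channel/symbol estimation (toy BPSK model).
    import numpy as np

    rng = np.random.default_rng(0)
    nr, nt, N = 4, 2, 50                        # receive antennas, transmit antennas, block length
    H_true = rng.normal(size=(nr, nt))          # unknown channel
    S_true = rng.choice([-1.0, 1.0], size=(nt, N))   # unknown BPSK symbol matrix
    Y = H_true @ S_true + 0.1 * rng.normal(size=(nr, N))

    S = rng.choice([-1.0, 1.0], size=(nt, N))   # random start
    for _ in range(20):
        # least-squares channel estimate given the current symbol estimate
        H = Y @ np.linalg.pinv(S)
        # least-squares symbol estimate given the channel, projected onto the alphabet
        S = np.where(np.linalg.pinv(H) @ Y >= 0, 1.0, -1.0)

    # note: blind estimation recovers (H, S) only up to an inherent
    # sign/permutation ambiguity, so we check the residual instead
    print(f"residual after ILSP: {np.linalg.norm(Y - H @ S):.3f}")
    ```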

  2. The maximum number of minimal codewords in long codes

    DEFF Research Database (Denmark)

    Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.

    2013-01-01

    Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981 by...

  3. Multi-stage decoding for multi-level block modulation codes

    Science.gov (United States)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
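
    A toy illustration (not from the paper) of the two-stage idea: a length-3 repetition code protects the set-partitioning bit of a 4-PAM constellation, a single-parity-check code protects the bit within each subset, and each stage decodes its component code with the previous stage's decision fixed. All parameters are assumptions chosen for brevity.

    ```python
    # Two-stage decoding of a toy 2-level block modulation code over 4-PAM.
    import itertools
    import numpy as np

    POINT = {(0, 0): -3.0, (0, 1): 1.0, (1, 0): -1.0, (1, 1): 3.0}  # (b1, b2) -> 4-PAM point
    SUBSET = {0: (-3.0, 1.0), 1: (-1.0, 3.0)}                       # b1 -> subset (intra-distance 4)
    REP = [(0, 0, 0), (1, 1, 1)]                                    # level-1 component code
    SPC = [c for c in itertools.product((0, 1), repeat=3) if sum(c) % 2 == 0]  # level-2 code

    def encode(c1, c2):
        return np.array([POINT[b1, b2] for b1, b2 in zip(c1, c2)])

    def decode(y):
        # Stage 1: decode level-1 bits; each position uses the best point in its subset.
        c1 = min(REP, key=lambda c: sum(min((yi - p) ** 2 for p in SUBSET[b])
                                        for yi, b in zip(y, c)))
        # Stage 2: decode level-2 bits with the level-1 decision fixed.
        c2 = min(SPC, key=lambda c: sum((yi - POINT[b1, b2]) ** 2
                                        for yi, b1, b2 in zip(y, c1, c)))
        return c1, c2

    rng = np.random.default_rng(1)
    y = encode((1, 1, 1), (1, 0, 1)) + 0.5 * rng.normal(size=3)
    print(decode(y))  # recovers ((1, 1, 1), (1, 0, 1)) at moderate noise
    ```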

  4. Multi-stage decoding of multi-level modulation codes

    Science.gov (United States)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

    Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.

  5. Image coding based on maximum entropy partitioning for identifying ...

    Indian Academy of Sciences (India)

    A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization ...

  6. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Alouini, Mohamed-Slim; Al-Naffouri, Tareq Y.

    2014-01-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a

  7. Lower Bounds on the Maximum Energy Benefit of Network Coding for Wireless Multiple Unicast

    Directory of Open Access Journals (Sweden)

    Matsumoto Ryutaroh

    2010-01-01

    Full Text Available We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum of the ratio of the minimum energy required by routing and network coding solutions, where the maximum is over all configurations. It is shown that if coding and routing solutions are using the same transmission range, the benefit in d-dimensional networks is at least . Moreover, it is shown that if the transmission range can be optimized for routing and coding individually, the benefit in 2-dimensional networks is at least 3. Our results imply that codes following a decode-and-recombine strategy are not always optimal regarding energy efficiency.
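
    A minimal restatement of the benefit definition used in the abstract (the value of the d-dimensional bound is elided in the record above, so only the two-dimensional figure is repeated here):

    ```latex
    % Energy benefit of network coding: worst-case (over configurations)
    % ratio of minimum routing energy to minimum network-coding energy.
    \[
      B \;=\; \max_{\text{configurations}}
        \frac{E^{\min}_{\text{routing}}}{E^{\min}_{\text{coding}}},
      \qquad
      B \,\ge\, 3 \quad \text{in two dimensions, with individually optimised ranges.}
    \]
    ```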

  8. Lower Bounds on the Maximum Energy Benefit of Network Coding for Wireless Multiple Unicast

    NARCIS (Netherlands)

    Goseling, J.; Matsumoto, R.; Uyematsu, T.; Weber, J.H.

    2010-01-01

    We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum of the ratio of the minimum energy required by routing and network coding

  9. Lower bounds on the maximum energy benefit of network coding for wireless multiple unicast

    NARCIS (Netherlands)

    Goseling, Jasper; Matsumoto, Ryutaroh; Uyematsu, Tomohiko; Weber, Jos H.

    2010-01-01

    We consider the energy savings that can be obtained by employing network coding instead of plain routing in wireless multiple unicast problems. We establish lower bounds on the benefit of network coding, defined as the maximum of the ratio of the minimum energy required by routing and network coding

  10. Space-Time Chip Equalization for Maximum Diversity Space-Time Block Coded DS-CDMA Downlink Transmission

    Directory of Open Access Journals (Sweden)

    Petré Frederik

    2004-01-01

    Full Text Available In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input multiple-output (MIMO) communication techniques can result in a significant increase in capacity. This paper focuses on space-time block coding (STBC) techniques, and aims at combining STBC techniques with the original single-antenna DS-CDMA downlink scheme. This results in the so-called space-time block coded DS-CDMA downlink schemes, many of which have been presented in the past. We focus on a new scheme that enables both the maximum multiantenna diversity and the maximum multipath diversity. Although this maximum diversity can only be collected by maximum likelihood (ML) detection, we pursue suboptimal detection by means of space-time chip equalization, which lowers the computational complexity significantly. To design the space-time chip equalizers, we also propose efficient pilot-based methods. Simulation results show improved performance over the space-time RAKE receiver for the space-time block coded DS-CDMA downlink schemes that have been proposed for the UMTS and IS-2000 W-CDMA standards.

  11. GRABGAM: A Gamma Analysis Code for Ultra-Low-Level HPGe SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    Winn, W.G.

    1999-07-28

    The GRABGAM code has been developed for analysis of ultra-low-level HPGe gamma spectra. The code employs three different size filters for the peak search, where the largest filter provides best sensitivity for identifying low-level peaks and the smallest filter has the best resolution for distinguishing peaks within a multiplet. GRABGAM basically generates an integral probability F-function for each singlet or multiplet peak analysis, bypassing the usual peak fitting analysis for a differential f-function probability model. Because F is defined by the peak data, statistical limitations for peak fitting are avoided; however, the F-function does provide generic values for peak centroid, full width at half maximum, and tail that are consistent with a Gaussian formalism. GRABGAM has successfully analyzed over 10,000 customer samples, and it interfaces with a variety of supplementary codes for deriving detector efficiencies, backgrounds, and quality checks.
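
    A minimal numerical sketch of the integral-probability idea described above: build an empirical F-function from background-subtracted peak counts and read a Gaussian-consistent centroid and FWHM from its quantiles. The synthetic peak, the crude background removal, and the quantile constants are assumptions for illustration, not GRABGAM internals.

    ```python
    # Empirical F-function of a peak region; centroid and FWHM from quantiles.
    import numpy as np

    rng = np.random.default_rng(2)
    chan = np.arange(100)
    counts = 1000 * np.exp(-0.5 * ((chan - 50) / 3.0) ** 2) + rng.poisson(5, 100)
    net = np.clip(counts - np.median(counts), 0, None)   # crude background removal

    F = np.cumsum(net) / net.sum()                       # empirical integral probability
    def quantile(q):                                     # channel where F crosses q
        return np.interp(q, F, chan)

    centroid = quantile(0.5)
    # For a Gaussian, the half-maximum points sit near the 0.1195/0.8805 quantiles.
    fwhm = quantile(0.8805) - quantile(0.1195)
    print(f"centroid ~ {centroid:.2f}, FWHM ~ {fwhm:.2f} (true: 50, {2.355 * 3:.2f})")
    ```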

  12. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  13. 40 CFR 141.62 - Maximum contaminant levels for inorganic contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant levels for inorganic contaminants. 141.62 Section 141.62 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Water Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.62 Maximum...

  14. 40 CFR 141.61 - Maximum contaminant levels for organic contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant levels for organic contaminants. 141.61 Section 141.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.61 Maximum contaminant...

  15. Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code

    Directory of Open Access Journals (Sweden)

    Adel Ahmadi

    2015-01-01

    Full Text Available Motivated by the decompositions of sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block codes (QOSTBC). The proposed algorithm, with a relatively simple design, exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. Search spaces of symbols are substantially reduced by employing the independent parts and statistics of noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder's performance is superior to many of the recently published state-of-the-art solutions in terms of complexity level. More specifically, it was possible to verify that application of the new algorithm with 1024-QAM would decrease the computational complexity compared to the state-of-the-art solution with 16-QAM.

  16. L-type calcium channels refine the neural population code of sound level

    Science.gov (United States)

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  17. MAXED, a computer code for the deconvolution of multisphere neutron spectrometer data using the maximum entropy method

    International Nuclear Information System (INIS)

    Reginatto, M.; Goldhagen, P.

    1998-06-01

    The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request
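
    A minimal sketch of maximum-entropy unfolding in the spirit described above: choose the spectrum f that maximises entropy relative to a default spectrum while reproducing the sphere readings m = R f. The response matrix, default spectrum, and quadratic penalty are illustrative assumptions, not the actual MAXED algorithm or data.

    ```python
    # Maximum-entropy unfolding sketch: maximise relative (Skilling) entropy
    # subject to reproducing the multisphere readings, via a penalty term.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    n_spheres, n_bins = 6, 12
    R = rng.uniform(0.1, 1.0, size=(n_spheres, n_bins))   # detector response functions
    f_true = rng.uniform(0.5, 2.0, size=n_bins)           # "unknown" spectrum
    m = R @ f_true                                        # simulated sphere readings
    f_def = np.ones(n_bins)                               # default (prior) spectrum

    def objective(f):
        entropy = np.sum(f - f_def - f * np.log(f / f_def))  # relative entropy
        misfit = np.sum((R @ f - m) ** 2)                    # data constraint as penalty
        return -entropy + 1e3 * misfit

    res = minimize(objective, f_def, bounds=[(1e-9, None)] * n_bins)
    print("unfolded spectrum:", np.round(res.x, 2))
    ```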

  18. 40 CFR 141.63 - Maximum contaminant levels (MCLs) for microbiological contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant levels (MCLs) for microbiological contaminants. 141.63 Section 141.63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Water Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.63 Maximum...

  19. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n^2), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction, however, requires a special noise model with independent bit-flip and phase-flip errors. Second, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ^3), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.

  20. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2014-09-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a definition of delay for IDNC allows a more equitable distribution of the delays between the different receivers and thus a better Quality of Service (QoS). In order to solve this problem, we first derive the expressions for the probability distributions of maximum decoding delay increments. Given these expressions, we formulate the problem as a maximum weight clique problem in the IDNC graph. Although this problem is known to be NP-hard, we design a greedy algorithm to perform effective packet selection. Through extensive simulations, we compare the sum decoding delay and the max decoding delay experienced when applying the policies to minimize the sum decoding delay and our policy to reduce the max decoding delay. Simulation results show that our policy gives a good agreement among all the delay aspects in all situations and outperforms the sum decoding delay policy in effectively minimizing the sum decoding delay when the channel conditions become harsher. They also show that our definition of delay significantly improves the number of served receivers when they are subject to strict delay constraints.
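
    A minimal sketch of the greedy packet-selection step described above: repeatedly add the heaviest vertex of the IDNC graph that remains adjacent to everything already chosen, so the result is a clique. The toy graph, the weights, and the weighting rule are illustrative assumptions, not the paper's exact construction.

    ```python
    # Greedy maximum-weight clique selection on an IDNC-style conflict graph.
    def greedy_max_weight_clique(vertices, weight, adjacent):
        """vertices: iterable; weight: dict v->float; adjacent: dict v->set of neighbours."""
        clique = []
        candidates = set(vertices)
        while candidates:
            v = max(candidates, key=lambda u: weight[u])   # heaviest feasible vertex
            clique.append(v)
            candidates &= adjacent[v]   # keep only vertices adjacent to all chosen so far
        return clique

    # Each vertex is a (receiver, wanted-packet) pair; an edge means the two
    # packets can be XOR-combined so both receivers decode instantly.
    adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
    w = {1: 0.9, 2: 0.5, 3: 0.8, 4: 0.7}
    print(greedy_max_weight_clique(adj, w, adj))  # -> [1, 3, 2]
    ```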

  1. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
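
    A minimal sketch of the drift-distribution idea described above, under the simplifying assumption that each transmitted bit independently suffers at most one insertion (probability pi) or deletion (probability pd): the transmitter-receiver drift is then a random walk whose distribution follows by repeated convolution, and state-space limits can be chosen to capture almost all of its mass.

    ```python
    # Drift distribution by convolution, then state-space limits from quantiles.
    import numpy as np

    pi_, pd_, n = 0.01, 0.01, 200
    step = np.array([pd_, 1 - pi_ - pd_, pi_])   # drift change -1, 0, +1 per bit
    dist = np.array([1.0])                       # drift is 0 before transmission
    for _ in range(n):
        dist = np.convolve(dist, step)           # one more bit's worth of drift

    drift = np.arange(-n, n + 1)
    # choose symmetric state-space limits that capture e.g. 99.99% of the mass
    cum = np.cumsum(dist)
    lo = drift[np.searchsorted(cum, 0.00005)]
    hi = drift[np.searchsorted(cum, 0.99995)]
    print(f"P(drift=0)={dist[n]:.3f}; keep states [{lo}, {hi}]")
    ```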

  2. Multi-level trellis coded modulation and multi-stage decoding

    Science.gov (United States)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  3. On decoding of multi-level MPSK modulation codes

    Science.gov (United States)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically and is suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  4. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  5. Enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding for four-level holographic data storage systems

    Science.gov (United States)

    Kong, Gyuyeol; Choi, Sooyong

    2017-09-01

    An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, found through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation results show that the proposed four-ary modulation code has more than 1 dB of gain compared with the conventional four-ary modulation code.

  6. Two-Level Semantics and Code Generation

    DEFF Research Database (Denmark)

    Nielson, Flemming; Nielson, Hanne Riis

    1988-01-01

    A two-level denotational metalanguage that is suitable for defining the semantics of Pascal-like languages is presented. The two levels allow for an explicit distinction between computations taking place at compile-time and computations taking place at run-time. While this distinction is perhaps not absolutely necessary for describing the input-output semantics of programming languages, it is necessary when issues such as data flow analysis and code generation are considered. For an example stack-machine, the authors show how to generate code for the run-time computations and still perform the compile...

  7. 40 CFR 141.51 - Maximum contaminant level goals for inorganic contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant level goals for inorganic contaminants. 141.51 Section 141.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Maximum Contaminant Level...

  8. 40 CFR 141.50 - Maximum contaminant level goals for organic contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant level goals for organic contaminants. 141.50 Section 141.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Maximum Contaminant Level...

  9. 40 CFR 141.52 - Maximum contaminant level goals for microbiological contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant level goals for microbiological contaminants. 141.52 Section 141.52 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Maximum Contaminant Level...

  10. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  11. Lossy/lossless coding of bi-level images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1997-01-01

    Summary form only given. We present improvements to a general type of lossless, lossy, and refinement coding of bi-level images (Martins and Forchhammer, 1996). Loss is introduced by flipping pixels. The pixels are coded using arithmetic coding of conditional probabilities obtained using a template, as is known from JBIG and proposed in JBIG-2 (Martins and Forchhammer). Our new state-of-the-art results are obtained using the more general free tree instead of a template. Also we introduce multiple refinement template coding. The lossy algorithm is analogous to the greedy `rate...

  12. Maximum penetration level of distributed generation without violating voltage limits

    NARCIS (Netherlands)

    Morren, J.; Haan, de S.W.H.

    2009-01-01

    Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a

  13. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

    Siret, Nicolas; Wipliez, Matthieu; Nezan, Jean François; Palumbo, Francesca

    2012-01-01

    High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulties of compiling input imperative languages into efficient hardware code. Moreover, the hardware code generated by HLS tools is usually target-dependent and at a low level of abstraction (i.e. gate-level...

  14. SYMBOL LEVEL DECODING FOR DUO-BINARY TURBO CODES

    Directory of Open Access Journals (Sweden)

    Yogesh Beeharry

    2017-05-01

    Full Text Available This paper investigates the performance of three different symbol level decoding algorithms for Duo-Binary Turbo codes. Explicit details of the computations involved in the three decoding techniques, and a computational complexity analysis are given. Simulation results with different couple lengths, code-rates, and QPSK modulation reveal that the symbol level decoding with bit-level information outperforms the symbol level decoding by 0.1 dB on average in the error floor region. Moreover, a complexity analysis reveals that symbol level decoding with bit-level information reduces the decoding complexity by 19.6 % in terms of the total number of computations required for each half-iteration as compared to symbol level decoding.

  15. Three-level grid-connected photovoltaic inverter with maximum power point tracking

    International Nuclear Information System (INIS)

    Tsang, K.M.; Chan, W.L.

    2013-01-01

    Highlight: ► This paper reports a novel 3-level grid-connected photovoltaic inverter. ► The inverter features maximum power point tracking and grid current shaping. ► The inverter can act as an active filter and a renewable power source. - Abstract: This paper presents a systematic way of designing a control scheme for a grid-connected photovoltaic (PV) inverter featuring maximum power point tracking (MPPT) and grid current shaping. Unlike conventional designs, only four power switches are required to achieve three output levels, and it is not necessary to use any phase-locked-loop circuitry. For the proposed scheme, a simple integral controller has been designed for tracking the maximum power point of a PV array based on an improved extremum seeking control method. For the grid-connected inverter, a current loop controller and a voltage loop controller have been designed. The current loop controller is designed to shape the inverter output current, while the voltage loop controller maintains the capacitor voltage at a certain level and provides a reference inverter output current for the PV inverter without affecting the maximum power point of the PV array. Experimental results are included to demonstrate the effectiveness of the tracking and control scheme.
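
    A minimal sketch of hill-climbing maximum power point tracking (perturb-and-observe, a simple relative of extremum seeking): nudge the operating voltage, keep the direction if power rose, reverse it otherwise. The toy PV curve is an illustrative assumption, not the paper's improved extremum-seeking controller.

    ```python
    # Perturb-and-observe MPPT on a toy PV power curve.
    def pv_power(v):                     # toy PV curve with a single maximum near 30 V
        return max(0.0, v * (8.0 - 0.003 * v ** 2))

    v, dv = 20.0, 0.5                    # initial operating point and perturbation size
    p_prev = pv_power(v)
    for _ in range(100):
        v += dv
        p = pv_power(v)
        if p < p_prev:                   # power dropped: reverse the perturbation direction
            dv = -dv
        p_prev = p
    print(f"settled near v = {v:.1f} V, p = {p_prev:.1f} W")
    ```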

  16. Research on coding and decoding method for digital levels

    Energy Technology Data Exchange (ETDEWEB)

    Tu Lifen; Zhong Sidong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in a digital image signal, the contradiction that the field of view and image resolution restrict each other in a digital level measurement is overcome, and the geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm when the measuring range is between 2 m and 100 m, which can meet practical needs.

  17. Research on coding and decoding method for digital levels.

    Science.gov (United States)

    Tu, Li-fen; Zhong, Si-dong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in a digital image signal, the contradiction that the field of view and image resolution restrict each other in a digital level measurement is overcome, and the geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm when the measuring range is between 2 m and 100 m, which can meet practical needs.

  18. Partial Safety Factors and Target Reliability Level in Danish Structural Codes

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hansen, J. O.; Nielsen, T. A.

    2001-01-01

    The partial safety factors in the newly revised Danish structural codes have been derived using a reliability-based calibration. The calibrated partial safety factors result in the same average reliability level as in the previous codes, but a much more uniform reliability level has been obtained. The paper describes the code format, the stochastic models and the resulting optimised partial safety factors.

  19. Convolutional Codes with Maximum Column Sum Rank for Network Streaming

    OpenAIRE

    Mahmood, Rafid; Badr, Ahmed; Khisti, Ashish

    2015-01-01

    The column Hamming distance of a convolutional code determines the error correction capability when streaming over a class of packet erasure channels. We introduce a metric known as the column sum rank, that parallels column Hamming distance when streaming over a network with link failures. We prove rank analogues of several known column Hamming distance properties and introduce a new family of convolutional codes that maximize the column sum rank up to the code memory. Our construction invol...

  20. An Automatic Instruction-Level Parallelization of Machine Code

    Directory of Open Access Journals (Sweden)

    MARINKOVIC, V.

    2018-02-01

    Full Text Available Prevailing multicores and novel manycores have made a great challenge of the modern day: parallelization of embedded software that is still written as sequential. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level as well as on the validation of this approach. A novel instruction-level parallelization algorithm for assembly code is developed; it uses the register names after SSA conversion to find independent blocks of code and then schedules the independent blocks using METIS to achieve good load balance. The sequential consistency is verified and the validation is done by measuring the program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g. MIPS, MicroBlaze, etc.). In particular, for 16 cores, the average speedup is 7.92x, while in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as the basis for further optimizations, as the back-end of a compiler, or as the code parallelization tool for an embedded system.
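
    A minimal sketch of the dependence-driven idea described above: group instructions into blocks that share registers (after SSA-style renaming, blocks touching disjoint registers are independent), then distribute the independent blocks across cores to balance load. The instruction format and the greedy partitioner (a stand-in for METIS) are illustrative assumptions.

    ```python
    # Independent-block discovery via union-find over registers, then greedy
    # load balancing of the blocks across cores.
    parent = {}
    def find(r):
        parent.setdefault(r, r)
        while parent[r] != r:
            parent[r] = parent[parent[r]]   # path halving
            r = parent[r]
        return r
    def union(a, b):
        parent[find(a)] = find(b)

    # (dest, src1, src2) triples after SSA-style renaming
    instrs = [("r1", "a", "b"), ("r2", "r1", "c"), ("r3", "d", "e"),
              ("r4", "r3", "f"), ("r5", "g", "h")]
    for dest, s1, s2 in instrs:
        union(dest, s1); union(dest, s2)    # instructions sharing registers depend on each other

    blocks = {}
    for ins in instrs:
        blocks.setdefault(find(ins[0]), []).append(ins)

    cores = [[] for _ in range(2)]
    for blk in sorted(blocks.values(), key=len, reverse=True):
        min(cores, key=len).extend(blk)     # put each block on the least-loaded core
    print(cores)
    ```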

  1. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1999-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard, the conditioning may be specified by a template. For better compression, the more general … to the specialized soft pattern matching techniques which work better for text. Template-based refinement coding is applied for lossy-to-lossless refinement. Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bi-level images.

  2. Further Generalisations of Twisted Gabidulin Codes

    DEFF Research Database (Denmark)

    Puchinger, Sven; Rosenkilde, Johan Sebastian Heesemann; Sheekey, John

    2017-01-01

    We present a new family of maximum rank distance (MRD) codes. The new class contains codes that are neither equivalent to a generalised Gabidulin nor to a twisted Gabidulin code, the only two known general constructions of linear MRD codes.

  3. Performance analysis of linear codes under maximum-likelihood decoding: a tutorial

    National Research Council Canada - National Science Library

    Sason, Igal; Shamai, Shlomo

    2006-01-01

    ..., upper and lower bounds on the error probability of linear codes under ML decoding are surveyed and applied to codes and ensembles of codes on graphs. For upper bounds, we discuss various bounds where focus is put on Gallager bounding techniques and their relation to a variety of other reported bounds. Within the class of lower bounds, we ad...

  4. The management-retrieval code of nuclear level density sub-library (CENPL-NLD)

    International Nuclear Information System (INIS)

    Ge Zhigang; Su Zongdi; Huang Zhongfu; Dong Liaoyuan

    1995-01-01

    The management-retrieval code of the Nuclear Level Density (NLD) sub-library is presented. It offers two retrieval modes: single nucleus (SN) and neutron reaction (NR). The latter includes four retrieval types. The code can not only retrieve level density parameters and the data related to the level density, but can also calculate the relevant quantities using different level density parameters and compare the calculated results with related data, in order to help the user select level density parameters.

  5. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
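
    As a tiny illustration of the capacity difference described above, the third-party Python package `qrcode` (assumed installed) stores a full URL that would not fit within a traditional bar code's roughly 20-digit ceiling:

    ```python
    # Generate a QR code holding a URL; QR capacity reaches 7,089 numeric characters.
    import qrcode

    img = qrcode.make("https://worldwidescience.org")
    img.save("wws_qr.png")
    ```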

  6. Using machine-coded event data for the micro-level study of political violence

    Directory of Open Access Journals (Sweden)

    Jesse Hammond

    2014-07-01

    Full Text Available Machine-coded datasets likely represent the future of event data analysis. We assess the use of one of these datasets—the Global Database of Events, Language and Tone (GDELT)—for the micro-level study of political violence by comparing it to two hand-coded conflict event datasets. Our findings indicate that GDELT should be used with caution for geo-spatial analyses at the subnational level: its overall correlation with hand-coded data is mediocre, and at the local level major issues of geographic bias exist in how events are reported. Overall, our findings suggest that due to these issues, researchers studying local conflict processes may want to wait for a more reliable geocoding method before relying too heavily on this set of machine-coded data.

  7. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
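
    For the Mean Energy Model mentioned above, the standard result (stated here for the finite discrete case) is that the maximum entropy distribution is the Gibbs distribution, and the optimal strategy in the Code Length Game consists of the corresponding ideal code lengths:

    ```latex
    \[
      p^{*}_i \;=\; \frac{e^{-\lambda E_i}}{\sum_j e^{-\lambda E_j}},
      \qquad
      \kappa^{*}_i \;=\; -\log p^{*}_i ,
    \]
    % with the multiplier \lambda tuned so that \sum_i p^{*}_i E_i equals
    % the prescribed mean energy.
    ```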

  8. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1997-01-01

    We present general and unified algorithms for lossy/lossless codingof bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard the conditioning may be specified by a template.For better compression, the more general....... Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG......-2, an emerging international standard for lossless/lossy compression of bi-level images....

  9. 25(OH)D3 Levels Relative to Muscle Strength and Maximum Oxygen Uptake in Athletes

    Directory of Open Access Journals (Sweden)

    Książek Anna

    2016-04-01

    Full Text Available Vitamin D is mainly known for its effects on the bone and calcium metabolism. The discovery of Vitamin D receptors in many extraskeletal cells suggests that it may also play a significant role in other organs and systems. The aim of our study was to assess the relationship between 25(OH)D3 levels, lower limb isokinetic strength and maximum oxygen uptake in well-trained professional football players. We enrolled 43 Polish premier league soccer players. The mean age was 22.7±5.3 years. Our study showed decreased serum 25(OH)D3 levels in 74.4% of the professional players. The results also demonstrated a lack of statistically significant correlation between 25(OH)D3 levels and lower limb muscle strength, with the exception of peak torque of the left knee extensors at an angular velocity of 150°/s (r=0.41). No significant correlations were found between hand grip strength and maximum oxygen uptake. Based on our study we concluded that in well-trained professional soccer players, there was no correlation between serum levels of 25(OH)D3 and muscle strength or maximum oxygen uptake.

  10. The Influence of Red Fruit Oil on Creatin Kinase Level at Maximum Physical Activity

    Science.gov (United States)

    Apollo Sinaga, Fajar; Hotliber Purba, Pangondian

    2018-03-01

    Heavy physical activities can cause oxidative stress, resulting in muscle damage indicated by elevated levels of the creatine kinase (CK) enzyme. Oxidative stress can be prevented or reduced by antioxidant supplementation. One natural source of antioxidants is Red Fruit (Pandanus conoideus) Oil (RFO). This study aims to assess the effect of Red Fruit Oil on the creatine kinase (CK) level at maximum physical activity. It is an experimental study using a randomized pretest-posttest control group design. The study used 24 male mice divided into four groups: the control group was given aquadest, and the treatment groups P1, P2, and P3 were given RFO orally at 0.15 ml/kgBW, 0.3 ml/kgBW, and 0.6 ml/kgBW, respectively, for a month. The CK level was checked for all groups at the beginning of the study and after maximum physical activity. The obtained data were then tested statistically using the t-test and ANOVA. The results show that RFO supplementation during exercise decreased the CK level in the P1, P2, and P3 groups (p<0.05), and that higher RFO dosages resulted in lower CK levels (p<0.05). The conclusion of this study is that Red Fruit Oil can decrease the CK level at maximum physical activity.

  11. Improvement of level-1 PSA computer code package

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.

    1997-07-01

    This year is the fifth (final) year of phase I of the Government-sponsored Mid- and Long-term Nuclear Power Technology Development Project. The scope of this subproject, titled `The improvement of level-1 PSA Computer Codes`, is divided into two main activities: (1) improvement of level-1 PSA methodology, (2) development of methodology for applying PSA techniques to the operation and maintenance of nuclear power plants. The level-1 PSA code KIRAP is converted to the PC-Windows environment. For improved efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) is developed, considering dependency among source data. A computer program which handles dependency among data sources is also developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods are reviewed and a CCF database is established. Impact vectors can be estimated from this CCF database. A computer code, called MPRIDP, which handles the CCF database, is also developed. A CCF analysis reflecting plant-specific defensive strategies against CCF events is also performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique is applied to review the feasibility study of on-line maintenance and to the prioritization of in-service testing (IST) of motor-operated valves (MOV). Finally, root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies are adopted and applied to improve the reliability of emergency diesel generators (EDG) of nuclear power plants. To support the RCA and RCM analyses, two software programs, EPIS and RAM Pro, are developed. (author). 129 refs., 20 tabs., 60 figs.

  12. Improvement of level-1 PSA computer code package

    International Nuclear Information System (INIS)

    Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.

    1997-07-01

    This year is the fifth (final) year of phase I of the Government-sponsored Mid- and Long-term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The improvement of level-1 PSA Computer Codes', is divided into two main activities: 1) improvement of level-1 PSA methodology, 2) development of methodology for applying PSA techniques to the operation and maintenance of nuclear power plants. The level-1 PSA code KIRAP is converted to the PC-Windows environment. For improved efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) is developed, considering dependency among source data. A computer program which handles dependency among data sources is also developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods are reviewed and a CCF database is established. Impact vectors can be estimated from this CCF database. A computer code, called MPRIDP, which handles the CCF database, is also developed. A CCF analysis reflecting plant-specific defensive strategies against CCF events is also performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique is applied to review the feasibility study of on-line maintenance and to the prioritization of in-service testing (IST) of motor-operated valves (MOV). Finally, root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies are adopted and applied to improve the reliability of emergency diesel generators (EDG) of nuclear power plants. To support the RCA and RCM analyses, two software programs, EPIS and RAM Pro, are developed. (author). 129 refs., 20 tabs., 60 figs

  13. Construction of new quantum MDS codes derived from constacyclic codes

    Science.gov (United States)

    Taneja, Divya; Gupta, Manish; Narula, Rajesh; Bhullar, Jaskaran

    Obtaining quantum maximum distance separable (MDS) codes from dual-containing classical constacyclic codes using the Hermitian construction has paved a path to undertake the challenges related to such constructions. Using the same technique, some new parameters of quantum MDS codes are constructed here. One set of parameters obtained in this paper achieves a much larger distance than earlier work. The remaining constructed parameters of quantum MDS codes have large minimum distance and had not been explored before.

  14. Strongly-MDS convolutional codes

    NARCIS (Netherlands)

    Gluesing-Luerssen, H; Rosenthal, J; Smarandache, R

    Maximum-distance separable (MDS) convolutional codes have the property that their free distance is maximal among all codes of the same rate and the same degree. In this paper, a class of MDS convolutional codes is introduced whose column distances reach the generalized Singleton bound at the

  15. Confidentiality of 2D Code using Infrared with Cell-level Error Correction

    Directory of Open Access Journals (Sweden)

    Nobuyuki Teraura

    2013-03-01

    Full Text Available Optical information media printed on paper use printing materials that absorb visible light. An ordinary 2D code may be encrypted, but it can still be copied. Hence, we envisage an information medium that cannot be copied and thereby offers high security. At the surface, a normal 2D code is printed. The inner layers consist of 2D codes printed using a variety of materials, which absorb certain distinct wavelengths, to form a multilayered 2D code. Information can be distributed among the 2D codes forming the inner layers of the multiplex. Additionally, error correction at the cell level can be introduced.

  16. Confidence level in the calculations of HCDA consequences using large codes

    International Nuclear Information System (INIS)

    Nguyen, D.H.; Wilburn, N.P.

    1979-01-01

    The probabilistic approach to nuclear reactor safety is playing an increasingly significant role. For the liquid-metal fast breeder reactor (LMFBR) in particular, the ultimate application of this approach could be to determine the probability of achieving the goal of a specific line-of-assurance (LOA). Meanwhile, a more pressing problem is one of quantifying the uncertainty in a calculated consequence for a hypothetical core disruptive accident (HCDA) using large codes. Such uncertainty arises from imperfect modeling of phenomenology and/or from inaccuracy in input data. A method is presented to determine the confidence level in consequences calculated by a large computer code due to the known uncertainties in input variables. A particular application was made to the initial time of pin failure in a transient overpower HCDA calculated by the code MELT-IIIA in order to demonstrate the method. A probability distribution function (pdf) for the time of failure was first constructed, then the confidence level for predicting this failure parameter within a desired range was determined
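
    A minimal sketch of the approach described above with a stand-in model: propagate input uncertainties by Monte Carlo, build the distribution of the consequence, and read off the confidence that it falls in a desired range. The input distributions and the closed-form "model" are illustrative assumptions; the actual application ran the MELT-IIIA code.

    ```python
    # Monte Carlo propagation of input uncertainty to a confidence level.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000
    # uncertain inputs, e.g. a reactivity ramp rate and a conductivity factor
    ramp = rng.normal(1.0, 0.1, n)
    cond = rng.lognormal(0.0, 0.15, n)
    t_fail = 2.0 / ramp + 0.5 * cond        # stand-in for the code's computed failure time

    lo_s, hi_s = 2.2, 2.8                   # desired range for time of pin failure (s)
    confidence = np.mean((t_fail >= lo_s) & (t_fail <= hi_s))
    print(f"P({lo_s} s <= t_fail <= {hi_s} s) ~ {confidence:.2%}")
    ```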

  17. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    Science.gov (United States)

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
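
    A minimal sketch of the phenomenon discussed above: under a 3PL model, the log-likelihood in the proficiency θ can have several local maxima for aberrant response patterns, which a grid scan exposes; the MAP criterion adds a prior term. The item parameters and the response pattern are illustrative assumptions.

    ```python
    # Grid scan of 3PL log-likelihood and log-posterior over proficiency θ.
    import numpy as np

    a = np.array([1.8, 1.5, 2.0, 1.7])          # discriminations
    b = np.array([-1.0, 0.0, 1.5, 2.0])         # difficulties
    c = np.array([0.2, 0.2, 0.25, 0.2])         # guessing parameters
    x = np.array([0, 0, 1, 1])                  # an "aberrant" response pattern

    theta = np.linspace(-4, 4, 801)
    p = c[:, None] + (1 - c)[:, None] / (1 + np.exp(-a[:, None] * (theta - b[:, None])))
    loglik = (x[:, None] * np.log(p) + (1 - x)[:, None] * np.log(1 - p)).sum(axis=0)
    logpost = loglik - 0.5 * theta ** 2          # MAP adds a standard-normal prior

    for name, f in (("ML", loglik), ("MAP", logpost)):
        is_max = (f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])   # interior local maxima
        print(name, "local maxima at theta ~", np.round(theta[1:-1][is_max], 2))
    ```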

  18. Psacoin level 1A intercomparison probabilistic system assessment code (PSAC) user group

    International Nuclear Information System (INIS)

    Nies, A.; Laurens, J.M.; Galson, D.A.; Webster, S.

    1990-01-01

    This report describes an international code intercomparison exercise conducted by the NEA Probabilistic System Assessment Code (PSAC) User Group. The PSACOIN Level 1A exercise is the third of a series designed to contribute to the verification of probabilistic codes that may be used in assessing the safety of radioactive waste disposal systems or concepts. Level 1A is based on a more realistic system model than that used in the two previous exercises, and involves deep geological disposal concepts with a relatively complex structure of the repository vault. The report compares results and draws conclusions with regard to the use of different modelling approaches and the possible importance to safety of various processes within and around a deep geological repository. In particular, the relative significance of model uncertainty and data variability is discussed

  19. Final technical position on documentation of computer codes for high-level waste management

    International Nuclear Information System (INIS)

    Silling, S.A.

    1983-06-01

    Guidance is given for the content of documentation of computer codes which are used in support of a license application for high-level waste disposal. The guidelines cover theoretical basis, programming, and instructions for use of the code

  20. Spike Code Flow in Cultured Neuronal Networks.

    Science.gov (United States)

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei

    2016-01-01

    We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the flow of the codes "1101" and "1011," which are typical pseudorandom sequences such as those often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes, to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.
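
    A minimal sketch of the "maximum cross-correlation" measure described above: for a code's occurrence train on two neighbouring electrodes, take the peak of the normalised cross-correlation over all lags. The binary occurrence trains and the delay are illustrative assumptions.

    ```python
    # Peak of the normalised cross-correlation between two electrodes' code trains.
    import numpy as np

    rng = np.random.default_rng(5)
    x = (rng.random(1000) < 0.05).astype(float)     # code detections on electrode A
    y = np.roll(x, 7) * (rng.random(1000) < 0.8)    # mostly the same train, delayed 7 bins

    x0, y0 = x - x.mean(), y - y.mean()
    cc = np.correlate(y0, x0, mode="full") / (np.linalg.norm(x0) * np.linalg.norm(y0))
    lags = np.arange(-len(x) + 1, len(x))
    k = np.argmax(cc)
    print(f"max normalised cross-correlation {cc[k]:.2f} at lag {lags[k]} bins")
    ```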

  1. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio; Genton, Marc G.; Yokota, Rio

    2015-01-01

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic

  2. Use of MICRAS code on the evaluation of the maximum radionuclides concentrations due to transport/migration of decay chain in groundwaters

    International Nuclear Information System (INIS)

    Aquino Branco, O.E. de

    1995-01-01

    This paper presents a methodology for the evaluation of the maximum radionuclide concentrations in groundwater due to the transport/migration of decay chains. Analytical solution of the system of equations is difficult, even if only three elements of the decay chain are considered. Therefore, a numerical solution is most convenient. An application of the MICRAS code, developed to assess the maximum concentration of each radionuclide starting from the initial concentrations, is presented. The maximum concentration profile for 226 Ra, calculated using MICRAS, is compared with the results obtained through an analytical and a numerical model. The agreement of the results is considered good. Simplified models, like the one represented by the application of MICRAS, are largely employed in the selection and characterization of sites for radioactive waste repositories and in safety evaluation studies for the same purpose. A detailed analysis of the transport/migration of contaminants in aquifers requires a large quantity of data from the site as well as from the installation, which makes such an analysis expensive and unviable during the preliminary phases of the studies. (author). 6 refs, 1 fig, 1 tab
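
    A minimal sketch of the decay-chain ingredient of such assessments: integrate the Bateman equations for a short chain and read off each member's maximum over time. Transport, sorption, and retardation are omitted, and the chain, rates, and inventory are illustrative assumptions, not MICRAS itself.

    ```python
    # Bateman equations for a 3-member decay chain; maximum of each member.
    import numpy as np
    from scipy.integrate import solve_ivp

    lam = np.array([1e-5, 4e-4, 2e-3])          # decay constants (1/yr)

    def bateman(t, N):
        dN = -lam * N
        dN[1:] += lam[:-1] * N[:-1]             # ingrowth from the parent
        return dN

    sol = solve_ivp(bateman, (0.0, 50_000.0), [1.0, 0.0, 0.0],
                    dense_output=True, rtol=1e-8)
    t = np.linspace(0, 50_000, 5000)
    N = sol.sol(t)
    for i, Ni in enumerate(N):
        print(f"member {i}: max {Ni.max():.3e} at t ~ {t[np.argmax(Ni)]:.0f} yr")
    ```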

  3. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for deducing similarity; most are focused only on the lexical token sequence extracted from the source code. In our view, a low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. As a case study, we focus our work on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity; according to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
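
    The local-alignment step can be illustrated with the classic Smith-Waterman recurrence; the adaptive variant cited above modifies the scoring scheme, but the plain form below conveys the idea. The CIL-like instruction streams are toy values.

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
            """Classic local-alignment score between two token sequences."""
            h = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
            best = 0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    h[i][j] = max(0, h[i - 1][j - 1] + s,
                                  h[i - 1][j] + gap, h[i][j - 1] + gap)
                    best = max(best, h[i][j])
            return best

        p1 = ["ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
        p2 = ["ldarg.1", "ldarg.0", "add", "stloc.0", "ldloc.0", "ret"]
        print(smith_waterman(p1, p2))  # high score despite swapped arguments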

  4. Context adaptive coding of bi-level images

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2008-01-01

    With the advent of sequential arithmetic coding, the focus of highly efficient lossless data compression is placed on modelling the data. Rissanen's Algorithm Context provided an elegant solution to universal coding with optimal convergence rate. Context based arithmetic coding laid the grounds f...

  5. Orthogonal transformations for change detection, Matlab code

    DEFF Research Database (Denmark)

    2005-01-01

    Matlab code to do multivariate alteration detection (MAD) analysis, maximum autocorrelation factor (MAF) analysis, canonical correlation analysis (CCA) and principal component analysis (PCA) on image data.
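
    As a rough illustration of what such a toolbox computes, the Python sketch below performs the core of the MAD transform: canonical correlation analysis between two multiband images followed by differencing of the canonical variates. It is a minimal re-implementation on synthetic data, not the distributed Matlab code.

        import numpy as np

        rng = np.random.default_rng(6)
        bands, npix = 4, 5000
        X = rng.normal(size=(bands, npix))                  # image at time 1
        Y = 0.8 * X + 0.2 * rng.normal(size=(bands, npix))  # image at time 2

        X = X - X.mean(1, keepdims=True)
        Y = Y - Y.mean(1, keepdims=True)
        Sxx, Syy, Sxy = X @ X.T / npix, Y @ Y.T / npix, X @ Y.T / npix

        # Whiten and take the SVD of the cross-covariance: CCA in three lines.
        Lx, Ly = np.linalg.cholesky(Sxx), np.linalg.cholesky(Syy)
        U, rho, Vt = np.linalg.svd(np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T)
        A = np.linalg.solve(Lx.T, U)     # canonical vectors for X
        B = np.linalg.solve(Ly.T, Vt.T)  # canonical vectors for Y

        mad = A.T @ X - B.T @ Y          # MAD variates, one per band pair
        print("canonical correlations:", np.round(rho, 3))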

  6. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and, in particular, significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
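
    The form of such a model is easy to write down for a small population. The Python sketch below enumerates the codewords of a five-neuron pairwise maximum entropy model with fixed toy fields and couplings; in the actual SDME model the fields are stimulus-dependent outputs of linear-nonlinear filters.

        import itertools
        import numpy as np

        # P(sigma | stimulus) proportional to
        # exp(sum_i h_i sigma_i + sum_{i<j} J_ij sigma_i sigma_j)
        N = 5
        rng = np.random.default_rng(1)
        h = rng.normal(0.0, 1.0, N)                   # stimulus-driven fields (toy values)
        J = np.triu(rng.normal(0.0, 0.3, (N, N)), 1)  # couplings, i < j only

        words = np.array(list(itertools.product([0, 1], repeat=N)))
        energy = words @ h + np.einsum('ki,ij,kj->k', words, J, words)
        p = np.exp(energy)
        p /= p.sum()

        for k in np.argsort(p)[::-1][:3]:  # most probable population codewords
            print(words[k], "P = %.3f" % p[k])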

  7. Spike Code Flow in Cultured Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Shinichi Tamura

    2016-01-01

    Full Text Available We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the flow of the codes “1101” and “1011,” which are typical pseudorandom sequences such as those we often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one while maintaining their shape to some extent. To quantify the flow, we calculated the “maximum cross-correlations” among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.

  8. QR codes: next level of social media.

    Science.gov (United States)

    Gottesman, Wesley; Baum, Neil

    2013-01-01

    The QR code, which is short for quick response code, system was invented in Japan for the auto industry. Its purpose was to track vehicles during manufacture; it was designed to allow high-speed component scanning. Now the scanning can be easily accomplished via cell phone, making the technology useful and within reach of your patients. There are numerous applications for QR codes in the contemporary medical practice. This article describes QR codes and how they might be applied for marketing and practice management.

  9. Impact of the Level of State Tax Code Progressivity on Children's Health Outcomes

    Science.gov (United States)

    Granruth, Laura Brierton; Shields, Joseph J.

    2011-01-01

    This research study examines the impact of the level of state tax code progressivity on selected children's health outcomes. Specifically, it examines the degree to which a state's tax code ranking along the progressive-regressive continuum relates to percentage of low birthweight babies, infant and child mortality rates, and percentage of…

  10. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  11. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model has been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within acceptable limits of uncertainty

  12. A multi-level code for metallurgical effects in metal-forming processes

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, P.A.; Silling, S.A. [Sandia National Labs., Albuquerque, NM (United States). Computational Physics and Mechanics Dept.; Hughes, D.A.; Bammann, D.J.; Chiesa, M.L. [Sandia National Labs., Livermore, CA (United States)

    1997-08-01

    The authors present the final report on a Laboratory-Directed Research and Development (LDRD) project, A Multi-level Code for Metallurgical Effects in metal-Forming Processes, performed during the fiscal years 1995 and 1996. The project focused on the development of new modeling capabilities for simulating forging and extrusion processes that typically display phenomenology occurring on two different length scales. In support of model fitting and code validation, ring compression and extrusion experiments were performed on 304L stainless steel, a material of interest in DOE nuclear weapons applications.

  13. The Calculation of Flooding Level using CFX Code

    International Nuclear Information System (INIS)

    Oh, Seo Bin; Kim, Keon Yeop; Lee, Hyung Ho

    2015-01-01

    The plant design should consider internal flooding due to postulated pipe ruptures, component failures, actuation of spray systems, and improper system alignment. Flooding can cause failure of safety-related equipment and affect the integrity of the structure, so safety-related equipment should be installed above the flood level for protection against flooding effects. Conservative estimates of the flood level are therefore important when a DBA occurs. The flooding level can be calculated simply by applying Bernoulli's equation. In this study, however, a more realistic calculation is performed with the ANSYS CFX code, which can simulate air-core vortex phenomena and turbulent flow that cannot be treated analytically. The flooding level is evaluated by analytical calculation and by CFX analysis for an assumed condition: the flood level is calculated as 0.71 m analytically and 1.1 m with the CFX simulation. The two results are similar, but the analytical calculation is not conservative, since many factors reduce the drainage capacity, such as air-core vortices, intake of air, and turbulent flow. Therefore, when the flood level is evaluated by analytical calculation, a sufficient safety margin should be considered
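
    The analytical estimate can be reproduced in a few lines. The sketch below balances break inflow against orifice drainage using Bernoulli's (Torricelli's) relation; the geometry and coefficients are illustrative, not the conditions analyzed in the paper.

        # Steady-state level where drainage balances inflow:
        # Q = Cd * A * sqrt(2 * g * h)  =>  h = (Q / (Cd * A))**2 / (2 * g)
        g = 9.81    # m/s^2
        Q = 0.05    # break inflow, m^3/s
        A = 0.02    # drain flow area, m^2
        Cd = 0.6    # discharge coefficient, sharp-edged orifice

        h = (Q / (Cd * A)) ** 2 / (2.0 * g)
        print("flood level above drain: %.2f m" % h)
        # Vortexing and air entrainment reduce the effective discharge, which
        # is why the CFX simulation predicts a higher level than this estimate.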

  14. Maximum surface level and temperature histories for Hanford waste tanks

    International Nuclear Information System (INIS)

    Flanagan, B.D.; Ha, N.D.; Huisingh, J.S.

    1994-01-01

    Radioactive defense waste resulting from the chemical processing of spent nuclear fuel has been accumulating at the Hanford Site since 1944. This waste is stored in underground waste-storage tanks. The Hanford Site Tank Farm Facilities Interim Safety Basis (ISB) provides a ready reference to the safety envelope for applicable tank farm facilities and installations. During preparation of the ISB, tank structural integrity concerns were identified as a key element in defining the safety envelope. These concerns, along with several deficiencies in the technical bases associated with the structural integrity issues and the corresponding operational limits/controls specified for conduct of normal tank farm operations are documented in the ISB. Consequently, a plan was initiated to upgrade the safety envelope technical bases by conducting Accelerated Safety Analyses-Phase 1 (ASA-Phase 1) sensitivity studies and additional structural evaluations. The purpose of this report is to facilitate the ASA-Phase 1 studies and future analyses of the single-shell tanks (SSTs) and double-shell tanks (DSTs) by compiling a quantitative summary of some of the past operating conditions the tanks have experienced during their existence. This report documents the available summaries of recorded maximum surface levels and maximum waste temperatures and references other sources for more specific data

  15. 20 CFR 10.806 - How are the maximum fees defined?

    Science.gov (United States)

    2010-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...

  16. Mercure IV code application to the external dose computation from low and medium level wastes

    International Nuclear Information System (INIS)

    Tomassini, T.

    1985-01-01

    In the present work the external dose from low- and medium-level wastes is calculated using the MERCURE IV code. The code uses the Monte Carlo method to integrate multigroup line-of-sight attenuation kernels

  17. Spectrum unfolding in X-ray spectrometry using the maximum entropy method

    International Nuclear Information System (INIS)

    Fernandez, Jorge E.; Scot, Viviana; Di Giulio, Eugenio

    2014-01-01

    The solution of the unfolding problem is an ever-present issue in X-ray spectrometry. The maximum entropy technique solves this problem by taking advantage of some known a priori physical information and by ensuring an outcome with only positive values. This method is implemented in MAXED (MAXimum Entropy Deconvolution), a software code contained in the package UMG (Unfolding with MAXED and GRAVEL) developed at PTB and distributed by the NEA Data Bank. This package also contains the code GRAVEL (used to estimate the precision of the solution). This article introduces the new code UMESTRAT (Unfolding Maximum Entropy STRATegy), which applies a semi-automatic strategy to solve the unfolding problem by using a suitable combination of MAXED and GRAVEL for applications in X-ray spectrometry. Some examples of the use of UMESTRAT are shown, demonstrating its capability to remove detector artifacts from the measured spectrum consistently with the model used for the detector response function (DRF). Highlights: a new strategy to solve the unfolding problem in X-ray spectrometry is presented; the strategy uses a suitable combination of the codes MAXED and GRAVEL; it provides additional information on the detector response function; the code UMESTRAT is developed to apply this new strategy in a semi-automatic mode
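
    The idea behind maximum entropy unfolding can be sketched as a small regularized fit. The Python fragment below is not the MAXED algorithm itself: it simply trades a chi-square misfit against a Skilling-style cross-entropy to a default spectrum, on toy data.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        nb, nc = 8, 12                      # spectrum bins, measured channels
        R = rng.random((nc, nb))            # toy detector response matrix
        f_true = np.linspace(1.0, 2.0, nb)
        m = R @ f_true + rng.normal(0.0, 0.01, nc)   # measured pulse heights
        sigma = np.full(nc, 0.01)
        f0 = np.ones(nb)                    # flat default (prior) spectrum

        def objective(u, alpha=50.0):
            f = np.exp(u)                   # positivity is built in
            chi2 = np.sum(((R @ f - m) / sigma) ** 2)
            entropy = np.sum(f - f0 - f * np.log(f / f0))  # <= 0, max at f = f0
            return chi2 - alpha * entropy

        res = minimize(objective, np.zeros(nb), method="L-BFGS-B")
        print(np.round(np.exp(res.x), 2))   # close to f_true for clean toy data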

  18. HYDROCOIN [HYDROlogic COde INtercomparison] Level 1: Benchmarking and verification test results with CFEST [Coupled Fluid, Energy, and Solute Transport] code: Draft report

    International Nuclear Information System (INIS)

    Yabusaki, S.; Cole, C.; Monti, A.M.; Gupta, S.K.

    1987-04-01

    Part of the safety analysis is evaluating groundwater flow through the repository and the host rock to the accessible environment by developing mathematical or analytical models and numerical computer codes describing the flow mechanisms. This need led to the establishment of an international project called HYDROCOIN (HYDROlogic COde INtercomparison) organized by the Swedish Nuclear Power Inspectorate, a forum for discussing techniques and strategies in subsurface hydrologic modeling. The major objective of the present effort, HYDROCOIN Level 1, is determining the numerical accuracy of the computer codes. The definition of each case includes the input parameters, the governing equations, the output specifications, and the format. The Coupled Fluid, Energy, and Solute Transport (CFEST) code was applied to solve cases 1, 2, 4, 5, and 7; the Finite Element Three-Dimensional Groundwater (FE3DGW) Flow Model was used to solve case 6. Case 3 has been ignored because unsaturated flow is not pertinent to SRP. This report presents the Level 1 results furnished by the project teams. The numerical accuracy of the codes is determined by (1) comparing the computational results with analytical solutions for cases that have analytical solutions (namely cases 1 and 4), and (2) intercomparing results from codes for cases which do not have analytical solutions (cases 2, 5, 6, and 7). Cases 1, 2, 6, and 7 relate to flow analyses, whereas cases 4 and 5 require nonlinear solutions. 7 refs., 71 figs., 9 tabs

  19. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    International Nuclear Information System (INIS)

    Lee, Jun; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-01-01

    We propose a signal processing technique using an LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed with a precoder that admits level detection at the receiver end and a modified likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems, and the proposed algorithm can help improve the performance of conventional turbo-like systems

  20. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jun E-mail: leejun28@sait.samsung.co.kr; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-05-01

    We propose a signal processing technique using an LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed with a precoder that admits level detection at the receiver end and a modified likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems, and the proposed algorithm can help improve the performance of conventional turbo-like systems.

  1. QR-codes as a tool to increase physical activity level among school children during class hours

    DEFF Research Database (Denmark)

    Christensen, Jeanette Reffstrup; Kristensen, Allan; Bredahl, Thomas Viskum Gjelstrup

    QR-codes as a tool to increase physical activity level among school children during class hours. Introduction: Danish students are no longer fulfilling recommendations for everyday physical activity. Since August 2014, Danish students in public schools are therefore required to be physically active... the students' physical activity level during class hours. Methods: A before-after study was used to examine 12 students' physical activity level, measured with pedometers for six lessons: three lessons of traditional teaching and three lessons where QR-codes were used for orienteering in the school area... as old fashioned. The students also felt positive about being physically active in teaching. Discussion and conclusion: QR-codes as a tool for teaching are usable for making students more physically active in teaching. The students were excited about using QR-codes and they experienced good motivation...

  2. Application of thin-layer Navier-Stokes equations near maximum lift

    Science.gov (United States)

    Anderson, W. K.; Thomas, J. L.; Rumsey, C. L.

    1984-01-01

    The flowfield about a NACA 0012 airfoil at a Mach number of 0.3 and Reynolds number of 1 million is computed through an angle of attack range, up to 18 deg, corresponding to conditions up to and beyond the maximum lift coefficient. Results obtained using the compressible thin-layer Navier-Stokes equations are presented as well as results from the compressible Euler equations with and without a viscous coupling procedure. The applicability of each code is assessed and many thin-layer Navier-Stokes benchmark solutions are obtained which can be used for comparison with other codes intended for use at high angles of attack. Reasonable agreement of the Navier-Stokes code with experiment and the viscous-inviscid interaction code is obtained at moderate angles of attack. An unsteady solution is obtained with the thin-layer Navier-Stokes code at the highest angle of attack considered. The maximum lift coefficient is overpredicted, however, in comparison to experimental data, which is attributed to the presence of a laminar separation bubble near the leading edge not modeled in the computations. Two comparisons with experimental data are also presented at a higher Mach number.

  3. The maximum number of minimal codewords in an [n, k]-code

    DEFF Research Database (Denmark)

    Alahmadi, A.; Aldred, R. E. L.; de la Cruz, R.

    2013-01-01

    We survey some upper and lower bounds on the function in the title, and make them explicit for n≤15 and 1≤k≤15. Exact values are given for cycle codes of graphs for 3≤n≤15 and 1≤k≤13....

  4. Improvement of Level-1 PSA computer code package -A study for nuclear safety improvement-

    International Nuclear Information System (INIS)

    Park, Chang Kyu; Kim, Tae Woon; Ha, Jae Joo; Han, Sang Hoon; Cho, Yeong Kyun; Jeong, Won Dae; Jang, Seung Cheol; Choi, Young; Seong, Tae Yong; Kang, Dae Il; Hwang, Mi Jeong; Choi, Seon Yeong; An, Kwang Il

    1994-07-01

    This year is the second year of the Government-sponsored Mid- and Long-Term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The Improvement of Level-1 PSA Computer Codes', is divided into three main activities: (1) methodology development in under-developed fields such as risk assessment technology for plant shutdown and external events, (2) computer code package development for Level-1 PSA, and (3) applications of new technologies to reactor safety assessment. First, in the area of PSA methodology development, foreign PSA reports on shutdown and external events have been reviewed and various PSA methodologies have been compared. The Level-1 PSA code KIRAP and the CCF analysis code COCOA were converted from KOS to Windows, and a human reliability database was also established this year. In the area of new technology applications, fuzzy set theory and entropy theory are used to estimate component life and to develop a new measure of uncertainty importance. Finally, in the field of applying PSA techniques to reactor regulation, a strategic study to develop the dynamic risk management tool PEPSI was carried out, and the inspection and test priorities of motor-operated valves were determined based on risk importance worths. (Author)

  5. Computer codes for level 1 probabilistic safety assessment

    International Nuclear Information System (INIS)

    1990-06-01

    Probabilistic Safety Assessment (PSA) entails several laborious tasks suitable for computer code assistance. This guide identifies these tasks, presents guidelines for selecting and utilizing computer codes in the conduct of the PSA tasks and for the use of PSA results in safety management, and provides information on available codes suggested or applied in performing PSA in nuclear power plants. The guidance is intended for use by nuclear power plant system engineers, safety and operating personnel, and regulators. Large efforts are being made today to provide PC-based software systems and processed PSA information in a way that enables their use as a safety management tool by the overall management of the nuclear power plant. Guidelines on the characteristics of software needed for management to procure software that meets their specific needs are also provided. Most of these computer codes are also applicable to PSA of other industrial facilities. The scope of this document is limited to computer codes used for the treatment of internal events; it does not address codes available mainly for the analysis of external events (e.g. seismic analysis), flood, and fire analysis. Codes discussed in the document are those used for probabilistic rather than phenomenological modelling. It should also be appreciated that these guidelines are not intended to lead the user to the selection of one specific code; they simply provide criteria for the selection. Refs and tabs

  6. Broadcasting a Common Message with Variable-Length Stop-Feedback codes

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe

    2015-01-01

    We investigate the maximum coding rate achievable over a two-user broadcast channel for the scenario where a common message is transmitted using variable-length stop-feedback codes. Specifically, upon decoding the common message, each decoder sends a stop signal to the encoder, which transmits continuously until it receives both stop signals. For the point-to-point case, Polyanskiy, Poor, and Verdú (2011) recently demonstrated that variable-length coding combined with stop feedback significantly increases the speed at which the maximum coding rate converges to capacity. This speed-up manifests itself in the absence of a square-root penalty in the asymptotic expansion of the maximum coding rate for large blocklengths, a result also known as zero dispersion. In this paper, we show that this speed-up does not necessarily occur for the broadcast channel with common message. Specifically...

  7. Recommendations for computer modeling codes to support the UMTRA groundwater restoration project

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, M.D. [Sandia National Labs., Albuquerque, NM (United States); Khan, M.A. [IT Corp., Albuquerque, NM (United States)

    1996-04-01

    The Uranium Mill Tailings Remediation Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.

  8. Recommendations for computer modeling codes to support the UMTRA groundwater restoration project

    International Nuclear Information System (INIS)

    Tucker, M.D.; Khan, M.A.

    1996-04-01

    The Uranium Mill Tailings Remediation Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended

  9. New "Risk-Targeted" Seismic Maps Introduced into Building Codes

    Science.gov (United States)

    Luco, Nicholas; Garrett, B.; Hayes, J.

    2012-01-01

    Throughout most municipalities of the United States, structural engineers design new buildings using the U.S.-focused International Building Code (IBC). Updated editions of the IBC are published every 3 years. The latest edition (2012) contains new "risk-targeted maximum considered earthquake" (MCER) ground motion maps, which are enabling engineers to incorporate a more consistent and better defined level of seismic safety into their building designs.

  10. Interference Cancellation Technique Based on Discovery of Spreading Codes of Interference Signals and Maximum Correlation Detection for DS-CDMA System

    Science.gov (United States)

    Hettiarachchi, Ranga; Yokoyama, Mitsuo; Uehara, Hideyuki

    This paper presents a novel interference cancellation (IC) scheme for both synchronous and asynchronous direct-sequence code-division multiple-access (DS-CDMA) wireless channels. In the DS-CDMA system, multiple access interference (MAI) and the near-far problem (NFP) are the two factors that reduce the capacity of the system. In this paper, we propose a new algorithm that is able to detect all interference signals as individual MAI signals by maximum correlation detection. It is based on the discovery of all the unknown spreading codes of the interference signals. All possible MAI patterns, so-called replicas, are then generated as summations of interference signals, and the true MAI pattern is found by taking the correlation between the received signal and the replicas. Moreover, the receiver executes MAI cancellation in a successive manner, removing all interference signals in a single stage. Numerical results show that the proposed IC strategy, which alleviates the detrimental effects of MAI and the near-far problem, can significantly improve the system performance. In particular, for the asynchronous system we obtain almost the same receiving characteristics as in the absence of interference when the received powers are equal, and for the synchronous system the same performance can be seen for any received power state.
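
    The maximum correlation detection step can be sketched as follows: correlate the received chip-rate signal against every candidate spreading code and flag the strong peaks. Codes, powers, and the threshold below are toy values for a synchronous, single-bit interval.

        import numpy as np

        rng = np.random.default_rng(3)
        L = 127                                   # spreading factor, chips/bit
        codes = rng.choice([-1, 1], size=(8, L))  # candidate spreading codes
        active = [0, 2, 5]
        bits = rng.choice([-1, 1], size=len(active))

        r = np.zeros(L)
        for b, k in zip(bits, active):
            r += b * codes[k]                     # superposed user signals
        r += rng.normal(0.0, 0.3, L)              # channel noise

        corr = codes @ r / L                      # normalized correlations
        detected = np.where(np.abs(corr) > 0.5)[0]
        print("active:", active, "detected:", detected.tolist())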

  11. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    International Nuclear Information System (INIS)

    McGrail, B.P.; Mahoney, L.A.

    1995-10-01

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in each code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest-ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for evaluation of arid land disposal sites

  12. Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Budzien, Joanne Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ferguson, Jim Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Harwell, Megan Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hickmann, Kyle Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Israel, Daniel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Magrogan, William Richard III [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Singleton, Jr., Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Srinivasan, Gowri [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Walter, Jr, John William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Woods, Charles Nathan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-26

    This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.

  13. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, costs less, and eliminates need for playback and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  14. 5 CFR 581.402 - Maximum garnishment limitations.

    Science.gov (United States)

    2010-01-01

    ... PROCESSING GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Consumer Credit Protection Act Restrictions..., pursuant to section 1673(b)(2) (A) and (B) of title 15 of the United States Code (the Consumer Credit... local law, the maximum part of the aggregate disposable earnings subject to garnishment to enforce any...

  15. Development of a computer code for low-and intermediate-level radioactive waste disposal safety assessment

    International Nuclear Information System (INIS)

    Park, J. W.; Kim, C. L.; Lee, E. Y.; Lee, Y. M.; Kang, C. H.; Zhou, W.; Kozak, M. W.

    2002-01-01

    A safety assessment code, called SAGE (Safety Assessment Groundwater Evaluation), has been developed to describe post-closure radionuclide releases and potential radiological doses for low- and intermediate-level radioactive waste (LILW) disposal in an engineered vault facility in Korea. The conceptual model implemented in the code focuses on the release of radionuclides from a gradually degrading engineered barrier system to an underlying unsaturated zone, and thence to a saturated groundwater zone. The radionuclide transport equations are solved by spatially discretizing the disposal system into a series of compartments. Mass transfer between compartments occurs by diffusion/dispersion and advection. In all compartments, radionuclides decay either as a single-member chain or as multi-member chains. The biosphere is represented as a set of steady-state, radionuclide-specific pathway dose conversion factors that are multiplied by the appropriate release rate from the far field for each pathway. The code can treat input parameters either deterministically or probabilistically, and parameter input is achieved through a user-friendly Graphical User Interface. An application is presented and compared against safety assessment results from other computer codes, to benchmark the reliability of the code's system-level conceptual modeling
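
    The compartment formulation can be sketched with a small ODE system. The fragment below is illustrative rather than the SAGE model itself: it chains a leaching source to three compartments with first-order advective transfer and radioactive decay.

        import numpy as np
        from scipy.integrate import solve_ivp

        k_leach = 1e-3            # source release rate, 1/yr (toy value)
        k_adv = 5e-3              # compartment-to-compartment transfer, 1/yr
        lam = np.log(2) / 5730.0  # decay constant, e.g. C-14

        def rhs(t, y):
            src, c1, c2, c3 = y
            return [-(k_leach + lam) * src,
                    k_leach * src - (k_adv + lam) * c1,
                    k_adv * c1 - (k_adv + lam) * c2,
                    k_adv * c2 - (k_adv + lam) * c3]

        sol = solve_ivp(rhs, (0.0, 1.0e5), [1.0, 0.0, 0.0, 0.0], max_step=100.0)
        print("peak inventory in last compartment: %.3e" % sol.y[3].max())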

  16. A Robust Cross Coding Scheme for OFDM Systems

    NARCIS (Netherlands)

    Shao, X.; Slump, Cornelis H.

    2010-01-01

    In wireless OFDM-based systems, coding jointly over all the sub-carriers simultaneously performs better than coding separately per sub-carrier. However, the joint coding is not always optimal because its achievable channel capacity (i.e. the maximum data rate) is inversely proportional to the

  17. Soft decoding a self-dual (48, 24; 12) code

    Science.gov (United States)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  18. Maximum power point tracking techniques for wind energy systems using three levels boost converter

    Science.gov (United States)

    Tran, Cuong Hung; Nollet, Frédéric; Essounbouli, Najib; Hamzaoui, Abdelaziz

    2018-05-01

    This paper presents modeling and simulation of a three-level Boost DC-DC converter in a Wind Energy Conversion System (WECS). The three-level Boost converter has significant advantages over the conventional Boost converter. A maximum power point tracking (MPPT) method for a variable speed wind turbine using a permanent magnet synchronous generator (PMSG) is also presented. Simulation of the three-level Boost converter topology with the Perturb and Observe algorithm and with Fuzzy Logic Control is implemented in MATLAB/SIMULINK. The results of this simulation show that the system with MPPT using the fuzzy logic controller performs better than the Perturb and Observe algorithm: fast response under changing conditions and small oscillation.
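
    The Perturb and Observe algorithm itself fits in a few lines: step the duty cycle, keep the direction while power rises, reverse when it falls. The sketch below uses a toy power curve in place of the PMSG/converter model.

        def turbine_power(duty):                 # toy curve, peak at duty = 0.55
            return max(0.0, 1.0 - 8.0 * (duty - 0.55) ** 2)

        duty, step = 0.30, 0.01
        p_prev = turbine_power(duty)
        for _ in range(60):
            duty += step
            p = turbine_power(duty)
            if p < p_prev:                       # power fell: reverse direction
                step = -step
            p_prev = p
        print("duty settles near %.2f (true optimum 0.55)" % duty)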

  19. Solutions to HYDROCOIN [Hydrologic Code Intercomparison] Level 1 problems using STOKES and PARTICLE (Cases 1,2,4,7)

    International Nuclear Information System (INIS)

    Gureghian, A.B.; Andrews, A.; Steidl, S.B.; Brandstetter, A.

    1987-10-01

    HYDROCOIN (Hydrologic Code Intercomparison) Level 1 benchmark problems are solved using the finite element ground-water flow code STOKES and the pathline generating code PARTICLE developed for the Office of Crystalline Repository Development (OCRD). The objective of the Level 1 benchmark problems is to verify the numerical accuracy of ground-water flow codes by intercomparison of their results with analytical solutions and other numerical computer codes. Seven test cases were proposed for Level 1 to the Swedish Nuclear Power Inspectorate, the managing participant of HYDROCOIN. Cases 1, 2, 4, and 7 were selected by OCRD because of their appropriateness to the nature of crystalline repository hydrologic performance. The background relevance, conceptual model, and assumptions of each case are presented. The governing equations, boundary conditions, input parameters, and the solution schemes applied to each case are discussed. The results are shown in graphic and tabular form with concluding remarks. The results demonstrate the two-dimensional verification of STOKES and PARTICLE. 5 refs., 61 figs., 30 tabs

  20. CESARR V.2 manual: Computer code for the evaluation of surface storage of low and medium level radioactive waste

    International Nuclear Information System (INIS)

    Moya Rivera, J.A.; Bolado Lavin, R.

    1997-01-01

    CESARR (Code for the safety evaluation of low- and medium-level radioactive waste storage). This code was developed for probabilistic safety evaluations of low- and medium-level radioactive waste storage facilities

  1. A Tough Call : Mitigating Advanced Code-Reuse Attacks at the Binary Level

    NARCIS (Netherlands)

    Veen, Victor Van Der; Goktas, Enes; Contag, Moritz; Pawoloski, Andre; Chen, Xi; Rawat, Sanjay; Bos, Herbert; Holz, Thorsten; Athanasopoulos, Ilias; Giuffrida, Cristiano

    2016-01-01

    Current binary-level Control-Flow Integrity (CFI) techniques are weak in determining the set of valid targets for indirect control flow transfers on the forward edge. In particular, the lack of source code forces existing techniques to resort to a conservative address-taken policy that

  2. An event- and network-level analysis of college students' maximum drinking day.

    Science.gov (United States)

    Meisel, Matthew K; DiBello, Angelo M; Balestrieri, Sara G; Ott, Miles Q; DiGuiseppi, Graham T; Clark, Melissa A; Barnett, Nancy P

    2018-04-01

    Heavy episodic drinking is common among college students and remains a serious public health issue. Previous event-level research among college students has examined behaviors and individual-level characteristics that drive consumption and related consequences but often ignores the social network of people with whom these heavy drinking episodes occur. The main aim of the current study was to investigate the network of social connections between drinkers on their heaviest drinking occasions. Sociocentric network methods were used to collect information from individuals in the first-year class (N=1342) at one university. Past-month drinkers (N=972) reported on the characteristics of their heaviest drinking occasion in the past month and indicated who else among their network connections was present during this occasion. Average max drinking day indegree, or the total number of times a participant was nominated as being present on another student's heaviest drinking occasion, was 2.50 (SD=2.05). Network autocorrelation models indicated that max drinking day indegree (i.e., popularity on heaviest drinking occasions) and peers' number of drinks on their own maximum drinking occasions were significantly associated with participants' maximum number of drinks, after controlling for demographic variables, pregaming, and global network indegree (i.e., popularity in the entire first-year class). Being present at other peers' heaviest drinking occasions is associated with greater drinking quantities on one's own heaviest drinking occasion. These findings suggest the potential for interventions that target peer influences within close social networks of drinkers. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss-of-load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and random dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  4. A probabilistic assessment code system for derivation of clearance levels of radioactive materials. PASCLR user's manual

    International Nuclear Information System (INIS)

    Takahashi, Tomoyuki; Takeda, Seiji; Kimura, Hideo

    2001-01-01

    It is indicated that some types of radioactive material generated in the development and utilization of nuclear energy do not need to be subject to regulatory control because they can give rise to only trivial radiation hazards. The process of removing such materials from regulatory control is called 'clearance', and the corresponding radionuclide concentrations are called 'clearance levels'. In the Nuclear Safety Commission's discussion, a deterministic approach was applied to derive the clearance levels, which are the concentrations of radionuclides in a cleared material equivalent to an individual dose criterion. Basically, realistic parameter values were selected; if realistic values could not be defined, reasonably conservative values were selected. Additionally, stochastic approaches were used to validate the results obtained from the deterministic calculations. We have developed a computer code system, PASCLR (Probabilistic Assessment code System for derivation of Clearance Levels of Radioactive materials), that uses the Monte Carlo technique to carry out these stochastic calculations. This report describes the structure and user information for execution of the PASCLR code. (author)
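
    The stochastic approach can be sketched as a short Monte Carlo loop: sample the uncertain pathway parameters, form the dose per unit concentration, and set the clearance level from a percentile of the result against the dose criterion. All distributions and factors below are illustrative, not the PASCLR pathway models.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 100_000
        intake = rng.lognormal(np.log(50.0), 0.5, n)   # kg/yr of material handled
        dust = rng.lognormal(np.log(1e-7), 0.7, n)     # resuspension factor
        dcf = 2.5e-2                                   # dose coefficient, uSv/Bq

        dose_per_bq_g = intake * dust * dcf * 1e3      # uSv/yr per Bq/g (toy chain)
        crit = 10.0                                    # dose criterion, uSv/yr
        clearance = crit / np.percentile(dose_per_bq_g, 95.0)
        print("clearance level (95th percentile basis): %.2e Bq/g" % clearance)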

  5. Network Coding Fundamentals and Applications

    CERN Document Server

    Medard, Muriel

    2011-01-01

    Network coding is a field of information and coding theory and is a method of attaining maximum information flow in a network. This book is an ideal introduction for the communications and network engineer, working in research and development, who needs an intuitive introduction to network coding and to the increased performance and reliability it offers in many applications.
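
    The canonical illustration of that maximum-flow claim is the butterfly network, where coding (an XOR) on the bottleneck link lets both sinks receive both source bits at once; a minimal sketch:

        def butterfly(b1: int, b2: int):
            coded = b1 ^ b2            # sent on the shared bottleneck link
            sink1 = (b1, b1 ^ coded)   # sink 1 hears b1 directly, decodes b2
            sink2 = (b2 ^ coded, b2)   # sink 2 hears b2 directly, decodes b1
            return sink1, sink2

        for b1 in (0, 1):
            for b2 in (0, 1):
                assert butterfly(b1, b2) == ((b1, b2), (b1, b2))
        print("both sinks recover both source bits")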

  6. Generating Safety-Critical PLC Code From a High-Level Application Software Specification

    Science.gov (United States)

    2008-01-01

    The benefits of automatic-application code generation are widely accepted within the software engineering community. These benefits include raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off the shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is
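
    A toy translator conveys the flavor of such generation. The LCS tool and its DSL are not public, so the row format, tag names, and the structured-text-style output below are entirely invented for illustration.

        # Hypothetical tabular-spec rows: (operation, target, condition, value).
        rows = [
            ("VERIFY", "TANK_PRESS", "<", "35.0"),
            ("COMMAND", "VENT_VALVE_1", "OPEN", ""),
            ("WAIT", "5.0", "", ""),
        ]

        def generate(rows):
            out = []
            for op, a, b, c in rows:
                if op == "VERIFY":
                    out.append(f"IF NOT ({a} {b} {c}) THEN abort := TRUE; END_IF;")
                elif op == "COMMAND":
                    out.append(f"{a} := CMD_{b};")
                elif op == "WAIT":
                    out.append(f"delay(IN := TRUE, PT := T#{a}s);")
            return "\n".join(out)

        print(generate(rows))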

  7. POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs

    International Nuclear Information System (INIS)

    Hardie, R.W.

    1982-02-01

    POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants, is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case
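
    The core levelization arithmetic can be sketched directly: discount each year's costs and generation to present value and take their ratio. POPCYCLE's actual treatment of taxes, escalation, and fuel-cycle cost categories is richer than this fragment, whose inputs are illustrative.

        def levelized_cost(capital, annual_om, annual_fuel, annual_mwh,
                           years=30, rate=0.07):
            pv_cost, pv_energy = capital, 0.0
            for t in range(1, years + 1):
                d = (1.0 + rate) ** t
                pv_cost += (annual_om + annual_fuel) / d
                pv_energy += annual_mwh / d
            return pv_cost / pv_energy   # $/MWh

        print("%.1f $/MWh" % levelized_cost(capital=2.0e9, annual_om=8.0e7,
                                            annual_fuel=4.0e7, annual_mwh=7.9e6))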

  8. Orthogonal transformations for change detection, Matlab code (ENVI-like headers)

    DEFF Research Database (Denmark)

    2007-01-01

    Matlab code to do (iteratively reweighted) multivariate alteration detection (MAD) analysis, maximum autocorrelation factor (MAF) analysis, canonical correlation analysis (CCA) and principal component analysis (PCA) on image data; accommodates ENVI (like) header files.

  9. A mean field theory of coded CDMA systems

    International Nuclear Information System (INIS)

    Yano, Toru; Tanaka, Toshiyuki; Saad, David

    2008-01-01

    We present a mean field theory of code-division multiple-access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean-field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems

  10. A mean field theory of coded CDMA systems

    Energy Technology Data Exchange (ETDEWEB)

    Yano, Toru [Graduate School of Science and Technology, Keio University, Hiyoshi, Kohoku-ku, Yokohama-shi, Kanagawa 223-8522 (Japan); Tanaka, Toshiyuki [Graduate School of Informatics, Kyoto University, Yoshida Hon-machi, Sakyo-ku, Kyoto-shi, Kyoto 606-8501 (Japan); Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)], E-mail: yano@thx.appi.keio.ac.jp

    2008-08-15

    We present a mean field theory of code-division multiple-access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean-field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems.

  11. Verification of maximum impact force for interim storage cask for the Fast Flux Test Facility

    International Nuclear Information System (INIS)

    Chen, W.W.; Chang, S.J.

    1996-01-01

    The objective of this paper is to perform an impact analysis of the Interim Storage Cask (ISC) of the Fast Flux Test Facility (FFTF) for a 4-ft end drop. The ISC is a concrete cask used to store spent nuclear fuels. The analysis is to justify the impact force calculated by General Atomics (General Atomics, 1994) using the ILMOD computer code. ILMOD determines the maximum force developed by the concrete crushing which occurs when the drop energy has been absorbed. The maximum force, multiplied by the dynamic load factor (DLF), was used to determine the maximum g-level on the cask during a 4-ft end drop accident onto the heavily reinforced FFTF Reactor Service Building's concrete surface. For the analysis, this surface was assumed to be unyielding and the cask absorbed all the drop energy. This conservative assumption simplified the modeling used to qualify the cask's structural integrity for this accident condition
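
    The energy balance behind such a calculation can be sketched in a few lines: if the concrete crushes through a depth d while absorbing the drop energy, the average crush force is F = mgh/d, and the design g-level follows from the dynamic load factor. All numbers below are illustrative, not the FFTF values.

        m = 80_000.0   # cask mass, kg (assumed)
        h = 1.2192     # 4-ft drop height, m
        g = 9.81       # m/s^2
        d = 0.05       # assumed concrete crush depth, m
        dlf = 1.3      # assumed dynamic load factor

        F = m * g * h / d                # average force while crushing, N
        g_level = F * dlf / (m * g)      # peak deceleration in g's (= dlf*h/d)
        print("F = %.2e N, g-level = %.1f g" % (F, g_level))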

  12. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio

    2015-11-10

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts in which the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that depends on the smoothness of the covariance function. Owing to this fast decay, only a small set of coefficients is computed, using a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.
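
    For context, the prediction step being accelerated is kriging with, e.g., a Matérn covariance. The sketch below uses the smoothness-1/2 (exponential) member and a dense solve, which is only viable for small n; making this tractable for roughly 512,000 irregular sites is what the multi-level method addresses.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 200
        pts = rng.random((n, 2))   # non-gridded observation sites

        def cov(a, b, sill=1.0, corr_len=0.3):  # Matern nu = 1/2 (exponential)
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            return sill * np.exp(-d / corr_len)

        K = cov(pts, pts) + 1e-8 * np.eye(n)            # small nugget
        z = np.linalg.cholesky(K) @ rng.normal(size=n)  # synthetic field

        x0 = np.array([[0.5, 0.5]])                     # prediction site
        k0 = cov(pts, x0).ravel()
        w = np.linalg.solve(K, k0)                      # simple-kriging weights
        print("prediction %.3f, kriging variance %.3f" % (w @ z, 1.0 - k0 @ w))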

  13. An overview of the geochemical code MINTEQ: Applications to performance assessment for low-level wastes

    International Nuclear Information System (INIS)

    Peterson, S.R.; Opitz, B.E.; Graham, M.J.; Eary, L.E.

    1987-03-01

    The MINTEQ geochemical computer code, developed at the Pacific Northwest Laboratory (PNL), integrates many of the capabilities of its two immediate predecessors, MINEQL and WATEQ3. The MINTEQ code will be used in the Special Waste Form Lysimeters-Arid program to perform the calculations necessary to simulate (model) the contact of low-level waste solutions with heterogeneous sediments of the interaction of ground water with solidified low-level wastes. The code can calculate ion speciation/solubilitya, adsorption, oxidation-reduction, gas phase equilibria, and precipitation/dissolution of solid phases. Under the Special Waste Form Lysimeters-Arid program, the composition of effluents (leachates) from column and batch experiments, using laboratory-scale waste forms, will be used to develop a geochemical model of the interaction of ground water with commercial, solidified low-level wastes. The wastes being evaluated include power-reactor waste streams that have been solidified in cement, vinyl ester-styrene, and bitumen. The thermodynamic database for the code was upgraded preparatory to performing the geochemical modeling. Thermodynamic data for solid phases and aqueous species containing Sb, Ce, Cs, or Co were added to the MINTEQ database. The need to add these data was identified from the characterization of the waste streams. The geochemical model developed from the laboratory data will then be applied to predict the release from a field-lysimeter facility that contains full-scale waste samples. The contaminant concentrations migrating from the waste forms predicted using MINTEQ will be compared to the long-term lysimeter data. This comparison will constitute a partial field validation of the geochemical model

  14. Variable-Length Coding with Stop-Feedback for the Common-Message Broadcast Channel

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe

    2016-01-01

    This paper investigates the maximum coding rate over a K-user discrete memoryless broadcast channel for the scenario where a common message is transmitted using variable-length stop-feedback codes. Specifically, upon decoding the common message, each decoder sends a stop signal to the encoder, which transmits continuously until it receives all K stop signals. We present nonasymptotic achievability and converse bounds for the maximum coding rate, which strengthen and generalize the bounds previously reported in Trillingsgaard et al. (2015) for the two-user case. An asymptotic analysis of these bounds reveals that---contrary to the point-to-point case---the second-order term in the asymptotic expansion of the maximum coding rate decays inversely proportional to the square root of the average blocklength. This holds for certain nontrivial common-message broadcast channels, such as the binary...

  15. Development of a general coupling interface for the fuel performance code TRANSURANUS – Tested with the reactor dynamics code DYN3D

    International Nuclear Information System (INIS)

    Holt, L.; Rohde, U.; Seidl, M.; Schubert, A.; Van Uffelen, P.; Macián-Juan, R.

    2015-01-01

    Highlights: • A general coupling interface was developed for couplings of the TRANSURANUS code. • With this new tool, simplified fuel behavior models in codes can be replaced. • Applicable e.g. for several reactor types and from normal operation up to DBA. • The general coupling interface was applied to the reactor dynamics code DYN3D. • The new coupled code system DYN3D–TRANSURANUS was successfully tested for RIA. - Abstract: A general interface is presented for coupling the TRANSURANUS fuel performance code with thermal hydraulics system, sub-channel thermal hydraulics, computational fluid dynamics (CFD) or reactor dynamics codes. As a first application, the reactor dynamics code DYN3D was coupled at assembly level in order to describe the fuel behavior in more detail. In the coupling, DYN3D provides process time, time-dependent rod power and thermal hydraulics conditions to TRANSURANUS, which in the case of the two-way coupling approach transfers parameters like fuel temperature and cladding temperature back to DYN3D. Results of the coupled code system are presented for a reactivity transient scenario initiated by control rod ejection. More precisely, the two-way coupling approach systematically calculates higher maximum values for the node fuel enthalpy. These differences can be explained by the greater detail in fuel behavior modeling. The numerical performance of DYN3D–TRANSURANUS proved to be fast and stable. The coupled code system can therefore improve the assessment of safety criteria at a reasonable computational cost.
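
    The two-way exchange described above follows a simple per-time-step pattern: the reactor dynamics side supplies rod power, the fuel performance side returns fuel temperature. The toy loop below shows that pattern only; both functions and all numbers are invented stand-ins, not the codes' real interfaces.

      def dyn3d_step(fuel_temp):
          """Stand-in for the reactor dynamics side: rod power with a
          simple Doppler-like fuel temperature feedback (invented)."""
          return 40.0e3 * (1.0 - 2.0e-5 * (fuel_temp - 900.0))   # W

      def transuranus_step(rod_power, coolant_temp):
          """Stand-in for the fuel performance side: quasi-static fuel
          temperature from a lumped thermal resistance (invented)."""
          return coolant_temp + 0.02 * rod_power                 # K

      fuel_temp, coolant_temp = 900.0, 560.0
      for step in range(5):                            # process time loop
          power = dyn3d_step(fuel_temp)                # DYN3D -> TRANSURANUS
          fuel_temp = transuranus_step(power, coolant_temp)   # back-transfer
          print(f"step {step}: {power / 1e3:.1f} kW, fuel {fuel_temp:.0f} K")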

  16. Sea-Level Change in the Russian Arctic Since the Last Glacial Maximum

    Science.gov (United States)

    Horton, B.; Baranskaya, A.; Khan, N.; Romanenko, F. A.

    2017-12-01

    Relative sea-level (RSL) databases that span the Last Glacial Maximum (LGM) to present have been used to infer changes in climate, regional ice sheet variations, the rate and geographic source of meltwater influx, and the rheological structure of the solid Earth. Here, we have produced a quality-controlled RSL database for the Russian Arctic since the LGM. The database contains 394 index points, which locate the position of RSL in time and space, and 244 limiting points, which constrain the minimum or maximum limit of former sea level. In the western part of the Russian Arctic (Barents and White seas), RSL was driven by glacial isostatic adjustment (GIA) due to deglaciation of the Scandinavian ice sheet, which covered the Baltic crystalline shield at the LGM. RSL data from isolation basins show a rapid RSL fall from 80-100 m at 11-12 ka BP to 15-25 m at 4-5 ka BP. In the Arctic islands of Franz Josef Land and Novaya Zemlya, RSL data from dated driftwood in raised beaches show a gradual fall from 25-35 m at 9-10 ka BP to 5-10 m at 3 ka BP. In the Russian plain, situated at the margins of the formerly glaciated Baltic crystalline shield, RSL data from raised beaches and isolation basins show an early Holocene rise from less than -20 m at 9-11 ka BP before falling in the late Holocene, illustrating the complex interplay between ice-equivalent meltwater input and GIA. The Western Siberian Arctic (Yamal and Gydan Peninsulas, Beliy Island and islands of the Kara Sea) was not glaciated at the LGM. Sea-level data from marine and salt-marsh deposits show RSL rise at the beginning of the Holocene to a mid-Holocene highstand of 1-5 m at 5-1 ka BP. A similar, but more complex, RSL pattern is shown for Eastern Siberia. RSL data from the Laptev Sea shelf show RSL at -40 to -45 m at 11-14 ka BP. RSL data from the Lena Delta and Tiksi region show a highstand from 5 to 1 ka BP. The research is supported by RSF project 17-77-10130.

  17. Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-

    International Nuclear Information System (INIS)

    Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo

    1995-07-01

    This year is the third year of the Government-sponsored mid- and long-term nuclear power technology development project. The scope of this sub-project, titled 'The improvement of level-1 PSA computer codes', is divided into three main activities: (1) methodology development in underdeveloped fields such as risk assessment technology for plant shutdown and low power situations, (2) computer code package development for level-1 PSA, (3) applications of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants are reviewed and plant operating states (POS) are decided. A sample core damage frequency is estimated for an overdraining event during RCS low water inventory, i.e. mid-loop operation. Human reliability analysis and thermal hydraulic support analysis are identified as needed to reduce uncertainty. Two design improvement alternatives are evaluated using PSA techniques for the mid-loop operation situation: one is use of the containment spray system as a backup to the shutdown cooling system, and the other is installation of two independent level indication systems. Procedure change is identified as the more preferable option compared to hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC-Windows environment. For improved efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs, 15 tabs, 59 refs. (Author)

  18. Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-

    Energy Technology Data Exchange (ETDEWEB)

    Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    This year is the third year of the Government-sponsored mid- and long-term nuclear power technology development project. The scope of this sub-project, titled `The improvement of level-1 PSA computer codes`, is divided into three main activities: (1) methodology development in underdeveloped fields such as risk assessment technology for plant shutdown and low power situations, (2) computer code package development for level-1 PSA, (3) applications of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants are reviewed and plant operating states (POS) are decided. A sample core damage frequency is estimated for an overdraining event during RCS low water inventory, i.e. mid-loop operation. Human reliability analysis and thermal hydraulic support analysis are identified as needed to reduce uncertainty. Two design improvement alternatives are evaluated using PSA techniques for the mid-loop operation situation: one is use of the containment spray system as a backup to the shutdown cooling system, and the other is installation of two independent level indication systems. Procedure change is identified as the more preferable option compared to hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC-Windows environment. For improved efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs, 15 tabs, 59 refs. (Author).

  19. NRC model simulations in support of the hydrologic code intercomparison study (HYDROCOIN): Level 1-code verification

    International Nuclear Information System (INIS)

    1988-03-01

    HYDROCOIN is an international study for examining ground-water flow modeling strategies and their influence on safety assessments of geologic repositories for nuclear waste. This report summarizes only the combined NRC project teams' simulation efforts on the computer code bench-marking problems. The codes used to simulate these seven problems were SWIFT II, FEMWATER, UNSAT2M, USGS-3D, and TOUGH. In general, linear problems involving scalars such as hydraulic head were accurately simulated by both finite-difference and finite-element solution algorithms. Both types of codes produced accurate results even for complex geometries such as intersecting fractures. Difficulties were encountered in solving problems that involved nonlinear effects such as density-driven flow and unsaturated flow. In order to fully evaluate the accuracy of these codes, post-processing of results using particle tracking algorithms and calculating fluxes was examined. This proved very valuable by uncovering disagreements among code results even though the hydraulic-head solutions had been in agreement. 9 refs., 111 figs., 6 tabs.

  20. Performance of asynchronous fiber-optic code division multiple access system based on three-dimensional wavelength/time/space codes and its link analysis.

    Science.gov (United States)

    Singh, Jaswinder

    2010-03-10

    A novel family of three-dimensional (3-D) wavelength/time/space codes for asynchronous optical code-division-multiple-access (CDMA) systems with "zero" off-peak autocorrelation and "unity" cross correlation is reported. Antipodal signaling and differential detection are employed in the system. A maximum of [(W x T+1) x W] codes is generated for unity cross correlation, where W and T are the numbers of wavelengths and time chips used in the code and are prime. The conditions for violation of the cross-correlation constraint are discussed. Expressions for the number of generated codes are determined for various code dimensions. It is found that the maximum number of codes is generated for S systems. The codes have a code-set-size to code-size ratio greater than W/S. For instance, with a code size of 2065 (59 x 7 x 5), a total of 12,213 users can be supported, with 130 simultaneous users at a bit-error rate (BER) of 10^-9. An arrayed-waveguide-grating-based reconfigurable encoder/decoder design for 2-D implementation of the 3-D codes is presented so that the need for multiple star couplers and fiber ribbons is eliminated. The hardware requirements of the coders used for various modulation/detection schemes are given. The effect of insertion loss in the coders is shown to be significantly reduced with loss compensation by using an amplifier after encoding. An optical CDMA system for four users is simulated, and the presented results show the improvement in performance with the use of loss compensation.

  1. Synthesizing Certified Code

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  2. Low-level waste shallow burial assessment code

    International Nuclear Information System (INIS)

    Fields, D.E.; Little, C.A.; Emerson, C.J.

    1981-01-01

    PRESTO (Prediction of Radiation Exposures from Shallow Trench Operations) is a computer code developed under United States Environmental Protection Agency funding to evaluate possible health effects from radionuclide releases from shallow, radioactive-waste disposal trenches and from areas contaminated with operational spillage. The model is intended to predict radionuclide transport and the ensuing exposure and health impact to a stable, local population for a 1000-year period following closure of the burial grounds. Several classes of submodels are used in PRESTO to represent scheduled events, unit system responses, and risk evaluation processes. The code is modular to permit future expansion and refinement. Near-surface transport mechanisms considered in the PRESTO code are cap failure, cap erosion, farming or reclamation practices, human intrusion, chemical exchange within an active surface soil layer, contamination from trench overflow, and dilution by surface streams. Subsurface processes include infiltration and drainage into the trench, the ensuing solubilization of radionuclides, and chemical exchange between trench water and buried solids. Mechanisms leading to contaminated outflow include trench overflow and downward vertical percolation. If the latter outflow reaches an aquifer, radiological exposure from irrigation or domestic consumption is considered. Airborne exposure terms are evaluated using the Gaussian plume atmospheric transport formulation as implemented by Fields and Miller.
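
    The Gaussian plume formulation cited at the end has a compact closed form; the sketch below evaluates ground-level air concentration including reflection at the ground. The dispersion parameters are fixed placeholder numbers rather than the stability-class correlations an assessment code would use.

      import math

      def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
          """Air concentration (per m^3) at crosswind offset y, height z,
          for release rate q, wind speed u, and release height h."""
          lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
          vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                      + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))  # ground reflection
          return q * lateral * vertical / (2 * math.pi * u * sigma_y * sigma_z)

      # 1 Bq/s release, 3 m/s wind, ground-level receptor on the plume axis:
      print(gaussian_plume(q=1.0, u=3.0, y=0.0, z=0.0, h=10.0,
                           sigma_y=20.0, sigma_z=10.0))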

  3. Adaptive decoding of convolutional codes

    Science.gov (United States)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
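
    As a concrete reference for the maximum likelihood decoding discussed here, the sketch below implements a minimal hard-decision Viterbi decoder for the classic rate-1/2, constraint length 3 code with generators (7, 5) octal. It is a toy illustration, not the authors' syndrome-based decoder.

      G = ((1, 1, 1), (1, 0, 1))   # generator taps (7, 5) octal, newest bit first

      def conv_encode(bits):
          reg, out = [0, 0], []
          for b in bits:
              window = [b] + reg                   # 3-bit window, newest first
              for g in G:
                  out.append(sum(x & t for x, t in zip(window, g)) % 2)
              reg = window[:2]                     # shift the register
          return out

      def viterbi_decode(rx):
          states = [(a, b) for a in (0, 1) for b in (0, 1)]
          metric = {s: 0 if s == (0, 0) else float("inf") for s in states}
          path = {s: [] for s in states}
          for t in range(0, len(rx), 2):
              chunk = rx[t:t + 2]
              new_metric = {s: float("inf") for s in states}
              new_path = {}
              for s in states:
                  if metric[s] == float("inf"):
                      continue
                  for b in (0, 1):                 # hypothesize the next input bit
                      window = (b,) + s
                      exp = [sum(x & t2 for x, t2 in zip(window, g)) % 2 for g in G]
                      m = metric[s] + sum(e != r for e, r in zip(exp, chunk))
                      nxt = (b, s[0])
                      if m < new_metric[nxt]:      # keep the survivor path
                          new_metric[nxt], new_path[nxt] = m, path[s] + [b]
              metric, path = new_metric, new_path
          return path[min(metric, key=metric.get)]

      msg = [1, 0, 1, 1, 0, 0, 1]
      rx = conv_encode(msg)
      rx[3] ^= 1                                   # inject one channel error
      assert viterbi_decode(rx) == msg             # corrected by ML decoding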

  4. Contribution to the study of maximum levels for liquid radioactive waste disposal into continental and sea water. Treatment of some typical samples

    International Nuclear Information System (INIS)

    Bittel, R.; Mancel, J.

    1968-10-01

    The most important carriers of radioactive contamination of man are foodstuffs as a whole, and not only ingested water or inhaled air. That is the reason why, in accordance with the spirit of the recent recommendations of the ICRP, it is proposed to substitute the idea of maximum levels of contamination of water for the MPC. In the case of aquatic food chains (aquatic organisms and irrigated foodstuffs), knowledge of the ingested quantities and of the food/water concentration factors makes it possible to determine these maximum levels, or to establish a linear relation between the maximum levels for the two primary carriers of contamination (continental and sea waters). The notions of critical food consumption, critical radioelements and waste disposal formulae are considered in the same way, taking care to attach the greatest possible importance to local situations. (authors) [fr]

  5. Lake-level increasing under the climate cryoaridization conditions during the Last Glacial Maximum

    Science.gov (United States)

    Amosov, Mikhail; Strelkov, Ivan

    2017-04-01

    Lake genesis and lake-level rise during the Last Glacial Maximum (LGM) are paramount issues in paleoclimatology. Investigating these problems reveals the regularities of lake development and characterizes the conditions of arid territories at the LGM stage. The pluvial theory is the most prevalent conception of lake formation during the LGM. This theory is based on the fact that water bodies emerged and their levels increased due to torrential rainfalls. In this study, attention is paid to an alternative assumption about lake genesis at the LGM stage, called climate cryoaridization. According to this hypothesis, the endorheic water basins rose in level because of simultaneous climate aridity and temperature decrease. In this research, lake-level rise in the endorheic regions of Central Asia and the South American Altiplano of the Andes is described. The lake investigation relates to conditions during the LGM. The study also includes a lake catalogue clearly presenting the basin conditions at the LGM stage and nowadays. The data compilation partly consists of information from an earlier work of Mikhail Amosov, Lake-levels, Vegetation And Climate In Central Asia During The Last Glacial Maximum (EGU2014-3015). According to the investigation, the catalogue of 27 lakes showed that most of the water bodies had higher levels. This feature could be noted for the biggest lakes, the Aral Sea, Lake Balkhash, Issyk-Kul etc., and for the small ones located in the mountains, such as the Pamir, Tian-Shan and Tibet. Yet some lakes situated in the Central Asian periphery (Lake Qinghai and lakes in Inner Mongolia) used to be lower than nowadays. Also, lake-level rise on the Altiplano turned out to be a significant feature during the LGM, in accordance with the data from 5 lakes: Titicaca, Coipasa-Uyuni, Lejia, Miscanti and Santa-Maria. Most of the current endorheic basins at the LGM stage were filled with water due to abundant

  6. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consist of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself, among the upper and lower level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a fashion similar to organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into other data processing programs was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, this program can be used for automation of routine work in the department of radiology.
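
    The two-step lookup the program implements can be sketched in a few lines; the dictionary entries below are invented placeholders, not the real ACR tables, but the flow (organ code first, then the pathology file selected by its first digit) matches the description.

      organ_codes = {"chest": "131", "skull": "11"}        # invented entries
      pathology_files = {                                  # keyed by the first
          "1": {"pneumonia": "3661", "fracture": "41"},    # digit of the organ code
      }

      def acr_code(organ_name, pathology_name):
          organ = organ_codes[organ_name]                  # step 1: organ code
          path_file = pathology_files[organ[0]]            # step 2: pick the file
          return f"{organ}.{path_file[pathology_name]}"    # combined ACR code

      print(acr_code("chest", "pneumonia"))                # -> 131.3661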

  7. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  8. Short-Block Protograph-Based LDPC Codes

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.

  9. Overview of the geochemical code MINTEQ: applications to performance assessment for low-level wastes

    International Nuclear Information System (INIS)

    Graham, M.J.; Peterson, S.R.

    1985-09-01

    The MINTEQ geochemical computer code, developed at Pacific Northwest Laboratory, integrates many of the capabilities of its two immediate predecessors, WATEQ3 and MINEQL. MINTEQ can be used to perform the calculations necessary to simulate (model) the contact of low-level waste solutions with heterogeneous sediments or the interaction of ground water with solidified low-level wastes. The code is capable of performing calculations of ion speciation/solubility, adsorption, oxidation-reduction, gas phase equilibria, and precipitation/dissolution of solid phases. Under the Special Waste Form Lysimeters-Arid program, the composition of effluents (leachates) from column and batch experiments, using laboratory-scale waste forms, will be used to develop a geochemical model of the interaction of ground water with commercial solidified low-level wastes. The wastes being evaluated include power reactor waste streams that have been solidified in cement, vinyl ester-styrene, and bitumen. The thermodynamic database for the code is being upgraded before the geochemical modeling is performed. Thermodynamic data for cobalt, antimony, cerium, and cesium solid phases and aqueous species are being added to the database. The need to add these data was identified from the characterization of the waste streams. The geochemical model developed from the laboratory data will then be applied to predict the release from a field-lysimeter facility that contains full-scale waste samples. The contaminant concentrations migrating from the wastes predicted using MINTEQ will be compared to the long-term lysimeter data. This comparison will constitute a partial field validation of the geochemical model. 28 refs

  10. Combinatorial neural codes from a mathematical coding theory perspective.

    Science.gov (United States)

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
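
    To make the notion of error correction concrete for a binary code, the sketch below decodes a noisy word to the nearest codeword in Hamming distance, which is maximum likelihood decoding on a binary symmetric channel. The codebook is toy data, not a receptive field code from the paper.

      import numpy as np

      codebook = np.array([[0, 0, 0, 0, 0],       # toy codewords, one per row
                           [1, 1, 1, 0, 0],
                           [0, 0, 1, 1, 1],
                           [1, 1, 0, 1, 1]])

      def decode(word):
          dists = (codebook != word).sum(axis=1)   # Hamming distance to each codeword
          return codebook[dists.argmin()]

      print(decode(np.array([1, 1, 1, 1, 0])))     # one flipped bit -> [1 1 1 0 0]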

  11. Complete permutation Gray code implemented by finite state machine

    Directory of Open Access Journals (Sweden)

    Li Peng

    2014-09-01

    Full Text Available An enumeration method for complete permutation arrays is proposed. The list of n! permutations based on a Gray code defined over the finite symbol set Z(n) = {1, 2, …, n} is implemented by a finite state machine, named n-RPGCF. An RPGCF can be used to search for permutation codes and provides improved lower bounds on the maximum cardinality of a permutation code in some cases.
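
    For orientation, the sketch below enumerates permutations so that successive ones differ by a single adjacent transposition, using the classical Steinhaus-Johnson-Trotter scheme. It illustrates the general idea of a permutation Gray code only; it is not the paper's n-RPGCF finite state machine.

      def sjt(n):
          perm = list(range(1, n + 1))
          direction = {v: -1 for v in perm}       # every value initially points left
          yield tuple(perm)
          while True:
              mobile, mi = None, None             # largest value pointing at a smaller one
              for i, v in enumerate(perm):
                  j = i + direction[v]
                  if 0 <= j < n and perm[j] < v and (mobile is None or v > mobile):
                      mobile, mi = v, i
              if mobile is None:
                  return                          # all n! permutations emitted
              j = mi + direction[mobile]
              perm[mi], perm[j] = perm[j], perm[mi]   # one adjacent transposition
              for v in perm:
                  if v > mobile:
                      direction[v] = -direction[v]    # larger values change direction
              yield tuple(perm)

      for p in sjt(3):
          print(p)    # 123 132 312 321 231 213: neighbours differ by one swap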

  12. Economic levels of thermal resistance for house envelopes: Considerations for a national energy code

    International Nuclear Information System (INIS)

    Swinton, M.C.; Sander, D.M.

    1992-01-01

    A code for energy efficiency in new buildings is being developed by the Standing Committee on Energy Conservation in Buildings. The precursor to the new code used national average energy rates and construction costs to determine economically optimum levels of insulation, and it is believed that this resulted in prescription of sub-optimum insulation levels in any region of Canada where energy or construction costs differ significantly from the average. A new approach for determining optimum levels of thermal insulation is proposed. The analytic techniques use month-by-month energy balances of heat loss and gain; use the gain load ratio (GLR) correlation for predicting the fraction of useable free heat; increase confidence in the savings predictions for above-grade envelopes; can take into account solar effects on windows; and are compatible with below-grade heat loss analysis techniques in use. A sensitivity analysis was performed to determine whether reasonable variations in house characteristics would cause significant differences in predicted savings. The life cycle costing technique developed will allow the selection of thermal resistances that are commonly met by industry. Environmental energy cost multipliers can be used with the proposed methodology, which could have a minor role in encouraging the next higher level of energy efficiency. 11 refs., 6 figs., 2 tabs

  13. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
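
    The write-run-read pattern described above is easy to picture in script form. The sketch below is a generic stand-alone illustration of that pattern; the executable and file names are placeholders, and GoldSim's actual DLL calling convention is not reproduced.

      import subprocess

      def run_external(inputs, exe="./external_code", infile="run.inp",
                       outfile="run.out"):
          with open(infile, "w") as f:                # 1. write the input file
              for name, value in inputs.items():
                  f.write(f"{name} = {value}\n")
          subprocess.run([exe, infile], check=True)   # 2. run the external code
          outputs = {}
          with open(outfile) as f:                    # 3. read the results back
              for line in f:
                  if "=" in line:
                      name, value = line.split("=", 1)
                      outputs[name.strip()] = float(value)
          return outputs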

  14. Adaptive decoding of convolutional codes

    Directory of Open Access Journals (Sweden)

    K. Hueske

    2007-06-01

    Full Text Available Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  15. Coding training for medical students: How good is diagnoses coding with ICD-10 by novices?

    Directory of Open Access Journals (Sweden)

    Stausberg, Jürgen

    2005-04-01

    Full Text Available Teaching of knowledge and competence in documentation and coding is an essential part of medical education. Therefore, coding training was placed within the course on epidemiology, medical biometry, and medical informatics. From this, we can draw conclusions about the quality of coding by novices. One hundred and eighteen students coded diagnoses from 15 nephrological cases as homework. In addition to interrater reliability, validity was calculated by comparison with a reference coding. On the level of terminal codes, 59.3% of the students' results were correct. The completeness was calculated as 58.0%. The results on the chapter level increased to 91.5% and 87.7%, respectively. For the calculation of reliability, a new, simple measure was developed that leads to values of 0.46 on the level of terminal codes and 0.87 on the chapter level for interrater reliability. The figures for concordance with the reference coding are quite similar. In contrast, routine data show considerably lower results, with 0.34 and 0.63 respectively. Interrater reliability and validity of coding by novices is as good as coding by experts. The missing advantage of experts could be explained by the workload of documentation and a negative attitude to coding on the one hand. On the other hand, coding in a DRG system is handicapped by a large number of detailed coding rules, which do not end in uniform results but rather lead to wrong and random codes. In any case, students left the course well prepared for coding.

  16. Constructing LDPC Codes from Loop-Free Encoding Modules

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

    channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.

  17. Development of SAGE, A computer code for safety assessment analyses for Korean Low-Level Radioactive Waste Disposal

    International Nuclear Information System (INIS)

    Zhou, W.; Kozak, Matthew W.; Park, Joowan; Kim, Changlak; Kang, Chulhyung

    2002-01-01

    This paper describes a computer code, called SAGE (Safety Assessment Groundwater Evaluation), to be used for evaluation of the concept for low-level waste disposal in the Republic of Korea (ROK). The conceptual model in the code is focused on releases from a gradually degrading engineered barrier system to an underlying unsaturated zone, and thence to a saturated groundwater zone. Doses can be calculated for several biosphere systems, including drinking contaminated groundwater and subsequent contamination of foods, rivers, lakes, or the ocean by that groundwater. The flexibility of the code will permit both generic analyses in support of design and site development activities, and straightforward modification to permit site-specific and design-specific safety assessments of a real facility as progress is made toward implementation of a disposal site. In addition, the code has been written to easily interface with more detailed codes for specific parts of the safety assessment. In this way, the code's capabilities can be significantly expanded as needed. The code has the capability to treat input parameters either deterministically or probabilistically. Parameter input is achieved through a user-friendly graphical user interface.

  18. An investigation of the maximum penetration level of a photovoltaic (PV) system into a traditional distribution grid

    Science.gov (United States)

    Chalise, Santosh

    Although solar photovoltaic (PV) systems have remained the fastest growing renewable power generating technology, variability as well as uncertainty in the output of PV plants is a significant issue. This rapid increase in PV grid-connected generation presents not only progress in clean energy but also challenges in integration with traditional electric power grids which were designed for transmission and distribution of power from central stations. Unlike conventional electric generators, PV panels do not have rotating parts and thus have no inertia. This potentially causes a problem when the solar irradiance incident upon a PV plant changes suddenly, for example, when scattered clouds pass quickly overhead. The output power of the PV plant may fluctuate nearly as rapidly as the incident irradiance. These rapid power output fluctuations may then cause voltage fluctuations, frequency fluctuations, and power quality issues. These power quality issues are more severe with increasing PV plant power output. This limits the maximum power output allowed from interconnected PV plants. Voltage regulation of a distribution system, a focus of this research, is a prime limiting factor in PV penetration levels. The IEEE 13-node test feeder, modeled and tested in the MATLAB/Simulink environment, was used as an example distribution feeder to analyze the maximum acceptable penetration of a PV plant. The effect of the PV plant's location was investigated, along with the addition of a VAR compensating device (a D-STATCOM in this case). The results were used to develop simple guidelines for determining an initial estimate of the maximum PV penetration level on a distribution feeder. For example, when no compensating devices are added to the system, a higher level of PV penetration is generally achieved by installing the PV plant close to the substation. The opposite is true when a VAR compensator is installed with the PV plant. In these cases, PV penetration levels over 50% may be

  19. Cracking the code: the accuracy of coding shoulder procedures and the repercussions.

    Science.gov (United States)

    Clement, N D; Murray, I R; Nie, Y X; McBirnie, J M

    2013-05-01

    Coding of patients' diagnoses and surgical procedures is subject to error levels of up to 40%, with consequences for distribution of resources and financial recompense. Our aim was to explore and address reasons behind coding errors of shoulder diagnoses and surgical procedures and to evaluate a potential solution. A retrospective review of 100 patients who had undergone surgery was carried out. Coding errors were identified and the reasons explored. A coding proforma was designed to address these errors and was prospectively evaluated for 100 patients. The financial implications were also considered. Retrospective analysis revealed that an entirely correct primary diagnosis was assigned in only 54 patients (54%), and only 7 patients (7%) had a correct procedure code assigned. Coders identified indistinct clinical notes and poor clarity of procedure codes as reasons for errors. The proforma was significantly more likely than routine coding to assign the correct diagnosis (odds ratio 18.2) and the correct procedure code (odds ratio 310.0). High error levels for coding are due to misinterpretation of notes and ambiguity of procedure codes. This can be addressed by allowing surgeons to assign the diagnosis and procedure using a simplified list that is passed directly to coding.

  20. Research on the improvement of nuclear safety -Development of computing code system for level 3 PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jong Tae; Kim, Dong Ha; Park, Won Seok; Hwang, Mi Jeong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated. These results will give physical insight for the development of a new dispersion model. A wind tunnel experiment with a bell-shaped hill model was made in order to develop a new dispersion model, and an improved dispersion model was developed based on the concentration distribution data obtained from the wind tunnel experiment. This model will be added as an option to the atmospheric dispersion code. A stand-alone atmospheric code, written in the MS Visual Basic programming language and running in the Windows environment on a PC, was developed. A user can easily select a necessary data file and type input data by clicking menus, and can select calculation options such as building wake, plume rise, etc., if necessary. A user can also easily understand the meaning of the concentration distribution on the map around the plant site as well as the output files. Also, the methodologies for the estimation of radiation exposure and for the calculation of risks were established. These methodologies will be used for the development of modules for radiation exposure and risks, respectively. These modules will be developed independently and finally combined with the atmospheric dispersion code in order to develop a level 3 PSA code. 30 tabs., 56 figs., refs. (Author).

  1. Research on the improvement of nuclear safety -Development of computing code system for level 3 PSA

    International Nuclear Information System (INIS)

    Jeong, Jong Tae; Kim, Dong Ha; Park, Won Seok; Hwang, Mi Jeong

    1995-07-01

    Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated. These results will give physical insight for the development of a new dispersion model. A wind tunnel experiment with a bell-shaped hill model was made in order to develop a new dispersion model, and an improved dispersion model was developed based on the concentration distribution data obtained from the wind tunnel experiment. This model will be added as an option to the atmospheric dispersion code. A stand-alone atmospheric code, written in the MS Visual Basic programming language and running in the Windows environment on a PC, was developed. A user can easily select a necessary data file and type input data by clicking menus, and can select calculation options such as building wake, plume rise, etc., if necessary. A user can also easily understand the meaning of the concentration distribution on the map around the plant site as well as the output files. Also, the methodologies for the estimation of radiation exposure and for the calculation of risks were established. These methodologies will be used for the development of modules for radiation exposure and risks, respectively. These modules will be developed independently and finally combined with the atmospheric dispersion code in order to develop a level 3 PSA code. 30 tabs., 56 figs., refs. (Author)

  2. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the ``unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.

  3. A probabilistic assessment code system for derivation of clearance levels of radioactive materials. PASCLR user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Tomoyuki [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst; Takeda, Seiji; Kimura, Hideo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-01-01

    It has been indicated that some types of radioactive material generated from the development and utilization of nuclear energy do not need to be subject to regulatory control because they can only give rise to trivial radiation hazards. The process of removing such materials from regulatory control is called 'clearance'. The corresponding levels of the concentration of radionuclides are called 'clearance levels'. In the Nuclear Safety Commission's discussion, the deterministic approach was applied to derive the clearance levels, which are the concentrations of radionuclides in a cleared material equivalent to an individual dose criterion. Basically, realistic parameter values were selected for it; if realistic values could not be defined, reasonably conservative values were selected. Additionally, stochastic approaches were performed to validate the results obtained by the deterministic calculations. We have developed a computer code system, PASCLR (Probabilistic Assessment code System for derivation of Clearance Levels of Radioactive materials), using the Monte Carlo technique for carrying out the stochastic calculations. This report describes the structure and user information for execution of the PASCLR code. (author)
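
    In the spirit of the stochastic calculations, the toy Monte Carlo below samples uncertain pathway parameters, accumulates the distribution of dose per unit concentration, and backs out a clearance level from a dose criterion. The pathway model, distributions, and numbers are invented illustrations, not PASCLR's models.

      import random

      DOSE_CRITERION = 10e-6        # Sv/y, order of a trivial-dose criterion

      def dose_per_unit_concentration():
          """Dose (Sv/y) per unit activity concentration (Bq/g), from an
          invented inhalation pathway with sampled parameters."""
          dust_loading = random.lognormvariate(-9.2, 0.5)   # g/m^3, assumed
          breathing_rate = random.uniform(0.8, 1.2)         # m^3/h, assumed
          exposure_time = random.uniform(100.0, 2000.0)     # h/y, assumed
          dose_coefficient = 1.0e-8                         # Sv/Bq, assumed
          return dust_loading * breathing_rate * exposure_time * dose_coefficient

      samples = sorted(dose_per_unit_concentration() for _ in range(10000))
      p95 = samples[int(0.95 * len(samples))]           # conservative percentile
      print("clearance level ~", DOSE_CRITERION / p95, "Bq/g")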

  4. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environment and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that cTN code can provide a better packet loss protection performance with lower computation complexity than tTN code.

  5. [INVITED] Luminescent QR codes for smart labelling and sensing

    Science.gov (United States)

    Ramalho, João F. C. B.; António, L. C. F.; Correia, S. F. H.; Fu, L. S.; Pinho, A. S.; Brites, C. D. S.; Carlos, L. D.; André, P. S.; Ferreira, R. A. S.

    2018-05-01

    QR (Quick Response) codes are two-dimensional barcodes composed of special geometric patterns of black modules in a white square background that can encode different types of information with high density and robustness, correct errors and physical damage, and thus keep the stored information protected. Recently, these codes have gained increased attention as they offer a simple physical tool for quick access to Web sites for advertising and social interaction. Challenges encompass increasing the storage capacity limit, even though they can already store approximately 350 times more information than common barcodes, and encoding different types of characters (e.g., numeric, alphanumeric, kanji and kana). In this work, we fabricate luminescent QR codes based on a poly(methyl methacrylate) substrate coated with organic-inorganic hybrid materials doped with trivalent terbium (Tb3+) and europium (Eu3+) ions, demonstrating an increase of storage capacity per unit area by a factor of two through colour multiplexing, when compared to conventional QR codes. A novel methodology to decode the multiplexed QR codes is developed based on a colour separation threshold, where a decision level is calculated through a maximum-likelihood criterion to minimize the error probability of the demultiplexed modules, maximizing the foreseen total storage capacity. Moreover, the thermal dependence of the emission colour coordinates of the Eu3+/Tb3+-based hybrids enables simultaneous QR code colour multiplexing and may be used to sense temperature (reproducibility higher than 93%), opening new fields of application for QR codes as smart labels for sensing.
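
    A minimal sketch of the colour demultiplexing idea: each module's red and green intensities are thresholded to recover two independent bit layers. Fixed thresholds stand in here for the maximum-likelihood decision level derived in the paper.

      import numpy as np

      def demultiplex(rgb, t_red=0.5, t_green=0.5):
          """rgb: H x W x 3 module colours in [0, 1]; returns two bit layers."""
          red_layer = (rgb[..., 0] > t_red).astype(int)      # Eu3+ channel
          green_layer = (rgb[..., 1] > t_green).astype(int)  # Tb3+ channel
          return red_layer, green_layer

      # A 2 x 2 toy code storing two independent bit layers in one image:
      modules = np.array([[[1, 0, 0], [0, 1, 0]],
                          [[1, 1, 0], [0, 0, 0]]], dtype=float)
      print(demultiplex(modules))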

  6. Evaluation of three coding schemes designed for improved data communication

    Science.gov (United States)

    Snelsire, R. W.

    1974-01-01

    Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which is a function of both the amount of data rejected and the error rate. The Viterbi maximum likelihood decoding algorithm as a decoding procedure is reviewed. This evaluation is obtained by simulating the system on a digital computer. Short constraint length rate 1/2 quick-look codes are studied, and their performance is compared to general nonsystematic codes.

  7. Protograph LDPC Codes for the Erasure Channel

    Science.gov (United States)

    Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes. Simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message passing decoding to proceed.
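
    The copy-and-permute operation has a compact matrix description: every edge of the protograph is replaced by a permutation connecting the N copies of its endpoints. The sketch below lifts an arbitrary toy base matrix this way; real designs choose the permutations carefully rather than at random.

      import numpy as np

      rng = np.random.default_rng(0)
      base = np.array([[1, 1, 0],     # toy protograph: rows are check nodes,
                       [0, 1, 1]])    # columns are variable nodes
      N = 4                           # number of protograph copies

      H = np.zeros((base.shape[0] * N, base.shape[1] * N), dtype=int)
      for i, j in zip(*np.nonzero(base)):
          perm = rng.permutation(N)   # one permutation per protograph edge
          H[i * N + np.arange(N), j * N + perm] = 1

      print(H)    # parity-check matrix of the derived (lifted) graph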

  8. Structured Light Based 3d Scanning for Specular Surface by the Combination of Gray Code and Phase Shifting

    Science.gov (United States)

    Zhang, Yujia; Yilmaz, Alper

    2016-06-01

    Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between projected and recovered patterns are computed in the decoding process, which is used to generate the 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, raise difficulties in reconstruction. In this paper, we apply the maximum min-SW gray code to reduce the indirect illumination effects on specular surfaces. We also analyze the errors when comparing the maximum min-SW gray code and the conventional gray code, which justifies that the maximum min-SW gray code has a significant advantage in reducing indirect illumination effects. To achieve sub-pixel accuracy, we project high frequency sinusoidal patterns onto the scene simultaneously. But for specular surfaces, the high frequency patterns are susceptible to decoding errors, and incorrect decoding of high frequency patterns results in a loss of depth resolution. Our method resolves this problem by combining the low frequency maximum min-SW gray code and the high frequency phase shifting code, which achieves dense 3D reconstruction for specular surfaces. Our contributions include: (i) a complete setup of the structured light based 3D scanning system; (ii) a novel combination technique of the maximum min-SW gray code and phase shifting code: first, phase shifting decoding with sub-pixel accuracy; then, the maximum min-SW gray code is used to resolve the phase ambiguity. According to the experimental results and data analysis, our structured light based 3D scanning system enables high quality dense reconstruction of scenes with a small number of images. Qualitative and quantitative comparisons are performed to extract the advantages of our new
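
    The division of labour between the two pattern types can be made concrete with the standard binary-reflected Gray code: the decoded stripe index fixes the period, and the wrapped phase refines the position within it. The sketch assumes that standard code; the maximum min-SW variant used in the paper reorders the codewords but is decoded analogously.

      import math

      def to_gray(n):
          return n ^ (n >> 1)         # binary-reflected Gray encoding

      def from_gray(g):
          n = 0
          while g:                    # fold the Gray word back to binary
              n ^= g
              g >>= 1
          return n

      def absolute_position(gray_word, phi, stripe_width):
          """Stripe index from the Gray word, sub-stripe offset from the
          wrapped phase phi in [0, 2*pi)."""
          k = from_gray(gray_word)
          return (k + phi / (2 * math.pi)) * stripe_width

      assert from_gray(to_gray(37)) == 37
      print(absolute_position(to_gray(5), math.pi, stripe_width=8.0))  # 44.0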

  9. Determination Of Maximum Power Of The RSG-Gas At Power Operation Mode Using One Line Cooling System

    International Nuclear Information System (INIS)

    Hastuti, Endiah Puji; Kuntoro, Iman; Darwis Isnaini, M.

    2000-01-01

    In the framework of minimizing operation cost, an operation mode using one line of the cooling system is being evaluated. The maximum reactor power shall be determined to assure that the existing safety criteria are not violated. The analysis was done by means of a core thermal hydraulic code, COOLOD-N. The code solves the core thermal hydraulic equations at steady-state conditions. By varying the reactor power as input, thermal hydraulic parameters such as fuel cladding and fuel meat temperatures, as well as the safety margin against flow instability, were calculated. Imposing the safety criteria on the results, the maximum permissible power for this operation mode was obtained as 17.1 MW. Nevertheless, for operation the maximum power is limited to 15 MW.

  10. Bounds on the Capacity of Weakly constrained two-dimensional Codes

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2002-01-01

    Upper and lower bounds are presented for the capacity of weakly constrained two-dimensional codes. As an example, the maximum entropy is calculated for two simple models of 2-D codes constraining the probability of neighboring 1s. For given models of the coded data, upper and lower bounds on the capacity for 2-D channel models based on occurrences of neighboring 1s are considered.

  11. Iterative Decoding of Concatenated Codes: A Tutorial

    Directory of Open Access Journals (Sweden)

    Phillip A. Regalia

    2005-05-01

    Full Text Available The turbo decoding algorithm of a decade ago constituted a milestone in error-correction coding for digital communications, and has inspired extensions to generalized receiver topologies, including turbo equalization, turbo synchronization, and turbo CDMA, among others. Despite an accrued understanding of iterative decoding over the years, the “turbo principle” remains elusive to master analytically, thereby inciting interest from researchers outside the communications domain. In this spirit, we develop a tutorial presentation of iterative decoding for parallel and serial concatenated codes, in terms hopefully accessible to a broader audience. We motivate iterative decoding as a computationally tractable attempt to approach maximum-likelihood decoding, and characterize fixed points in terms of a “consensus” property between constituent decoders. We review how the decoding algorithm for both parallel and serial concatenated codes coincides with an alternating projection algorithm, which allows one to identify conditions under which the algorithm indeed converges to a maximum-likelihood solution, in terms of particular likelihood functions factoring into the product of their marginals. The presentation emphasizes a common framework applicable to both parallel and serial concatenated codes.

  12. Coupling Legacy and Contemporary Deterministic Codes to Goldsim for Probabilistic Assessments of Potential Low-Level Waste Repository Sites

    Science.gov (United States)

    Mattie, P. D.; Knowlton, R. G.; Arnold, B. W.; Tien, N.; Kuo, M.

    2006-12-01

    Sandia National Laboratories (Sandia), a U.S. Department of Energy national laboratory, has over 30 years' experience in radioactive waste disposal and is providing assistance internationally in a number of areas relevant to the safety assessment of radioactive waste disposal systems. International technology transfer efforts are often hampered by small budgets, time schedule constraints, and a lack of experienced personnel in countries with small radioactive waste disposal programs. In an effort to surmount these difficulties, Sandia has developed a system that utilizes a combination of commercially available codes and existing legacy codes for probabilistic safety assessment modeling, which facilitates technology transfer and maximizes limited available funding. Numerous codes developed and endorsed by the United States Nuclear Regulatory Commission, and codes developed and maintained by the United States Department of Energy, are generally available to foreign countries after addressing import/export control and copyright requirements. From a programmatic view, it is easier to utilize existing codes than to develop new ones. From an economic perspective, it is not possible for most countries with small radioactive waste disposal programs to maintain complex software that meets the rigors of both domestic regulatory requirements and international peer review. Therefore, re-vitalization of deterministic legacy codes, as well as adaptation of contemporary deterministic codes, provides a credible and solid computational platform for constructing probabilistic safety assessment models. External model linkage capabilities in GoldSim and the techniques applied to facilitate this process will be presented using example applications, including Breach, Leach, and Transport-Multiple Species (BLT-MS), a U.S. NRC sponsored code simulating release and transport of contaminants from a subsurface low-level waste disposal facility used in a cooperative technology transfer

  13. (Almost) practical tree codes

    KAUST Repository

    Khina, Anatoly

    2016-08-15

    We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.

  14. Simplified method of checking the observance of maximum permissible activity of waste forms to be placed in the Konrad shaft for final waste storage

    International Nuclear Information System (INIS)

    Berg, H.P.; Piefke, F.

    1986-10-01

    The requirements to be met by waste forms destined for final storage in the Konrad shaft define, among other things, maximum permissible activity levels which have been determined from the various parts of the safety analyses. For waste forms with very low activity levels, it is suitable to compile all of the very specific requirements in one checking list and to perform the checking as simply as is adequate. On the basis of the compilation of requirements defined for normal operation of the storage facility, hypothetical accidents, thermal loads affecting the host rock, and criticality safety, the maximum permissible activities are derived that are to be checked by the simplified control measures explained. The report explains the computer programs for the ANKONA code. (orig.) [de

  15. User's guide to the biosphere code ECOS

    International Nuclear Information System (INIS)

    Kane, P.; Thorne, M.C.

    1984-10-01

    This report constitutes the user's guide to the biosphere model ECOS and provides a detailed description of the processes modelled and mathematical formulations used. The FORTRAN code ECOS is an equilibrium-type compartmental biosphere code. ECOS was designed with the objective of producing a general but comprehensive code for use in the assessment of the radiological impact of unspecified geological repositories for radioactive waste. ECOS transforms the rate of release of activity from the geosphere to the rate of accumulation of weighted committed effective dose equivalent (dose). Both maximum individual dose (critical group dose) and collective dose rates may be computed. (author)

  16. Coding of intonational meanings beyond F0

    DEFF Research Database (Denmark)

    Niebuhr, Oliver

    2008-01-01

    to H+L*L-% and L*+HL-% within the autosegmental metrical (AM) model. Aspirations in early-peak contexts were characterized by (a) "short", (b) "high-intensity" noise with (c) "low" frequency values for the spectral energy maximum above the lower spectral energy boundary. The opposite holds...... for aspirations accompanying late-peak productions. Starting from the acoustic analysis, a perception experiment was performed using a variant of the semantic differential paradigm. The stimuli were varied in the duration and intensity pattern as well as the spectral energy pattern of the final /t/ aspiration......, so far solely associated with intonation contours. Hence, the traditionally separated segmental and suprasegmental coding levels seem to be more intertwined than previously thought....

  17. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  18. An in-depth study of sparse codes on abnormality detection

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2016-01-01

    Sparse representation has been applied successfully in abnormal event detection, in which the baseline is to learn a dictionary accompanied by sparse codes. While much emphasis is put on discriminative dictionary construction, there are no comparative studies of sparse codes regarding abnormality...... are carried out from various angles to better understand the applicability of sparse codes, including computation time, reconstruction error, sparsity, detection accuracy, and their performance combining various detection methods. The experiment results show that combining OMP codes with maximum coordinate...

  19. The Red Sea during the Last Glacial Maximum: implications for sea level reconstructions

    Science.gov (United States)

    Gildor, H.; Biton, E.; Peltier, W. R.

    2006-12-01

    The Red Sea (RS) is a semi-enclosed basin connected to the Indian Ocean via a narrow and shallow strait and surrounded by arid areas; it exhibits high sensitivity to atmospheric changes and sea level reduction. We have used the MIT GCM to investigate the changes in the hydrography and circulation of the RS in response to reduced sea level, variability in the Indian monsoons, and changes in atmospheric temperature and humidity that occurred during the Last Glacial Maximum (LGM). The model results show high sensitivity to sea level reduction, especially in the salinity field (which increases with the reduction in sea level), together with a mild atmospheric impact. Sea level reduction decreases the stratification, increases subsurface temperatures, and alters the circulation pattern at the Strait of Bab el Mandab, which experiences a transition from submaximal flow to maximal flow. The reduction in sea level at the LGM alters the location of deep water formation, which shifts to an open-sea convective site in the northern part of the RS, compared to the present-day situation in which deep water is formed from the Gulf of Suez outflow. Our main result, based both on the GCM and on a simple hydraulic control model that takes into account mixing processes at the Strait of Bab el Mandab, is that sea level was reduced by only ~100 m in the Bab el Mandab region during the LGM, i.e. the water depth at the Hanish sill (the shallowest part of the Strait of Bab el Mandab) was around 34 m. This result agrees with the recent reconstruction of the LGM low stand of the sea in this region based upon the ICE-5G (VM2) model of Peltier (2004).

  20. Maximum flood hazard assessment for OPG's deep geologic repository for low and intermediate level waste

    International Nuclear Information System (INIS)

    Nimmrichter, P.; McClintock, J.; Peng, J.; Leung, H.

    2011-01-01

    Ontario Power Generation (OPG) has entered a process to seek Environmental Assessment and licensing approvals to construct a Deep Geologic Repository (DGR) for Low and Intermediate Level Radioactive Waste (L&ILW) near the existing Western Waste Management Facility (WWMF) at the Bruce nuclear site in the Municipality of Kincardine, Ontario. In support of the design of the proposed DGR project, maximum flood stages were estimated for potential flood hazard risks associated with coastal, riverine and direct precipitation flooding. The estimation of lake/coastal flooding for the Bruce nuclear site considered potential extreme water levels in Lake Huron, storm surge and seiche, wind waves, and tsunamis. The riverine flood hazard assessment considered the Probable Maximum Flood (PMF) within the local watersheds, and within local drainage areas that will be directly impacted by the site development. A series of hydraulic models were developed, based on DGR project site grading and ditching, to assess the impact of a Probable Maximum Precipitation (PMP) occurring directly at the DGR site. Overall, this flood assessment concluded that there is no potential for lake- or riverine-based flooding and that the DGR area is not affected by tsunamis. However, it was also concluded from the results of this analysis that the PMF in proximity to the critical DGR operational areas and infrastructure would be higher than the proposed elevation of the entrance to the underground works. This paper provides an overview of the assessment of potential flood hazard risks associated with coastal, riverine and direct precipitation flooding that was completed for the DGR development. (author)

  1. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    Science.gov (United States)

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) attempts to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JND models were built by adding white Gaussian noise or specific signal patterns into the original images, which is not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameters for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, ours is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.

  2. Multiuser Random Coding Techniques for Mismatched Decoding

    OpenAIRE

    Scarlett, Jonathan; Martinez, Alfonso; Guillén i Fàbregas, Albert

    2016-01-01

    This paper studies multiuser random coding techniques for channel coding with a given (possibly suboptimal) decoding rule. For the mismatched discrete memoryless multiple-access channel, an error exponent is obtained that is tight with respect to the ensemble average, and positive within the interior of Lapidoth's achievable rate region. This exponent proves the ensemble tightness of the exponent of Liu and Hughes in the case of maximum-likelihood decoding. An equivalent dual form of Lapidoth...

  3. Code, standard and specifications

    International Nuclear Information System (INIS)

    Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail

    2008-01-01

    Radiography, like other testing techniques, needs standards. These standards are widely used, and the methods for applying them are well established; consequently, radiographic testing is only practicable when based on the regulations as specified and documented. These regulations and guidelines are documented in codes, standards and specifications. In Malaysia, level-one and basic radiographers can perform radiography work based on instructions given by a level-two or level-three radiographer. These instructions are produced based on the guidelines given in the relevant documents, and a level-two radiographer must follow the specifications in the standard when writing the instructions. From this scenario, it is clear that radiography is a type of work in which everything must follow the rules. As for codes, radiography follows the code of the American Society of Mechanical Engineers (ASME); the only such code in Malaysia at this time is the rule published by the Atomic Energy Licensing Board (AELB), known as the Practical Code for Radiation Protection in Industrial Radiography. With the existence of this code, all radiography work must follow the regulated rules and standards.

  4. Pictorial AR Tag with Hidden Multi-Level Bar-Code and Its Potential Applications

    Directory of Open Access Journals (Sweden)

    Huy Le

    2017-09-01

    Full Text Available For decades, researchers have been trying to create intuitive virtual environments by blending reality and virtual reality, thus enabling general users to interact with the digital domain as easily as with the real world. The result is “augmented reality” (AR). AR seamlessly superimposes virtual objects onto a real environment in three dimensions (3D) and in real time. One of the most important parts that helps close the gap between virtuality and reality is the marker used in the AR system. While pictorial markers and bar-code markers are the two most commonly used marker types on the market, each has disadvantages in visual quality or processing performance. In this paper, we present a novel method that combines a bar-code with the original features of a colour picture (e.g., photos, trading cards, advertising figures). Our method overlays the original pictorial image with a single stereogram image that optically conceals a multi-level (3D) bar-code. Thus, it has a larger data-storage capacity than a general 1D barcode. This new type of marker has the potential to address the issues that current marker types face. It not only keeps the original information of the picture but also contains encoded numeric information. In our limited evaluation, this pictorial bar-code shows relatively robust performance under various conditions and scaling; thus, it provides a promising AR approach for use in many applications such as trading card games, education, and advertising.

  5. Analysis and Construction of Full-Diversity Joint Network-LDPC Codes for Cooperative Communications

    Directory of Open Access Journals (Sweden)

    Capirone Daniele

    2010-01-01

    Full Text Available Transmit diversity is necessary in harsh environments to reduce the required transmit power for achieving a given error performance at a certain transmission rate. In networks, cooperative communication is a well-known technique to yield transmit diversity, and network coding can increase the spectral efficiency. These two techniques can be combined to achieve a double diversity order for a maximum coding rate on the Multiple-Access Relay Channel (MARC), where two sources share a common relay in their transmission to the destination. However, codes have to be carefully designed to obtain the intrinsic diversity offered by the MARC. This paper presents the principles to design a family of full-diversity LDPC codes with maximum rate. Simulations of the word error rate performance of the newly proposed family of LDPC codes for the MARC confirm the full diversity.

  6. The application of a Grey Markov Model to forecasting annual maximum water levels at hydrological stations

    Science.gov (United States)

    Dong, Sheng; Chi, Kun; Zhang, Qiyi; Zhang, Xiangdong

    2012-03-01

    Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in an estuary area. The GMM combines Grey System and Markov theory into a higher-precision model. The GMM takes advantage of the Grey System to predict the trend values and uses Markov theory to forecast the fluctuation values, and thus gives forecast results involving two aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM(1,1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on the relative error series; 4) modify the relative errors caused in step 2 and obtain the relative errors of the second-order estimation; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Each 25 years of data are regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model have been proved in this paper.
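    The abstract gives no code; as a rough illustration of steps 1-2 only (fitting the GM(1,1) model and estimating trend values), the following Python sketch fits the grey parameters by least squares. The Markov correction of the residuals (steps 3-5) is omitted, and all function and variable names are ours.

    ```python
    import numpy as np

    def gm11_trend(x0, horizon=1):
        """Fit a GM(1,1) model to the series x0 and return fitted trend values
        plus `horizon` forecast steps (a minimal sketch, not the paper's code)."""
        x0 = np.asarray(x0, dtype=float)
        n = len(x0)
        x1 = np.cumsum(x0)                                # accumulated series (AGO)
        z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
        B = np.column_stack([-z1, np.ones(n - 1)])        # grey design matrix
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey coefficients
        k = np.arange(n + horizon)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # AGO-domain response
        return np.concatenate(([x1_hat[0]], np.diff(x1_hat)))  # inverse AGO

    # e.g., annual maxima of a 25-year hydro-sequence -> 26th-year trend forecast:
    # trend = gm11_trend(annual_maxima, horizon=1)
    ```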

  7. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2012-06-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  8. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2014-06-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  9. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    Austad, S. L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Guillen, L. E. [Idaho National Lab. (INL), Idaho Falls, ID (United States); McKnight, C. W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ferguson, D. S. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-04-01

    The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide necessary remote-handled LLW disposal capability and will ensure continuity of operations that generate remote-handled LLW. This report documents the Code of Record for design of a new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  10. Code package to analyse behavior of the WWER fuel rods in normal operation: TOPRA's code

    International Nuclear Information System (INIS)

    Scheglov, A.; Proselkov, V.

    2001-01-01

    This paper briefly describes a code package intended for the analysis of WWER fuel rod characteristics. The package includes two computer codes, TOPRA-1 and TOPRA-2, for full-scale fuel rod analyses, and the MRZ and MKK codes for analyzing separate sections of fuel rods in r-z and r-φ geometry. The TOPRA codes are developed on the basis of the PIN-mod2 version and verified against experimental results obtained in the MR, MIR and Halden research reactors (in the framework of the SOFIT, FGR-2 and FUMEX experimental programs). A comparative analysis of calculation results against results from post-reactor examination of WWER-440 and WWER-1000 fuel rods is also made as additional verification of these codes. To avoid enlarging the uncertainties in fuel behavior prediction that result from simplifying the fuel geometry, the MKK and MRZ codes are developed on the basis of the finite element method with the use of three nodal finite elements. Results obtained in the course of the code verification indicate the applicability of the method and the TOPRA codes for simplified engineering calculations of WWER fuel rod thermal-physical parameters. An analysis of the maximum relative errors in predicting the fuel rod characteristics in the range of the accepted parameter values is also presented in the paper

  11. Alternate symbol inversion for improved symbol synchronization in convolutionally coded systems

    Science.gov (United States)

    Simon, M. K.; Smith, J. G.

    1980-01-01

    Inverting alternate symbols of the encoder output of a convolutionally coded system provides sufficient density of symbol transitions to guarantee adequate symbol synchronizer performance, a guarantee otherwise lacking. Although alternate symbol inversion may increase or decrease the average transition density, depending on the data source model, it produces a maximum number of contiguous symbols without transition for a particular class of convolutional codes, independent of the data source model. Further, this maximum is sufficiently small to guarantee acceptable symbol synchronizer performance for typical applications. Subsequent inversion of alternate detected symbols permits proper decoding.
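    As a toy illustration of the operation itself (not of the paper's transition-density analysis), alternate symbol inversion on an antipodal symbol stream, and its removal after detection, can be sketched as follows; representing symbols as ±1 integers is our assumption.

    ```python
    import numpy as np

    def invert_alternate(symbols):
        """Flip every other symbol of a +/-1 stream; applying the same function
        to the detected symbols at the receiver restores the original stream."""
        s = np.asarray(symbols).copy()
        s[1::2] *= -1                      # invert odd-indexed symbols
        return s

    encoded = np.array([1, 1, 1, 1, -1, -1])   # a run with no transitions
    sent = invert_alternate(encoded)           # [1, -1, 1, -1, -1, 1]: transitions appear
    recovered = invert_alternate(sent)         # inversion is its own inverse
    assert np.array_equal(recovered, encoded)
    ```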

  12. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    Science.gov (United States)

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency, due to symbol-level instead of bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper presents, the proposed NB-LDPC-CM scheme addresses the needs of future OTNs, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN, better than its prior-art binary counterpart.

  13. Rectangular maximum-volume submatrices and their applications

    KAUST Repository

    Mikhalev, Aleksandr; Oseledets, I.V.

    2017-01-01

    We introduce a definition of the volume of a general rectangular matrix, which is equivalent to an absolute value of the determinant for square matrices. We generalize results of square maximum-volume submatrices to the rectangular case, show a connection of the rectangular volume with an optimal experimental design and provide estimates for a growth of coefficients and an approximation error in spectral and Chebyshev norms. Three promising applications of such submatrices are presented: recommender systems, finding maximal elements in low-rank matrices and preconditioning of overdetermined linear systems. The code is available online.

  14. Rectangular maximum-volume submatrices and their applications

    KAUST Repository

    Mikhalev, Aleksandr

    2017-10-18

    We introduce a definition of the volume of a general rectangular matrix, which is equivalent to an absolute value of the determinant for square matrices. We generalize results of square maximum-volume submatrices to the rectangular case, show a connection of the rectangular volume with an optimal experimental design and provide estimates for a growth of coefficients and an approximation error in spectral and Chebyshev norms. Three promising applications of such submatrices are presented: recommender systems, finding maximal elements in low-rank matrices and preconditioning of overdetermined linear systems. The code is available online.
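    Both records above use the same notion of rectangular volume, vol(S) = sqrt(det(SᵀS)) for a tall submatrix S. The greedy row-selection sketch below is only a naive illustration of that definition, not the authors' algorithm (their code is available online, per the abstract); all names are ours.

    ```python
    import numpy as np

    def rect_volume(S):
        """Volume of a rectangular matrix: sqrt(det(S^T S)) for tall S,
        sqrt(det(S S^T)) for wide S; this reduces to |det| in the square case."""
        G = S.T @ S if S.shape[0] >= S.shape[1] else S @ S.T
        return float(np.sqrt(max(np.linalg.det(G), 0.0)))

    def greedy_maxvol_rows(A, k):
        """Greedily grow a k-row subset of A whose rectangular volume is large."""
        chosen = []
        for _ in range(k):
            rest = [i for i in range(A.shape[0]) if i not in chosen]
            chosen.append(max(rest, key=lambda i: rect_volume(A[chosen + [i], :])))
        return chosen
    ```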

  15. Fixed capacity and variable member grouping assignment of orthogonal variable spreading factor code tree for code division multiple access networks

    Directory of Open Access Journals (Sweden)

    Vipin Balyan

    2014-08-01

    Full Text Available Orthogonal variable spreading factor codes are used in the downlink to maintain the orthogonality between different channels and to handle new calls arriving in the system. A period of operation leads to fragmentation of vacant codes, which in turn leads to the code blocking problem. The assignment scheme proposed in this paper is not affected by fragmentation, as the fragmentation is generated by the scheme itself. In this scheme, the code tree is divided into groups whose capacity is fixed and whose numbers of members (codes) are variable. A group with the maximum number of busy members is used for assignment; this leads to fragmentation of busy groups around the code tree and compactness within a group. The proposed scheme is evaluated thoroughly and compared with other schemes using parameters such as code blocking probability and call establishment delay. Through simulations it is demonstrated that the proposed scheme not only adequately reduces the code blocking probability, but also requires significantly less time to locate a vacant code for assignment, which makes it suitable for real-time calls.
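    A minimal sketch of the group-selection rule described above: among groups that still have a vacant code, choose the one with the most busy members. The dict representation of groups is a hypothetical stand-in for the paper's code-tree bookkeeping.

    ```python
    def pick_group(groups):
        """groups: list of {"busy": int, "vacant": int} summaries of code-tree
        groups. Returns the group to assign the new call to, or None if full."""
        candidates = [g for g in groups if g["vacant"] > 0]
        return max(candidates, key=lambda g: g["busy"], default=None)

    # Example: the middle group is busiest while still having room, so it is
    # chosen, keeping assignments compact within groups.
    print(pick_group([{"busy": 2, "vacant": 2}, {"busy": 5, "vacant": 1},
                      {"busy": 6, "vacant": 0}]))
    ```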

  16. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. This local objective function is then integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Image segmentation and bias field estimation are therefore simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.

  17. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.

  18. Food Image Recognition via Superpixel Based Low-Level and Mid-Level Distance Coding for Smart Home Applications

    Directory of Open Access Journals (Sweden)

    Jiannan Zheng

    2017-05-01

    Full Text Available Food image recognition is a key enabler for many smart home applications such as the smart kitchen and smart personal nutrition log. In order to improve living experience and life quality, smart home systems collect valuable insights into users’ preferences, nutrition intake and health conditions via accurate and robust food image recognition. In addition, efficiency is also a major concern, since many smart home applications are deployed on mobile devices where high-end GPUs are not available. In this paper, we investigate compact and efficient food image recognition methods, namely low-level and mid-level approaches. Considering the real application scenario where only limited and noisy data are available, we first propose a superpixel-based Linear Distance Coding (LDC) framework in which distinctive low-level food image features are extracted to improve performance. On a challenging small food image dataset where only 12 training images are available per category, our framework has shown superior performance in both accuracy and robustness. In addition, to better model deformable food part distributions, we extend LDC’s feature-to-class distance idea and propose a mid-level superpixel food parts-to-class distance mining framework. The proposed framework shows superior performance on a benchmark food image dataset compared to other low-level and mid-level approaches in the literature.

  19. SPRINT: A Tool to Generate Concurrent Transaction-Level Models from Sequential Code

    Directory of Open Access Journals (Sweden)

    Richard Stahl

    2007-01-01

    Full Text Available A high-level concurrent model such as a SystemC transaction-level model can provide early feedback during the exploration of implementation alternatives for state-of-the-art signal processing applications like video codecs on a multiprocessor platform. However, the creation of such a model starting from sequential code is a time-consuming and error-prone task. It is typically done only once, if at all, for a given design. This lack of exploration of the design space often leads to a suboptimal implementation. To support our systematic C-based design flow, we have developed a tool to generate a concurrent SystemC transaction-level model for user-selected task boundaries. Using this tool, different parallelization alternatives have been evaluated during the design of an MPEG-4 simple profile encoder and an embedded zero-tree coder. Generation plus evaluation of an alternative was possible in less than six minutes. This is fast enough to allow extensive exploration of the design space.

  20. The regulation of starch accumulation in Panicum maximum Jacq ...

    African Journals Online (AJOL)

    ... decrease the starch level. These observations are discussed in relation to the photosynthetic characteristics of P. maximum. Keywords: accumulation; botany; carbon assimilation; co2 fixation; growth conditions; mesophyll; metabolites; nitrogen; nitrogen levels; nitrogen supply; panicum maximum; plant physiology; starch; ...

  1. Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.

    Science.gov (United States)

    Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R

    2006-02-28

    The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
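    The code-selection criterion stated above is easy to compute directly. The sketch below evaluates the ratio of maximum to minimum pairwise Hamming distance for a hypothetical constant-weight code (the weight-2, length-4 code is our toy example, not one from the paper); by the abstract's criterion, smaller ratios indicate larger voltage margins.

    ```python
    from itertools import combinations

    def distance_ratio(codewords):
        """Ratio of maximum to minimum Hamming distance over distinct codewords."""
        dists = [sum(a != b for a, b in zip(u, v))
                 for u, v in combinations(codewords, 2)]
        return max(dists) / min(dists)

    # All weight-2 binary words of length 4 (a toy constant-weight code):
    code = ["1100", "1010", "1001", "0110", "0101", "0011"]
    print(distance_ratio(code))  # -> 2.0 (max distance 4, min distance 2)
    ```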

  2. Calculus of the Power Spectral Density of Ultra Wide Band Pulse Position Modulation Signals Coded with Totally Flipped Code

    Directory of Open Access Journals (Sweden)

    DURNEA, T. N.

    2009-02-01

    Full Text Available UWB-PPM systems are noted to have a power spectral density (p.s.d.) consisting of a continuous portion and a line spectrum, which is composed of energy components placed at discrete frequencies. These components are the major source of interference to narrowband systems operating in the same frequency interval and prevent harmless coexistence of UWB-PPM and narrowband systems. A new code, denoted the Totally Flipped Code (TFC), is applied to these signals in order to eliminate the discrete spectral components. The coded signal carries the information in the pulse position and has its amplitude coded to generate a continuous p.s.d. We have designed the code and calculated the power spectral density of the coded signals. The power spectrum has no discrete components, and its envelope is largely flat inside the bandwidth, with a maximum at its center and a null at D.C. These characteristics make this code well suited for implementation in UWB systems based on PPM-type modulation, as it assures a continuous spectrum and preserves PPM modulation performance.

  3. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Executive summary

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1985-05-01

    This document was written to provide guidance to managers and site operators on how ground-water transport codes should be selected for assessing burial site performance. There is a need for a formal approach to selecting appropriate codes from the multitude of potentially useful ground-water transport codes that are currently available. Code selection is a problem that requires more than merely considering mathematical equation-solving methods. These guidelines are very general and flexible and are also meant for developing systems simulation models to be used to assess the environmental safety of low-level waste burial facilities. Code selection is only a single aspect of the overall objective of developing a systems simulation model for a burial site. The guidance given here is mainly directed toward applications-oriented users, but managers and site operators need to be familiar with this information to direct the development of scientifically credible and defensible transport assessment models. Some specific advice for managers and site operators on how to direct a modeling exercise is based on the following five steps: identify specific questions and study objectives; establish costs and schedules for achieving answers; enlist the aid of a professional model applications group; decide on an approach with the applications group and guide code selection; and facilitate the availability of site-specific data. These five steps for managers/site operators are discussed in detail following an explanation of the nine systems-model development steps, which are presented first to clarify what code selection entails.

  4. The NIMROD Code

    Science.gov (United States)

    Schnack, D. D.; Glasser, A. H.

    1996-11-01

    NIMROD is a new code system that is being developed for the analysis of modern fusion experiments. It is being designed from the beginning to make the maximum use of massively parallel computer architectures and computer graphics. The NIMROD physics kernel solves the three-dimensional, time-dependent two-fluid equations with neo-classical effects in toroidal geometry of arbitrary poloidal cross section. The NIMROD system also includes a pre-processor, a grid generator, and a post processor. User interaction with NIMROD is facilitated by a modern graphical user interface (GUI). The NIMROD project is using Quality Function Deployment (QFD) team management techniques to minimize re-engineering and reduce code development time. This paper gives an overview of the NIMROD project. Operation of the GUI is demonstrated, and the first results from the physics kernel are given.

  5. Achieving 95% probability level using best estimate codes and the code scaling, applicability and uncertainty (CSAU) [Code Scaling, Applicability and Uncertainty] methodology

    International Nuclear Information System (INIS)

    Wilson, G.E.; Boyack, B.E.; Duffey, R.B.; Griffith, P.; Katsma, K.R.; Lellouche, G.S.; Rohatgi, U.S.; Wulff, W.; Zuber, N.

    1988-01-01

    Issue of a revised rule for loss of coolant accident/emergency core cooling system (LOCA/ECCS) analysis of light water reactors will allow the use of best estimate (BE) computer codes in safety analysis, with uncertainty analysis. This paper describes a systematic methodology, CSAU (Code Scaling, Applicability and Uncertainty), which will provide uncertainty bounds in a cost effective, auditable, rational and practical manner. 8 figs., 2 tabs

  6. Neutron spallation source and the Dubna cascade code

    CERN Document Server

    Kumar, V; Goel, U; Barashenkov, V S

    2003-01-01

    Neutron multiplicity per incident proton, n/p, in collisions of a high energy proton beam with voluminous Pb and W targets has been estimated with the Dubna cascade code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energies E < 1.2 GeV and differ by up to 25% at higher energies. (author)

  7. Purifying selection acts on coding and non-coding sequences of paralogous genes in Arabidopsis thaliana.

    Science.gov (United States)

    Hoffmann, Robert D; Palmgren, Michael

    2016-06-13

    Whole-genome duplications in the ancestors of many diverse species provided the genetic material for evolutionary novelty. Several models explain the retention of paralogous genes. However, how these models are reflected in the evolution of coding and non-coding sequences of paralogous genes is unknown. Here, we analyzed the coding and non-coding sequences of paralogous genes in Arabidopsis thaliana and compared these sequences with those of orthologous genes in Arabidopsis lyrata. Paralogs with lower expression than their duplicate had more nonsynonymous substitutions, were more likely to fractionate, and exhibited less similar expression patterns with their orthologs in the other species. Also, lower-expressed genes had greater tissue specificity. Orthologous conserved non-coding sequences in the promoters, introns, and 3' untranslated regions were less abundant at lower-expressed genes compared to their higher-expressed paralogs. A gene ontology (GO) term enrichment analysis showed that paralogs with similar expression levels were enriched in GO terms related to ribosomes, whereas paralogs with different expression levels were enriched in terms associated with stress responses. Loss of conserved non-coding sequences in one gene of a paralogous gene pair correlates with reduced expression levels that are more tissue specific. Together with increased mutation rates in the coding sequences, this suggests that similar forces of purifying selection act on coding and non-coding sequences. We propose that coding and non-coding sequences evolve concurrently following gene duplication.

  8. Preliminary Validation of the MATRA-LMR Code Using Existing Sodium-Cooled Experimental Data

    International Nuclear Information System (INIS)

    Choi, Sun Rock; Kim, Sangji

    2014-01-01

    The main objective of the SFR prototype plant is to verify TRU metal fuel performance, reactor operation, and the transmutation ability of high-level wastes. The core thermal-hydraulic design is used to ensure safe fuel performance during the whole plant operation. The fuel design limit is highly dependent on both the maximum cladding temperature and the uncertainties of the design parameters. Therefore, an accurate temperature calculation in each subassembly is highly important to assure safe and reliable operation of the reactor systems. The current core thermal-hydraulic design is mainly performed using the SLTHEN (Steady-State LMR Thermal-Hydraulic Analysis Code Based on ENERGY Model) code, which has already been validated using existing sodium-cooled experimental data. In addition to the SLTHEN code, a detailed analysis is performed using the MATRA-LMR (Multichannel Analyzer for Transient and steady-state in Rod Array-Liquid Metal Reactor) code. In this work, the MATRA-LMR code is validated for a single-subassembly evaluation using the previous experimental data. The results demonstrate that the design code appropriately predicts the temperature distributions compared with the experimental values. Major differences are observed in the experiments with large pin numbers, due to the radial-wise mixing difference

  9. Maximum coherent superposition state achievement using a non-resonant pulse train in non-degenerate three-level atoms

    International Nuclear Information System (INIS)

    Deng, Li; Niu, Yueping; Jin, Luling; Gong, Shangqing

    2010-01-01

    The coherent superposition state of the lower two levels in non-degenerate three-level Λ atoms is investigated using the accumulative effects of non-resonant pulse trains when the repetition period is smaller than the decay time of the upper level. First, using a rectangular pulse train, the accumulative effects are re-examined in the non-resonant two-level atoms and the modified constructive accumulation equation is analytically given. The equation shows that the relative phase and the repetition period are important in the accumulative effect. Next, under the modified equation in the non-degenerate three-level Λ atoms, we show that besides the constructive accumulation effect, the use of the partial constructive accumulation effect can also achieve the steady state of the maximum coherent superposition state of the lower two levels and the latter condition is relatively easier to manipulate. The analysis is verified by numerical calculations. The influence of the external levels in such a case is also considered and we find that it can be avoided effectively. The above analysis is also applicable to pulse trains with arbitrary envelopes.

  10. The Effects of a Maximal Power Training Cycle on the Strength, Maximum Power, Vertical Jump Height and Acceleration of High-Level 400-Meter Hurdlers

    Science.gov (United States)

    Balsalobre-Fernández, Carlos; Tejero-González, Carlos Mª; del Campo-Vecino, Juan; Alonso-Curiel, Dionisio

    2013-01-01

    The aim of this study was to determine the effects of a power training cycle on maximum strength, maximum power, vertical jump height and acceleration in seven high-level 400-meter hurdlers subjected to a specific training program twice a week for 10 weeks. Each training session consisted of five sets of eight jump-squats with the load at which each athlete produced his maximum power. The repetition maximum in the half squat position (RM), maximum power in the jump-squat (W), a squat jump (SJ), countermovement jump (CSJ), and a 30-meter sprint from a standing position were measured before and after the training program using an accelerometer, an infra-red platform and photo-cells. The results indicated the following statistically significant improvements: a 7.9% increase in RM (Z=−2.03, p=0.021, δc=0.39), a 2.3% improvement in SJ (Z=−1.69, p=0.045, δc=0.29), a 1.43% decrease in the 30-meter sprint (Z=−1.70, p=0.044, δc=0.12), and, where maximum power was produced, a change in the RM percentage from 56 to 62% (Z=−1.75, p=0.039, δc=0.54). As such, it can be concluded that strength training with a maximum power load is an effective means of increasing strength and acceleration in high-level hurdlers. PMID:23717361

  11. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    Science.gov (United States)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The best known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. The trellis representation of block codes, by contrast, long remained unexplored. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and

  12. An FPGA Implementation of (3,6)-Regular Low-Density Parity-Check Code Decoder

    Directory of Open Access Journals (Sweden)

    Tong Zhang

    2003-05-01

    Full Text Available Because of their excellent error-correcting performance, low-density parity-check (LDPC) codes have recently attracted a lot of attention. In this paper, we are interested in practical LDPC code decoder hardware implementations. The direct fully parallel decoder implementation usually incurs too high a hardware complexity for many real applications; thus partly parallel decoder design approaches that can achieve appropriate trade-offs between hardware complexity and decoding throughput are highly desirable. Applying a joint code and decoder design methodology, we develop a high-speed (3,k)-regular LDPC code partly parallel decoder architecture, based on which we implement a 9216-bit, rate-1/2, (3,6)-regular LDPC code decoder on a Xilinx FPGA device. This partly parallel decoder supports a maximum symbol throughput of 54 Mbps and achieves a BER of 10^-6 at 2 dB over the AWGN channel while performing a maximum of 18 decoding iterations.

  13. A modified carrier-to-code leveling method for retrieving ionospheric observables and detecting short-term temporal variability of receiver differential code biases

    Science.gov (United States)

    Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Xiao; Li, Min

    2018-03-01

    Sensing the ionosphere with the global positioning system involves two sequential tasks, namely ionospheric observable retrieval and ionospheric parameter estimation. A prominent source of error has long been identified as short-term variability in the receiver differential code bias (rDCB). We modify the carrier-to-code leveling (CCL), a method commonly used to accomplish the first task, by assuming the rDCB to be unlinked in time. Aside from the ionospheric observables, which are affected by, among other things, the rDCB at one reference epoch, the Modified CCL (MCCL) can also provide the rDCB offsets with respect to the reference epoch as by-products. Two consequences arise. First, MCCL is capable of excluding the effects of time-varying rDCB from the ionospheric observables, which, in turn, improves the quality of the ionospheric parameters of interest. Second, MCCL has significant potential as a means to detect between-epoch fluctuations experienced by the rDCB of a single receiver.
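    For orientation, classic CCL (the method being modified) levels the precise but ambiguous geometry-free carrier phase onto the noisy but unambiguous geometry-free code observable with a single arc-averaged offset, which is exactly where a time-constant rDCB is assumed. A minimal sketch, ignoring sign conventions between the code and phase ionospheric delays and using our own names:

    ```python
    import numpy as np

    def ccl_level(phase_gf, code_gf):
        """Classic carrier-to-code leveling over one continuous ambiguity arc:
        shift the geometry-free phase by the arc-mean of (code - phase). The
        mean absorbs the ambiguity plus biases assumed constant, incl. rDCB."""
        offset = np.mean(code_gf - phase_gf)
        return phase_gf + offset   # leveled ionospheric observable

    # MCCL's point: if the rDCB varies within the arc, a single mean offset
    # leaks that variability into the observable; estimating epoch-wise rDCB
    # offsets relative to a reference epoch removes it.
    ```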

  14. Impacts of Model Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sivaraman, Deepak [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Douglas B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bartlett, Rosemarie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states which have codes which are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  15. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated on the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes a basic knowledge of reactor lattices, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code, the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  16. The Aster code

    International Nuclear Information System (INIS)

    Delbecq, J.M.

    1999-01-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  17. Ontological function annotation of long non-coding RNAs through hierarchical multi-label classification.

    Science.gov (United States)

    Zhang, Jingpu; Zhang, Zuping; Wang, Zixiang; Liu, Yuting; Deng, Lei

    2018-05-15

    Long non-coding RNAs (lncRNAs) are an enormous collection of functional non-coding RNAs. Over the past decades, a large number of novel lncRNA genes have been identified. However, most of the lncRNAs remain functionally uncharacterized at present. Computational approaches provide new insight into the potential functional implications of lncRNAs. Considering that each lncRNA may have multiple functions and that a function may be further specialized into sub-functions, here we describe NeuraNetL2GO, a computational ontological function prediction approach for lncRNAs using a hierarchical multi-label classification strategy based on multiple neural networks. The neural networks are incrementally trained level by level, each performing the prediction of gene ontology (GO) terms belonging to a given level. In NeuraNetL2GO, we use topological features of the lncRNA similarity network as the input of the neural networks and employ the output results to annotate the lncRNAs. We show that NeuraNetL2GO achieves the best performance and an overall advantage in maximum F-measure and coverage on the manually annotated lncRNA2GO-55 dataset compared to other state-of-the-art methods. The source code and data are available at http://denglab.org/NeuraNetL2GO/. leideng@csu.edu.cn. Supplementary data are available at Bioinformatics online.
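    The level-by-level training scheme described above can be caricatured with one multi-label classifier per GO level. This sketch uses scikit-learn's MLPClassifier as a stand-in, which is our assumption rather than the authors' architecture, and all names are illustrative.

    ```python
    from sklearn.neural_network import MLPClassifier

    def train_per_level(features, labels_per_level):
        """features: (n_lncRNAs, n_network_topology_features) matrix.
        labels_per_level: one binary indicator matrix of GO-term labels per
        hierarchy level. Trains the levels incrementally, shallowest first."""
        models = []
        for level_labels in labels_per_level:
            clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
            clf.fit(features, level_labels)       # multi-label fit at this level
            models.append(clf)
        return models

    # Prediction then proceeds level by level; a hierarchy-consistency step
    # would keep a term only if its parent at the previous level was predicted.
    ```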

  18. The calculation of maximum permissible exposure levels for laser radiation

    International Nuclear Information System (INIS)

    Tozer, B.A.

    1979-01-01

    The maximum permissible exposure data of the revised standard BS 4803 are presented as a set of decision charts which ensure that the user automatically takes into account such details as pulse length and pulse pattern, limiting angular subtense, combinations of multiple wavelengths and/or multiple pulse lengths, etc. The two decision charts given are for the calculation of radiation hazards to skin and eye respectively. (author)

  19. Specification of a test problem for HYDROCOIN [Hydrologic Code Intercomparison] Level 3 Case 2: Sensitivity analysis for deep disposal in partially saturated, fractured tuff

    International Nuclear Information System (INIS)

    Prindle, R.W.

    1987-08-01

    The international Hydrologic Code Intercomparison Project (HYDROCOIN) was formed to evaluate hydrogeologic models and computer codes and their use in performance assessment for high-level radioactive waste repositories. Three principal activities in the HYDROCOIN Project are Level 1, verification and benchmarking of hydrologic codes; Level 2, validation of hydrologic models; and Level 3, sensitivity and uncertainty analyses of the models and codes. This report presents a test case defined for the HYDROCOIN Level 3 activity to explore the feasibility of applying various sensitivity-analysis methodologies to a highly nonlinear model of isothermal, partially saturated flow through fractured tuff, and to develop modeling approaches to implement the methodologies for sensitivity analysis. These analyses involve an idealized representation of a repository sited above the water table in a layered sequence of welded and nonwelded, fractured, volcanic tuffs. The analyses suggested here include one-dimensional, steady flow; one-dimensional, nonsteady flow; and two-dimensional, steady flow. Performance measures to be used to evaluate model sensitivities are also defined; the measures are related to regulatory criteria for containment of high-level radioactive waste. 14 refs., 5 figs., 4 tabs

  20. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so-called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used a genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
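
    A minimal sketch of the calibration loop described above is given below: a simplified evolutionary search (truncation selection plus uniform mutation; a full GA would also add crossover) minimizes a weighted, normalized misfit over several SRQs. The parameter bounds, weights, and the toy run_simulation() stand-in for a real RELAP5 run are all assumptions of this illustration.

```python
# Illustrative evolutionary input-calibration loop (assumed names throughout).
import numpy as np

rng = np.random.default_rng(0)
bounds = np.array([[0.5, 2.0], [0.0, 5.0], [0.1, 1.0]])  # calibrated inputs
weights = np.array([1.0, 0.5])                           # per-SRQ weights

def run_simulation(x):
    """Placeholder returning two toy SRQs (e.g. max flow rate, period)."""
    return np.array([2.0 * x[0] + x[1], 10.0 / (x[2] + 1.0)])

def fitness(x, srq_exp):
    err = (run_simulation(x) - srq_exp) / srq_exp   # normalize each SRQ
    return float(np.sum(weights * err**2))          # weighted misfit

def calibrate(srq_exp, pop=40, gens=50, pm=0.1):
    P = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop, len(bounds)))
    for _ in range(gens):
        f = np.array([fitness(x, srq_exp) for x in P])
        elite = P[np.argsort(f)[: pop // 2]]        # truncation selection
        kids = elite[rng.integers(0, len(elite), pop - len(elite))].copy()
        mask = rng.random(kids.shape) < pm          # uniform mutation
        fresh = rng.uniform(bounds[:, 0], bounds[:, 1], size=kids.shape)
        kids[mask] = fresh[mask]
        P = np.vstack([elite, kids])
    f = np.array([fitness(x, srq_exp) for x in P])
    return P[np.argmin(f)]

best = calibrate(np.array([3.0, 7.0]))   # toy "experimental" SRQ values
```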

  2. County-Level Climate Uncertainty for Risk Assessments: Volume 18 Appendix Q - Historical Maximum Near-Surface Wind Speed.

    Energy Technology Data Exchange (ETDEWEB)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M; Walker, La Tonya Nicole; Roberts, Barry L; Malczynski, Leonard A.

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  3. County-Level Climate Uncertainty for Risk Assessments: Volume 4 Appendix C - Historical Maximum Near-Surface Air Temperature.

    Energy Technology Data Exchange (ETDEWEB)

    Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M; Walker, La Tonya Nicole; Roberts, Barry L; Malczynski, Leonard A.

    2017-06-01

    This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture and evaporation for the historical period (1976-2005), and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.

  4. Benchmark problems for radiological assessment codes. Final report

    International Nuclear Information System (INIS)

    Mills, M.; Vogt, D.; Mann, B.

    1983-09-01

    This report describes benchmark problems to test computer codes used in the radiological assessment of high-level waste repositories. The problems presented in this report will test two types of codes. The first type of code calculates the time-dependent heat generation and radionuclide inventory associated with a high-level waste package. Five problems have been specified for this code type. The second code type addressed in this report involves the calculation of radionuclide transport and dose-to-man. For these codes, a comprehensive problem and two subproblems have been designed to test the relevant capabilities of these codes for assessing a high-level waste repository setting.

  5. The linear programming bound for binary linear codes

    NARCIS (Netherlands)

    Brouwer, A.E.

    1993-01-01

    Combining Delsarte's (1973) linear programming bound with the information that certain weights cannot occur, new upper bounds for d_min(n, k), the maximum possible minimum distance of a binary linear code with given word length n and dimension k, are derived.
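
    For illustration, the plain Delsarte linear programming bound (without the extra excluded-weight information used in the paper) can be computed in a few lines: maximize the code size subject to nonnegativity of the MacWilliams transform. The sketch below assumes binary codes and uses scipy's LP solver.

```python
# Sketch of the plain Delsarte LP bound for binary codes.
import numpy as np
from math import comb
from scipy.optimize import linprog

def krawtchouk(n, j, i):
    return sum((-1) ** s * comb(i, s) * comb(n - i, j - s)
               for s in range(min(i, j) + 1))

def delsarte_bound(n, d):
    """Upper bound on the size of any binary code of length n, distance d."""
    idx = list(range(d, n + 1))              # allowed nonzero weights
    c = -np.ones(len(idx))                   # maximize 1 + sum_i A_i
    # MacWilliams-transform nonnegativity: sum_i A_i K_j(i) >= -C(n, j)
    A_ub = [[-krawtchouk(n, j, i) for i in idx] for j in range(1, n + 1)]
    b_ub = [comb(n, j) for j in range(1, n + 1)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(idx))
    return 1.0 - res.fun                     # +1 for the zero codeword

print(delsarte_bound(8, 4))   # an upper bound on A(8, 4)
```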

  6. Study of the relationship between radioactivity distribution, contamination burden, quality standard and maximum capacity of the Code river, Yogyakarta

    International Nuclear Information System (INIS)

    Agus Taftazani and Muzakky

    2009-01-01

    The relationship between the distribution and contamination burden of gross β radioactivity and natural radionuclides in water and sediment samples from 11 observation stations along the Code river, and the corresponding quality standard and maximum capacity of the river, has been studied. Identification of natural radionuclides and measurement of gross β radioactivity in condensed water and in dry, homogenised sediment powder (passed through a 100 mesh sieve) were carried out using a spectrometer and a GM counter. The radioactivity data were analysed descriptively, using histograms to show the spreading pattern of the data. The contamination burden, quality standard and maximum capacity data for the Code river were analysed descriptively using line diagrams to establish the relationship between contamination burden, quality standard and maximum capacity. The observations of water and sediment at the 11 stations show that the naturally occurring radionuclides 210 Pb, 212 Pb, 214 Pb, 226 Ra, 208 Tl, 214 Bi, 228 Ac and 40 K were detected. The analysis leads to the conclusion that the average gross β activities increase from upstream to downstream along the Code river. The contamination burdens of 210 Pb, 212 Pb, 226 Ra and 228 Ac were much smaller than the quality standard for river water according to Nuclear Energy Regulatory Agency regulation 02/Ka-BAPETEN/V-99 on radioactivity quality standards; this means that the Code river still has an acceptable contamination burden for these four radionuclides. (author)

  7. CBP TOOLBOX VERSION 2.0: CODE INTEGRATION ENHANCEMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F.; Flach, G.; BROWN, K.

    2013-06-01

    This report describes enhancements made to code integration aspects of the Cementitious Barriers Project (CBP) Toolbox as a result of development work performed at the Savannah River National Laboratory (SRNL) in collaboration with Vanderbilt University (VU) in the first half of fiscal year 2013. Code integration refers to the interfacing of standalone CBP partner codes, used to analyze the performance of cementitious materials, with the CBP Software Toolbox. The most significant enhancements are: 1) improved graphical display of model results; 2) improved error analysis and reporting; 3) an increase in the default maximum model mesh size from 301 to 501 nodes; and 4) the ability to set the LeachXS/Orchestra simulation times through the GoldSim interface. These code interface enhancements have been included in a new release (Version 2.0) of the CBP Toolbox.

  8. Radiological analyses of intermediate and low level supercompacted waste drums by VQAD code

    International Nuclear Information System (INIS)

    Bace, M.; Trontl, K.; Gergeta, K.

    2004-01-01

    In order to extend the capabilities of the QAD-CGGP code, as well as to make the code more user friendly, modifications of the code have been performed. A general multi-source option has been introduced into the code and a user-friendly environment has been created through a Graphical User Interface. The improved version of the code has been used to calculate gamma dose rates of a single supercompacted waste drum and of a pair of supercompacted waste drums. The results of the calculation were compared with the standard QAD-CGGP results. (author)

  9. Relative sea-level changes and crustal movements in Britain and Ireland since the Last Glacial Maximum

    Science.gov (United States)

    Shennan, Ian; Bradley, Sarah L.; Edwards, Robin

    2018-05-01

    The new sea-level database for Britain and Ireland contains >2100 data points from 86 regions and records relative sea-level (RSL) changes over the last 20 ka and across elevations ranging from ∼+40 to -55 m. It reveals radically different patterns of RSL as we move from regions near the centre of the Celtic ice sheet at the last glacial maximum to regions near and beyond the ice limits. Validated sea-level index points and limiting data show good agreement with the broad patterns of RSL change predicted by current glacial isostatic adjustment (GIA) models. The index points show no consistent pattern of synchronous coastal advance and retreat across different regions, ∼100-500 km scale, indicating that within-estuary processes, rather than decimetre- and centennial-scale oscillations in sea level, produce major controls on the temporal pattern of horizontal shifts in coastal sedimentary environments. Comparisons between the database and GIA model predictions for multiple regions provide potentially powerful constraints on various characteristics of global GIA models, including the magnitude of MWP1A, the final deglaciation of the Laurentide ice sheet and the continued melting of Antarctica after 7 ka BP.

  10. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    A scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth the current level. The coding scheme paves the way for true electronic mail, in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously between buildings or between continents. The scheme, called the Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. The image quality of the delivered messages is improved over messages transmitted by conventional coding. The coding scheme is compatible with direct-entry electronic mail as well as facsimile reproduction; text transmitted with this scheme is automatically translated to word-processor form.

  11. Duals of Affine Grassmann Codes and Their Relatives

    DEFF Research Database (Denmark)

    Beelen, P.; Ghorpade, S. R.; Hoholdt, T.

    2012-01-01

    Affine Grassmann codes are a variant of generalized Reed-Muller codes and are closely related to Grassmann codes. These codes were introduced in a recent work by Beelen et al. Here, we consider, more generally, affine Grassmann codes of a given level. We explicitly determine the dual of an affine Grassmann code of any level and compute its minimum distance. Further, we ameliorate the results by Beelen et al. concerning the automorphism group of affine Grassmann codes. Finally, we prove that affine Grassmann codes and their duals have the property that they are linear codes generated by their minimum-weight codewords. This provides a clean analogue of a corresponding result for generalized Reed-Muller codes.

  12. BLT [Breach, Leach, and Transport]: A source term computer code for low-level waste shallow land burial

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1990-01-01

    This paper discusses the development of a source term model for low-level waste shallow land burial facilities and separates the problem into four individual compartments. These are water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and the last compartments, we adopted the existing codes, FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines -- BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resultant code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab

  13. Allele coding in genomic evaluation

    Directory of Open Access Journals (Sweden)

    Christensen Ole F

    2011-06-01

    Background: Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first and then genomic breeding values are obtained by summing marker effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous genotype of the first allele, one for the heterozygote, and two for the homozygous genotype of the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of the regression coefficients is zero within each marker. We call this centered allele coding. This study considered the effects of different allele coding methods on inference. Both marker-based and equivalent models were considered, and restricted maximum likelihood and Bayesian methods were used in inference. Results: Theoretical derivations showed that parameter estimates and estimated marker effects in marker-based models are the same irrespective of the allele coding, provided that the model has a fixed general mean. For the equivalent models, the same results hold, even though different allele coding methods lead to different genomic relationship matrices. Calculated genomic breeding values are independent of allele coding when the estimate of the general mean is included in the values. Reliabilities of estimated genomic breeding values calculated using elements of the inverse of the coefficient matrix depend on the allele coding because different allele coding methods imply different models. Finally, allele coding affects the mixing of Markov chain Monte Carlo algorithms, with the centered coding being
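
    The two codings discussed above are easy to make concrete. The sketch below (an illustration, not the paper's software) builds the 0/1/2 and centered codings for a toy genotype matrix and the genomic relationship matrix each implies, normalized here simply by the number of markers.

```python
# Toy comparison of 0/1/2 vs centered allele coding (illustration only).
import numpy as np

M = np.array([[0., 1., 2.],      # genotype matrix: rows = animals,
              [1., 1., 0.],      # cols = markers, entries = copies of
              [2., 0., 1.]])     # the second allele

p = M.mean(axis=0) / 2.0         # observed allele frequency per marker
Z_raw = M                        # common 0/1/2 coding
Z_centered = M - 2.0 * p         # centered coding: zero mean per marker

def grm(Z):
    """Genomic relationship matrix implied by a coding (one simple
    convention: normalize by the number of markers)."""
    return Z @ Z.T / Z.shape[1]

# Different codings imply different G matrices (hence different models),
# though breeding-value rankings agree once the general mean is handled.
print(grm(Z_raw))
print(grm(Z_centered))
```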

  14. Maximum entropy reconstruction of poloidal magnetic field and radial electric field profiles in tokamaks

    Science.gov (United States)

    Chen, Yihang; Xiao, Chijie; Yang, Xiaoyi; Wang, Tianbo; Xu, Tianchao; Yu, Yi; Xu, Min; Wang, Long; Lin, Chen; Wang, Xiaogang

    2017-10-01

    The Laser-driven Ion beam trace probe (LITP) is a new diagnostic method for measuring poloidal magnetic field (Bp) and radial electric field (Er) in tokamaks. LITP injects a laser-driven ion beam into the tokamak, and Bp and Er profiles can be reconstructed using tomography methods. A reconstruction code has been developed to validate the LITP theory, and both 2D reconstruction of Bp and simultaneous reconstruction of Bp and Er have been attained. To reconstruct from experimental data with noise, Maximum Entropy and Gaussian-Bayesian tomography methods were applied and improved according to the characteristics of the LITP problem. With these improved methods, a reconstruction error level below 15% has been attained with a data noise level of 10%. These methods will be further tested and applied in the following LITP experiments. Supported by the ITER-CHINA program 2015GB120001, CHINA MOST under 2012YQ030142 and the National Natural Science Foundation of China under 11575014 and 11375053.

  15. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ∼14.5 ka.

  16. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.

  17. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important to understanding the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials relating to the production of the artwork, I explore a materiality of code that goes beyond the technical.

  18. Towers of generalized divisible quantum codes

    Science.gov (United States)

    Haah, Jeongwan

    2018-04-01

    A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the νth level of the Clifford hierarchy. We consider a generalization of divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^(ν-1)), Ω(d), d]] admitting a transversal gate at the νth level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
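
    The classical ingredient is easy to test directly. The following brute-force check (illustrative only) verifies that every codeword of a binary generator matrix has weight divisible by 2^ν; the [8,4] first-order Reed-Muller code, which is doubly even (divisor 4), serves as the example.

```python
# Brute-force divisibility check over the row space of a generator matrix.
import itertools
import numpy as np

def is_divisible(G, nu):
    """True if every codeword weight is divisible by 2**nu."""
    G = np.asarray(G) % 2
    k = G.shape[0]
    for bits in itertools.product([0, 1], repeat=k):
        w = int((np.array(bits) @ G % 2).sum())   # codeword weight
        if w % (2 ** nu) != 0:
            return False
    return True

# The [8,4] first-order Reed-Muller code is doubly even (divisor 4 = 2**2):
G = [[1, 1, 1, 1, 0, 0, 0, 0],
     [0, 0, 1, 1, 1, 1, 0, 0],
     [0, 0, 0, 0, 1, 1, 1, 1],
     [1, 0, 1, 0, 1, 0, 1, 0]]
print(is_divisible(G, 2))   # True
```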

  19. Symbol Stream Combining in a Convolutionally Coded System

    Science.gov (United States)

    Mceliece, R. J.; Pollara, F.; Swanson, L.

    1985-01-01

    Symbol stream combining has been proposed as a method for arraying signals received at different antennas. If convolutional coding and Viterbi decoding are used, it is shown that a Viterbi decoder based on the proposed weighted sum of symbol streams yields maximum likelihood decisions.
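
    As a sketch of the idea (the article's exact weighting may differ), soft symbols from the different antennas can be combined with weights proportional to each stream's signal amplitude over its noise variance, after which a single Viterbi decoder operates on the combined stream.

```python
# Illustrative weighted symbol-stream combining ahead of Viterbi decoding.
import numpy as np

def combine_symbol_streams(streams, amplitudes, noise_vars):
    """streams: (n_antennas, n_symbols) soft symbols from each antenna;
    weights proportional to amplitude / noise variance (MRC-style)."""
    w = np.asarray(amplitudes) / np.asarray(noise_vars)
    return np.tensordot(w, np.asarray(streams), axes=1)  # one combined stream

streams = np.array([[0.9, -1.1, 1.0, -0.8],    # antenna 1 soft symbols
                    [1.2, -0.7, 0.8, -1.3]])   # antenna 2 soft symbols
combined = combine_symbol_streams(streams, [1.0, 0.8], [0.5, 1.0])
# `combined` would then feed a single Viterbi decoder.
```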

  20. Development of standards, codes of practice and guidelines at the national level

    International Nuclear Information System (INIS)

    Swindon, T.N.

    1989-01-01

    Standards, codes of practice and guidelines are defined and their different roles in radiation protection specified. The work of the major bodies that develop such documents in Australia - the National Health and Medical Research Council and the Standards Association of Australia - is discussed. The codes of practice prepared under the Environment Protection (Nuclear Codes) Act, 1978, an act of the Australian Federal Parliament, are described and the guidelines associated with them outlined. 5 refs

  1. A Two-Stage Information-Theoretic Approach to Modeling Landscape-Level Attributes and Maximum Recruitment of Chinook Salmon in the Columbia River Basin.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L.; Lee, Danny C.

    2000-11-01

    Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits-per-spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
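
    As a sketch of the first-stage ingredient, the Ricker stock-recruitment curve R = S·exp(a - bS) can be fit by nonlinear least squares, after which maximum recruitment follows analytically at S = 1/b. The data and starting values below are toy assumptions; the paper's two-stage information-theoretic model selection is not reproduced.

```python
# Fitting a Ricker stock-recruitment curve and locating maximum recruitment.
import numpy as np
from scipy.optimize import curve_fit

def ricker(S, a, b):
    return S * np.exp(a - b * S)    # recruits as a function of spawners

S = np.array([200., 500., 900., 1500., 2500.])     # spawners (toy data)
R = np.array([420., 880., 1250., 1400., 1150.])    # recruits (toy data)

(a, b), _ = curve_fit(ricker, S, R, p0=(1.0, 1e-4))
S_peak = 1.0 / b                   # spawner level giving maximum recruitment
R_max = ricker(S_peak, a, b)       # the "maximum recruitment" quantity
print(f"a = {a:.3f}, b = {b:.2e}, max recruitment ≈ {R_max:.0f} fish")
```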

  2. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow have been developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1%, by the vector parallel calculation on 16 processors with a 1152x1152 grid, and 88.6%, by the scalar parallel calculation on 100 processors with an 800x800 grid, are obtained. Performance models are developed to analyze the performance of the LB codes. Our performance models show that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability when the available memory size of one processor element is kept at its maximum. Our performance model predicts that the execution time of the vector parallel code increases by about 3% on 500 processors. Although the 1-D domain decomposition method has in general a drawback in interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author)

  4. Global Harmonization of Maximum Residue Limits for Pesticides.

    Science.gov (United States)

    Ambrus, Árpád; Yang, Yong Zhen

    2016-01-13

    International trade plays an important role in national economies. The Codex Alimentarius Commission develops harmonized international food standards, guidelines, and codes of practice to protect the health of consumers and to ensure fair practices in the food trade. The Codex maximum residue limits (MRLs) elaborated by the Codex Committee on Pesticide Residues are based on the recommendations of the FAO/WHO Joint Meeting on Pesticide Residues (JMPR). The basic principles applied currently by the JMPR for the evaluation of experimental data and related information are described together with some of the areas in which further developments are needed.

  5. Content layer progressive coding of digital maps

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Ole Riis

    2000-01-01

    A new lossless context-based method is presented for content-progressive coding of limited-bits/pixel images, such as maps, company logos, etc., common on the WWW. Progressive encoding is achieved by separating the image into content layers based on predefined information. Information from already coded layers is used when coding subsequent layers. This approach is combined with efficient template-based context bi-level coding, context collapsing methods for multi-level images, and arithmetic coding. Relative pixel patterns are used to collapse contexts. The number of contexts is analyzed. The new methods outperform existing schemes for coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 60-70% on our layered test images.
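
    The template-based context modelling at the heart of such coders can be sketched compactly: each pixel of a bi-level layer is coded in a context formed from a small causal template, with adaptive per-context probabilities. The 4-pixel template and Krichevsky-Trofimov estimator below are assumptions of this illustration, which reports the ideal arithmetic-code length rather than producing a bitstream.

```python
# Template-context adaptive model for one bi-level layer (illustrative).
import numpy as np

def ideal_code_length(img):
    """img: 2-D 0/1 array; returns the ideal code length in bits."""
    H, W = img.shape
    counts = np.ones((16, 2)) * 0.5          # KT initial counts per context
    bits = 0.0
    for y in range(H):
        for x in range(W):
            # causal template: W, NW, N, NE neighbours (0 outside image)
            ctx = 0
            for k, (dy, dx) in enumerate([(0, -1), (-1, -1), (-1, 0), (-1, 1)]):
                yy, xx = y + dy, x + dx
                v = img[yy, xx] if 0 <= yy < H and 0 <= xx < W else 0
                ctx |= int(v) << k
            p = counts[ctx] / counts[ctx].sum()
            s = int(img[y, x])
            bits += -np.log2(p[s])           # ideal arithmetic-coding cost
            counts[ctx, s] += 1.0            # adapt the context model
    return bits

rng = np.random.default_rng(1)
layer = (rng.random((64, 64)) < 0.1).astype(np.uint8)   # sparse toy layer
print(ideal_code_length(layer), "bits for", layer.size, "pixels")
```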

  6. The maximum ground level concentration of air pollutant and the effect of plume rise on concentration estimates

    International Nuclear Information System (INIS)

    Mayhoub, A.B.; Azzam, A.

    1991-01-01

    The emission of an air pollutant from an elevated point source is described according to the Gaussian plume model. An elementary theoretical treatment of both the highest possible ground-level concentration and the downwind distance at which this maximum occurs is constructed for different stability classes. The modification of the effective release height by plume rise is taken into consideration. An illustrative case study, namely the emission from the research reactor at Inchas, is presented. The results of these analytical treatments and of the derived semi-empirical formulae are discussed and presented in a few illustrative diagrams.
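
    A minimal numerical version of this treatment is sketched below: the ground-level axial concentration of the Gaussian plume model is evaluated for an elevated source and its maximum located by a grid search. The power-law sigma curves and their coefficients are illustrative stand-ins for stability-class fits, not the paper's values.

```python
# Gaussian plume ground-level axial concentration and its maximum (sketch).
import numpy as np

Q, u, H = 1.0, 4.0, 50.0          # source (g/s), wind (m/s), effective height (m)
a_y, b_y = 0.22, 0.90             # sigma_y = a_y * x**b_y  (illustrative)
a_z, b_z = 0.12, 0.85             # sigma_z = a_z * x**b_z  (illustrative)

def conc_ground_axis(x):
    """C(x, 0, 0) for an elevated point source, Gaussian plume model."""
    sy, sz = a_y * x**b_y, a_z * x**b_z
    return Q / (np.pi * u * sy * sz) * np.exp(-H**2 / (2.0 * sz**2))

x = np.linspace(100.0, 20000.0, 20000)    # downwind distances (m)
C = conc_ground_axis(x)
i = np.argmax(C)
print(f"max C = {C[i]:.3e} g/m^3 at x = {x[i]:.0f} m")
# For a constant sigma_z/sigma_y ratio the classic analytic result
# sigma_z(x_max) = H/sqrt(2) holds; the grid search reproduces it closely.
```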

  7. The path of code linting

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Join the path of code linting and discover how it can help you reach higher levels of programming enlightenment. Today we will cover how to embrace code linters to offload the cognitive strain of preserving style standards in your code base, as well as avoiding error-prone constructs. Additionally, I will show you the journey ahead for integrating several code linters into the programming tools you already use, with very little effort.

  8. Simulation of the WWER-440/213 maximum credible accident at the EhNITs stand

    International Nuclear Information System (INIS)

    Blinkov, V.N.; Melikhov, O.I.; Melikhov, V.I.; Davydov, M.V.; Sokolin, A.V.; Shchepetil'nikov, Eh.Yu.

    2000-01-01

    Calculations of thermohydraulic processes with the ATHLET code, performed to determine the optimal conditions for modelling coolant leakage at the EhNITs stand under the maximum credible accident of an NPP with a WWER-440/213 reactor, are presented. The nozzle diameters at the stand that satisfy the local criterion of agreement with the NPP data (maximum flow) and the integral criterion of agreement (mass and energy of the coolant discharged during 10 s) are determined through parametric calculations [ru]

  9. The SWAN-SCALE code for the optimization of critical systems

    International Nuclear Information System (INIS)

    Greenspan, E.; Karni, Y.; Regev, D.; Petrie, L.M.

    1999-01-01

    The SWAN optimization code was recently developed to identify the maximum value of k_eff for a given mass of fissile material when in combination with other specified materials. The optimization process is iterative; in each iteration SWAN varies the zone-dependent concentration of the system constituents. This change is guided by the equal volume replacement effectiveness functions (EVREF) that SWAN generates using first-order perturbation theory. Previously, SWAN did not have provisions to account for the effect of the composition changes on neutron cross-section resonance self-shielding; it used the cross sections corresponding to the initial system composition. In support of the US Department of Energy Nuclear Criticality Safety Program, the authors recently removed the limitation on resonance self-shielding by coupling SWAN with the SCALE code package. The purpose of this paper is to briefly describe the resulting SWAN-SCALE code and to illustrate the effect that neutron cross-section self-shielding could have on the maximum k_eff and on the corresponding system composition.

  10. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
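
    A toy version of the opponent-channel readout makes the level-robustness concrete: with two broadly tuned, oppositely preferring populations, the normalized difference of their responses is monotonic in azimuth and independent of overall sound level. The sigmoid tuning curve below is an assumption of the sketch, not a fit to the fMRI data.

```python
# Opponent-channel azimuth readout, robust to sound level (toy model).
import numpy as np

az = np.linspace(-90.0, 90.0, 181)             # azimuth (deg); negative = left

def hemifield_response(az, sign, level):
    """Broad sigmoidal tuning preferring one hemifield; gain scales with level."""
    return level / (1.0 + np.exp(-sign * az / 20.0))

for level in (0.5, 1.0, 2.0):                  # three sound intensities
    right = hemifield_response(az, +1, level)  # channel preferring right hemifield
    left = hemifield_response(az, -1, level)   # channel preferring left hemifield
    decoded = (right - left) / (right + left)  # normalized opponent signal
    # `decoded` is monotonic in azimuth and identical across levels here,
    # i.e. the opponent read-out is unaffected by sound-level changes.
    print(level, decoded[0], decoded[90], decoded[-1])
```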

  11. Minimum and Maximum Potential Contributions to Future Sea Level Rise from Polar Ice Sheets

    Science.gov (United States)

    Deconto, R. M.; Pollard, D.

    2017-12-01

    New climate and ice-sheet modeling, calibrated to past changes in sea-level, is painting a stark picture of the future fate of the great polar ice sheets if greenhouse gas emissions continue unabated. This is especially true for Antarctica, where a substantial fraction of the ice sheet rests on bedrock more than 500 meters below sea level. Here, we explore the sensitivity of the polar ice sheets to a warming atmosphere and ocean under a range of future greenhouse gas emissions scenarios. The ice sheet-climate-ocean model used here considers time-evolving changes in surface mass balance and sub-ice oceanic melting, ice deformation, grounding line retreat on reverse-sloped bedrock (Marine Ice Sheet Instability), and newly added processes including hydrofracturing of ice shelves in response to surface meltwater and rain, and structural collapse of thick, marine-terminating ice margins with tall ice-cliff faces (Marine Ice Cliff Instability). The simulations improve on previous work by using 1) improved atmospheric forcing from a Regional Climate Model and 2) a much wider range of model physical parameters within the bounds of modern observations of ice dynamical processes (particularly calving rates) and paleo constraints on past ice-sheet response to warming. Approaches to more precisely define the climatic thresholds capable of triggering rapid and potentially irreversible ice-sheet retreat are also discussed, as is the potential for aggressive mitigation strategies like those discussed at the 2015 Paris Climate Conference (COP21) to substantially reduce the risk of extreme sea-level rise. These results, including physics that consider both ice deformation (creep) and calving (mechanical failure of marine terminating ice), expand on previously estimated limits of maximum rates of future sea level rise based solely on kinematic constraints of glacier flow. At the high end, the new results show the potential for more than 2 m of global mean sea level rise by 2100.

  12. EFLOD code for reflood heat transfer

    International Nuclear Information System (INIS)

    Gay, R.R.

    1979-01-01

    A computer code called EFLOD has been developed for simulation of the heat transfer and hydrodynamics of a nuclear power reactor during the reflood phase of a loss-of-coolant accident. EFLOD models the downcomer, lower plenum, core, and upper plenum of a nuclear reactor vessel using seven control volumes assuming either homogeneous or unequal-velocity, unequal-temperature (UVUT) models of two-phase flow, depending on location within the vessel. The moving control volume concept in which a single control volume models the quench region in the core and moves with the core liquid level was developed and implemented in EFLOD so that three control volumes suffice to model the core region. A simplified UVUT model that assumes saturated liquid above the quench front was developed to handle the nonhomogeneous flow situation above the quench region. An explicit finite difference routine is used to model conduction heat transfer in the fuel, gap, and cladding regions of the fuel rod. In simulation of a selected FLECHT-SET experimental run, EFLOD successfully predicted the midplane maximum temperature and turnaround time as well as the time-dependent advance of the core liquid level. However, the rate of advancement of the quench level and the ensuing liquid entrainment were overpredicted during the early part of the transient

  13. Experimental verification of the imposing principle for maximum permissible levels of multicolor laser radiation

    Directory of Open Access Journals (Sweden)

    Ivashin V.A.

    2013-12-01

    Aims: The study presents the results of experimental research verifying the superposition (imposing) principle for maximum permissible levels (MPL) of a single exposure of the eyes to multicolour laser radiation. This principle of the independence of the effects of the radiation at each wavelength (the imposing principle) was established and generalized to a wide range of exposure conditions. As an analysis of the literature shows, experimental verification of this approach for the action of laser radiation on the tissues of the fundus of the eye had not previously been carried out. Material and methods: An experimental laser generating radiation at the wavelengths λ1 = 0.532 µm, λ2 = 0.556 to 0.562 µm and λ3 = 0.619 to 0.621 µm was used. Experiments were carried out on the eyes of rabbits with an evenly pigmented fundus. Results: Comparison of the processed experimental data with the calculated data shows that these levels are close in their parameters. Conclusions: For the first time in the Russian Federation, experimental studies of the validity of the imposing principle for multicolour laser radiation exposure of the organ of vision have been performed. In view of the objective agreement of the experimental data with the calculated data, we can conclude that the mathematical formulas work.
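
    The imposing (superposition) principle is commonly operationalized as an additivity rule: a simultaneous multi-wavelength exposure is within limits when the sum of each component's exposure divided by its own MPL does not exceed one. The sketch below encodes that rule; the numerical MPL values are placeholders, not data from the standard.

```python
# Additivity rule for simultaneous multi-wavelength exposure (sketch).
def multicolor_within_mpl(exposures, mpls):
    """exposures, mpls: per-wavelength radiant exposures and limits (J/cm^2);
    acceptable if the sum of fractional exposures does not exceed 1."""
    ratio = sum(e / m for e, m in zip(exposures, mpls))
    return ratio <= 1.0, ratio

# Placeholder values for three wavelengths (not from BS 4803 or GOST data):
ok, r = multicolor_within_mpl([1e-3, 5e-4, 2e-4], [4e-3, 2e-3, 1.5e-3])
print(ok, round(r, 3))   # True when the summed fractional exposures <= 1
```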

  14. Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability-based code calibration. First, basic principles of structural reliability theory are introduced and it is shown how the results of FORM-based reliability analysis may be related to partial safety factors and characteristic values. Thereafter, the code calibration problem is presented in its principal decision-theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore, suggested values for acceptable annual failure probabilities are given for ultimate and serviceability limit states. Finally, the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability-based code calibration of LRFD-based design codes.

  15. Evaluation of large girth LDPC codes for PMD compensation by turbo equalization.

    Science.gov (United States)

    Minkov, Lyubomir L; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Kueppers, Franko

    2008-08-18

    Large-girth quasi-cyclic LDPC codes have been experimentally evaluated for use in PMD compensation by turbo equalization for a 10 Gb/s NRZ optical transmission system, observing one sample per bit. The net effective coding gain improvement of the girth-10, rate-0.906 code of length 11936 over the maximum a posteriori probability (MAP) detector, for a differential group delay of 125 ps, is 6.25 dB at a BER of 10^-6. The girth-10 LDPC code of rate 0.8 outperforms the girth-10 code of rate 0.906 by 2.75 dB, and provides a net effective coding gain improvement of 9 dB at the same BER. It is experimentally determined that girth-10 LDPC codes of length around 15000 approach the channel capacity limit within 1.25 dB.

  16. Nifty Native Implemented Functions: low-level meets high-level code

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Erlang Native Implemented Functions (NIFs) allow developers to implement functions in C (or C++) rather than Erlang. NIFs are useful for integrating high performance or legacy code in Erlang applications. The talk will cover how to implement NIFs, use cases, and common pitfalls when employing them. Further, we will discuss how and why Erlang applications, such as Riak, use NIFs. About the speaker: Ian Plosker is the Technical Lead, International Operations at Basho Technologies, the makers of the open source database Riak. He has been developing software professionally for 10 years and programming since childhood. Prior to working at Basho, he developed everything from CMS to bioinformatics platforms to corporate competitive intelligence management systems. At Basho, he's been helping customers be incredibly successful using Riak.

  18. A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems

    Directory of Open Access Journals (Sweden)

    M. Benyoucef

    2008-01-01

    We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses the spreading codes directly instead of the cross-correlation matrix, and thus does not require the calculation of the cross-correlation matrix (which requires 2NK^2 floating point operations (flops), where N is the processing gain and K is the number of users), which significantly reduces the overall computational complexity. It is thus suitable for long-code CDMA systems such as IS-95 and UMTS, where the cross-correlation matrix changes every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.
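
    The BSOR iteration to which the detector is asymptotically equivalent is compact to write down: solve R d = y group by group with over-relaxed updates. The demo below forms the cross-correlation matrix explicitly only to keep the sketch short (the point of the chip-level detector is precisely to avoid this); the group sizes, ω, and the toy system are assumptions.

```python
# Illustrative block SOR solve of R d = y for group-wise detection.
import numpy as np

def bsor_detect(S, y_mf, groups, omega=1.2, iters=50):
    """S: (N, K) spreading codes; y_mf: (K,) matched-filter outputs."""
    R = S.T @ S                              # cross-correlations (demo only)
    d = np.zeros_like(y_mf)
    for _ in range(iters):
        for g in groups:                     # one group-wise SIC stage
            Rg = R[g]                        # rows of this group
            resid = y_mf[g] - Rg @ d + Rg[:, g] @ d[g]   # cancel other groups
            d_new = np.linalg.solve(Rg[:, g], resid)
            d[g] = (1 - omega) * d[g] + omega * d_new    # over-relaxation
    return d        # converges to the decorrelator for 0 < omega < 2

rng = np.random.default_rng(0)
N, K = 32, 8
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # spreading codes
b = rng.choice([-1.0, 1.0], size=K)                     # transmitted bits
y = S @ b + 0.05 * rng.standard_normal(N)               # received chips
d = bsor_detect(S, S.T @ y, groups=[[0, 1, 2, 3], [4, 5, 6, 7]])
print(np.sign(d) == b)                                  # all True, typically
```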

  19. Recommendation of maximum allowable noise levels for offshore wind power systems; Empfehlung von Laermschutzwerten bei der Errichtung von Offshore-Windenergieanlagen (OWEA)

    Energy Technology Data Exchange (ETDEWEB)

    Werner, Stefanie [Umweltbundesamt, Dessau-Rosslau (Germany). Fachgebiet II 2.3

    2011-05-15

    When offshore wind farms are constructed, every single pile is hammered into the sediment by a hydraulic hammer. Noise levels at Horns Reef wind farm were in the range of 235 dB. The noise may cause damage to the auditory system of marine mammals. The Federal Environmental Office therefore recommends the definition of maximum permissible noise levels. Further, care should be taken that no marine mammals are found in the immediate vicinity of the construction site. (AKB)

  20. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.

  1. Particle Swarm Optimization Based of the Maximum Photovoltaic ...

    African Journals Online (AJOL)

    Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power since the peak power point depends on the temperature and the irradiation level. A maximum peak power point tracking is then necessary for maximum efficiency. In this work, a Particle Swarm ...
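
    A toy PSO maximum power point tracker illustrates the idea: particles are candidate operating voltages, and the swarm climbs the P-V curve to its peak. The synthetic single-peak P-V characteristic and the PSO settings below are assumptions of this sketch, not the paper's array model.

```python
# Toy PSO maximum power point tracking on a synthetic P-V curve.
import numpy as np

def pv_power(v):                      # synthetic P-V curve, peak near 30 V
    i = 8.0 * (1.0 - np.exp((v - 40.0) / 6.0))   # crude diode-like I(V)
    return np.clip(v * i, 0.0, None)

rng = np.random.default_rng(0)
n, iters = 10, 40
v = rng.uniform(0.0, 40.0, n)         # particle positions (voltages)
vel = np.zeros(n)
pbest, pbest_f = v.copy(), pv_power(v)
gbest = pbest[np.argmax(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - v) + 1.5 * r2 * (gbest - v)
    v = np.clip(v + vel, 0.0, 40.0)   # keep particles in the voltage range
    f = pv_power(v)
    better = f > pbest_f              # update personal and global bests
    pbest[better], pbest_f[better] = v[better], f[better]
    gbest = pbest[np.argmax(pbest_f)]

print(f"MPP voltage ≈ {gbest:.1f} V, power ≈ {pv_power(gbest):.1f} W")
```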

  2. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which is formed by mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated against plant data, as well as against predictions from the manufacturer, when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code are able to adequately predict the behaviour of the reactor. (Author)

  3. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
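
    The MLEM iteration itself is a short multiplicative update. In the sketch below, the Monte Carlo-computed projection matrix of the paper is replaced by a small random stand-in so the example runs on its own; everything else follows the standard MLEM form x ← x · Aᵀ(y / Ax) / Aᵀ1.

```python
# Generic MLEM reconstruction loop (toy system matrix, not the MC-computed one).
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_det = 64, 128
A = rng.random((n_det, n_pix))            # system (projection) matrix
x_true = rng.random(n_pix)
y = rng.poisson(A @ x_true * 50) / 50.0   # noisy coded-aperture projections

x = np.ones(n_pix)                        # flat initial estimate
sens = A.sum(axis=0)                      # per-pixel sensitivity, A^T 1
for _ in range(100):
    ratio = y / np.clip(A @ x, 1e-12, None)
    x *= (A.T @ ratio) / sens             # multiplicative MLEM update

print(float(np.corrcoef(x, x_true)[0, 1]))   # should approach 1
```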

  4. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  5. LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.

    Science.gov (United States)

    Djordjevic, Ivan B; Arabaci, Murat

    2010-11-22

    An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate in the strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at a BER of 10^-8. The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.
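
    The closing comparison between maximum-ratio and equal-gain combining can be illustrated with a minimal Monte Carlo sketch, assuming BPSK over independent fading branches with perfectly known channel gains; the modulation, branch count, and SNR below are placeholders, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(2)
n, branches, snr_db = 100_000, 4, 3.0
noise_sigma = 10 ** (-snr_db / 20)

s = rng.choice([-1.0, 1.0], n)                 # BPSK symbols
h = np.abs(rng.normal(size=(n, branches)))     # per-branch fading gains (assumed known at RX)
r = h * s[:, None] + noise_sigma * rng.normal(size=(n, branches))

mrc = np.sign((h * r).sum(axis=1))             # maximum-ratio: weight each branch by its gain
egc = np.sign(r.sum(axis=1))                   # equal-gain: unit weights on every branch
print("MRC BER:", np.mean(mrc != s), "  EGC BER:", np.mean(egc != s))
```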

  6. Validation of full core geometry model of the NODAL3 code in the PWR transient Benchmark problems

    International Nuclear Information System (INIS)

    Sembiring, T.M.; Pinem, S.; Liem, P.H.

    2015-01-01

    The coupled neutronic and thermal-hydraulic (T/H) code NODAL3 has been validated against several PWR static benchmarks and the NEACRP PWR transient benchmark cases. However, the NODAL3 code had not yet been validated against the transient benchmark cases of a control rod (CR) assembly ejection at the core periphery using a full core geometry model, the C1 and C2 cases. Through this research work, the accuracy of the NODAL3 code for a single CR ejection, or for the ejection of an unsymmetrical group of CRs, can be validated. The calculations with the NODAL3 code were carried out by the adiabatic method (AM) and the improved quasistatic method (IQS). All transient parameters calculated by the NODAL3 code were compared with the reference results from the PANTHER code. The maximum relative difference, 16%, occurs in the calculated time of maximum power when using the IQS method, while the relative difference of the AM method is 4% for the C2 case. All calculation results of the NODAL3 code show no systematic difference, meaning that the neutronic and T/H modules adopted in the code can be considered correct. Therefore, all calculation results obtained with the NODAL3 code are in very good agreement with the reference results. (author)

  7. Dosskin code for radiological evaluation of skin radioactive contaminations

    International Nuclear Information System (INIS)

    Cornejo D, N.

    1996-01-01

    The conceptual procedure and computational features of the DOSSKIN code are presented. This code calculates, in a highly interactive way, skin equivalent doses and the radiological risk related to radioactive skin contamination. The evaluation takes into account the contributions of contaminant daughter nuclides and the backscattering of beta particles in any skin cover. DOSSKIN also allows estimation of the maximum time available to decontaminate the affected zone, using as input the limit value of skin equivalent dose chosen by the user. A comparison of the results obtained by the DOSSKIN code with those reported by different authors is shown; the differences are less than 30%. (authors). 4 refs., 3 fig., 1 tab

  8. Spectral Amplitude Coding (SAC)-OCDMA Network with 8DPSK

    Science.gov (United States)

    Aldhaibani, A. O.; Aljunid, S. A.; Fadhil, Hilal A.; Anuar, M. S.

    2013-09-01

    Optical code division multiple access (OCDMA) techniques are required to meet the increased demand for high-speed, large-capacity communications in optical networks. In this paper, the transmission performance of a spectral amplitude coding (SAC)-OCDMA network is investigated when a conventional single-mode fiber (SMF) is used as the transmission link with 8DPSK modulation. The double-weight (DW) code used has a fixed weight of two. Simulation results reveal that the transmission distance is limited mainly by fiber dispersion when a high coding chip rate is used. For a two-user SAC-OCDMA network operating at a 2 Gbit/s data rate with two wavelengths for each user, the maximum allowable transmission distance is about 15 km.

  9. The Coding Process and Its Challenges

    Directory of Open Access Journals (Sweden)

    Judith A. Holton, Ph.D.

    2010-02-01

    Coding is the core process in classic grounded theory methodology. It is through coding that the conceptual abstraction of data and its reintegration as theory takes place. There are two types of coding in a classic grounded theory study: substantive coding, which includes both open and selective coding procedures, and theoretical coding. In substantive coding, the researcher works with the data directly, fracturing and analysing it, initially through open coding for the emergence of a core category and related concepts, and then subsequently through theoretical sampling and selective coding of data to theoretically saturate the core and related concepts. Theoretical saturation is achieved through constant comparison of incidents (indicators) in the data to elicit the properties and dimensions of each category (code). This constant comparing of incidents continues until the process yields the interchangeability of indicators, meaning that no new properties or dimensions are emerging from continued coding and comparison. At this point, the concepts have achieved theoretical saturation and the theorist shifts attention to exploring the emergent fit of potential theoretical codes that enable the conceptual integration of the core and related concepts to produce hypotheses that account for relationships between the concepts, thereby explaining the latent pattern of social behaviour that forms the basis of the emergent theory. The coding of data in grounded theory occurs in conjunction with analysis through a process of conceptual memoing, capturing the theorist's ideation of the emerging theory. Memoing occurs initially at the substantive coding level and proceeds to higher levels of conceptual abstraction as coding proceeds to theoretical saturation and the theorist begins to explore conceptual reintegration through theoretical coding.

  10. Verification of the CONPAS (CONtainment Performance Analysis System) code package

    International Nuclear Information System (INIS)

    Kim, See Darl; Ahn, Kwang Il; Song, Yong Man; Choi, Young; Park, Soo Yong; Kim, Dong Ha; Jin, Young Ho.

    1997-09-01

    CONPAS is a computer code package to integrate the numerical, graphical, and results-oriented aspects of Level 2 probabilistic safety assessment (PSA) for nuclear power plants under a PC window environment automatically. For the integrated analysis of Level 2 PSA, the code utilizes four distinct, but closely related modules: (1) ET Editor, (2) Computer, (3) Text Editor, and (4) Mechanistic Code Plotter. Compared with other existing computer codes for Level 2 PSA, and CONPAS code provides several advanced features: computational aspects including systematic uncertainty analysis, importance analysis, sensitivity analysis and data interpretation, reporting aspects including tabling and graphic as well as user-friendly interface. The computational performance of CONPAS has been verified through a Level 2 PSA to a reference plant. The results of the CONPAS code was compared with an existing level 2 PSA code (NUCAP+) and the comparison proves that CONPAS is appropriate for Level 2 PSA. (author). 9 refs., 8 tabs., 14 figs

  11. On Coding Non-Contiguous Letter Combinations

    Directory of Open Access Journals (Sweden)

    Frédéric Dandurand

    2011-06-01

    Starting from the hypothesis that printed word identification initially involves the parallel mapping of visual features onto location-specific letter identities, we analyze the type of information that would be involved in optimally mapping this location-specific orthographic code onto a location-invariant lexical code. We assume that some intermediate level of coding exists between individual letters and whole words, and that this involves the representation of letter combinations. We then investigate the nature of this intermediate level of coding given the constraints of optimality. This intermediate level of coding is expected to compress data while retaining as much information as possible about word identity. Information conveyed by letters is a function of how much they constrain word identity and how visible they are. Optimization of this coding is a combination of minimizing resources (using the most compact representations) and maximizing information. We show that in a large proportion of cases, non-contiguous letter sequences contain more information than contiguous sequences, while at the same time requiring less precise coding. Moreover, we found that the best predictor of human performance in orthographic priming experiments was within-word ranking of conditional probabilities, rather than average conditional probabilities. We conclude that from an optimality perspective, readers learn to select certain contiguous and non-contiguous letter combinations as information that provides the best cue to word identity.
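
    A minimal sketch of the kind of quantity discussed above, conditional probabilities for contiguous and non-contiguous (open) letter bigrams, is given below. The toy lexicon, uniform word priors, and maximum gap are assumptions for illustration; they are not the authors' corpus or their exact within-word ranking measure.

```python
from collections import Counter
from itertools import combinations

# Toy lexicon; the study used a full lexicon (with word frequencies).
lexicon = ["table", "cable", "stable", "tablet", "battle", "bottle"]

def open_bigrams(word, max_gap=3):
    """Ordered letter pairs separated by at most max_gap intervening letters."""
    return {(word[i], word[j]) for i, j in combinations(range(len(word)), 2)
            if j - i <= max_gap + 1}

bigram_counts = Counter(bg for w in lexicon for bg in open_bigrams(w))

def p_word_given_bigram(bigram, word):
    """P(word | bigram) under uniform word priors: how strongly the
    (possibly non-contiguous) letter pair constrains word identity."""
    return (bigram in open_bigrams(word)) / bigram_counts[bigram]

print(p_word_given_bigram(("t", "b"), "table"))   # non-contiguous pair
print(p_word_given_bigram(("a", "b"), "table"))   # contiguous pair
```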

  12. Manchester Coding Option for SpaceWire: Providing Choices for System Level Design

    Science.gov (United States)

    Rakow, Glenn; Kisin, Alex

    2014-01-01

    This paper proposes an optional coding scheme for SpaceWire in lieu of the current Data Strobe scheme, for three reasons: first, to provide a straightforward method for electrical isolation of the interface; second, to provide the ability to reduce the mass and bend radius of the SpaceWire cable; and third, to provide a means for a common physical layer over which multiple spacecraft onboard data link protocols could operate over a wide range of data rates. The intent is to accomplish these goals without significant change to existing SpaceWire design investments. The ability to optionally use Manchester coding in place of the current Data Strobe coding makes it possible to DC-balance the signal, which SpaceWire Data Strobe coding does not, and therefore to isolate the electrical interface without concern. Additionally, because the Manchester code carries the clock and the data on the same signal, the number of wires in the existing SpaceWire cable could optionally be reduced by 50%. This reduction could be an important consideration for many users of SpaceWire, as indicated by the effort already underway in the SpaceWire working group to reduce cable mass and bend radius by eliminating shields; reducing the signal count by half would provide even greater gains. It is proposed to restrict the optional Manchester coding to a fixed data rate of 10 megabits per second (Mbps) in order to keep the necessary changes simple and still able to run in current radiation-tolerant Field Programmable Gate Arrays (FPGAs). Even with this constraint, 10 Mbps will meet many applications where SpaceWire is used, including command and control applications and many instrument applications with moderate data rates. For most NASA flight implementations, SpaceWire designs are in rad-tolerant FPGAs, and the desire to preserve the heritage design investment is important for cost and risk considerations.
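
    For illustration, a minimal sketch of Manchester encoding and decoding follows, using the IEEE 802.3 convention (one of two common polarity conventions). It shows the two properties the proposal relies on: a transition in every bit cell (embedded clock) and equal time high and low (DC balance).

```python
def manchester_encode(bits):
    """IEEE 802.3 convention: 0 -> (1, 0), 1 -> (0, 1) half-bit symbols.

    Every bit cell contains a mid-cell transition, so the receiver can
    recover the clock from the data line, and each cell spends equal
    time high and low, which makes the stream DC-balanced.
    """
    mapping = {0: (1, 0), 1: (0, 1)}
    return [half for b in bits for half in mapping[b]]

def manchester_decode(symbols):
    return [0 if pair == (1, 0) else 1
            for pair in zip(symbols[0::2], symbols[1::2])]

data = [1, 0, 1, 1, 0, 0, 1]
line = manchester_encode(data)
assert manchester_decode(line) == data
assert line.count(1) == line.count(0)   # zero DC component
print(line)
```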

  13. Efficient coding schemes with power allocation using space-time-frequency spreading

    Institute of Scientific and Technical Information of China (English)

    Jiang Haining; Luo Hanwen; Tian Jifeng; Song Wentao; Liu Xingzhao

    2006-01-01

    An efficient space-time-frequency (STF) coding strategy for multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is presented for high bit rate data transmission over frequency selective fading channels. The proposed scheme is a new approach to space-time-frequency coded OFDM (COFDM) that combines OFDM with space-time coding, linear precoding and adaptive power allocation to provide higher quality of transmission in terms of the bit error rate performance and power efficiency. In addition to exploiting the maximum diversity gain in frequency, time and space, the proposed scheme enjoys high coding advantages and low-complexity decoding. The significant performance improvement of our design is confirmed by corroborating numerical simulations.

  14. Impact of maximum TF magnetic field on performance and cost of an advanced physics tokamak

    International Nuclear Information System (INIS)

    Reid, R.L.

    1983-01-01

    Parametric studies were conducted using the Fusion Engineering Design Center (FEDC) Tokamak Systems Code to investigate the impact of variation in the maximum value of the field at the toroidal field (TF) coils on the performance and cost of a low-q_ψ, quasi-steady-state tokamak. Marginal ignition, inductive current startup plus 100 s of inductive burn, and a constant value of epsilon (inverse aspect ratio) times beta poloidal were global conditions imposed on this study. A maximum TF field of approximately 10 T was found to be appropriate for this device

  15. Optimal Codes for the Burst Erasure Channel

    Science.gov (United States)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
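
    A toy sketch of the block-interleaving idea described above follows: single-parity-check codewords are transmitted column-wise, so a burst of erasures no longer than the interleaver depth lands at most once in each codeword and can be repaired from parity. The code size, depth, and burst position are arbitrary illustration values.

```python
import numpy as np

def spc_encode(block):
    """Append an even-parity bit to each row: a (k+1, k) single-parity-check code."""
    return np.column_stack([block, block.sum(axis=1) % 2])

k, depth = 4, 8                        # SPC dimension and interleaver depth
rng = np.random.default_rng(3)
data = rng.integers(0, 2, (depth, k))
code = spc_encode(data)                # `depth` codewords of length k+1

tx = code.T.reshape(-1).astype(float)  # block interleaving: send column by column
tx[10:10 + depth] = np.nan             # a burst of `depth` erasures hits the stream

rx = tx.reshape(k + 1, depth).T        # de-interleave: at most 1 erasure per codeword
for row in rx:                         # each SPC codeword repairs its erasure from parity
    miss = np.isnan(row)
    if miss.any():
        row[miss] = np.nansum(row) % 2 # the value that restores even overall parity

assert (rx[:, :k] == data).all()
print("burst of", depth, "erasures recovered")
```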

  16. General features of the neutronics design code EQUICYCLE

    International Nuclear Information System (INIS)

    Jirlow, K.

    1978-10-01

    The neutronics code EQUICYCLE has been developed and improved over a long period of time. It is especially adapted to survey-type design calculations of large fast power reactors, with particular emphasis on the nuclear parameters of a realistic equilibrium fuel cycle. Thus the code is used to evaluate the breeding performance, the power distributions, and the uranium and plutonium mass balance for realistic refuelling schemes. In addition, reactivity coefficients can be calculated and the influence of burnup can be assessed. The code is two-dimensional and treats the reactor core in R-Z geometry. The basic ideas of the calculation scheme are successive iterative improvement of cross-section sets and flux spectra, and use of the mid-cycle flux for burning the fuel according to a specified refuelling scheme. Normally, given peak burn-ups and maximum power densities are used as boundary conditions. The code is capable of handling the unconventional, so-called heterogeneous cores. (author)

  17. Error-correction coding for digital communications

    Science.gov (United States)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  18. A Comparison of Athletic Movement Among Talent-Identified Juniors From Different Football Codes in Australia: Implications for Talent Development.

    Science.gov (United States)

    Woods, Carl T; Keller, Brad S; McKeown, Ian; Robertson, Sam

    2016-09-01

    Woods, CT, Keller, BS, McKeown, I, and Robertson, S. A comparison of athletic movement among talent-identified juniors from different football codes in Australia: implications for talent development. J Strength Cond Res 30(9): 2440-2445, 2016-This study aimed to compare the athletic movement skill of talent-identified (TID) junior Australian Rules football (ARF) and soccer players. The athletic movement skill of 17 TID junior ARF players (17.5-18.3 years) was compared against 17 TID junior soccer players (17.9-18.7 years). Players in both groups were members of an elite junior talent development program within their respective football codes. All players performed an athletic movement assessment that included an overhead squat, double lunge, single-leg Romanian deadlift (both movements performed on right and left legs), a push-up, and a chin-up. Each movement was scored across 3 essential assessment criteria using a 3-point scale. The total score for each movement (maximum of 9) and the overall total score (maximum of 63) were used as the criterion variables for analysis. A multivariate analysis of variance tested the main effect of football code (2 levels) on the criterion variables, whereas a 1-way analysis of variance identified where differences occurred. A significant effect was noted, with the TID junior ARF players outscoring their soccer counterparts when performing the overhead squat and push-up. No other criteria differed significantly according to the main effect. Practitioners should be aware that specific sporting requirements may incur slight differences in athletic movement skill among TID juniors from different football codes. However, given the low athletic movement skill noted in both football codes, developmental coaches should address the underlying movement skill capabilities of juniors when prescribing physical training in both codes.

  19. Extending the maximum operation time of the MNSR reactor.

    Science.gov (United States)

    Dawahra, S; Khattab, K; Saba, G

    2016-09-01

    An effective modification to extend the maximum operation time of the Miniature Neutron Source Reactor (MNSR) and thereby enhance the utilization of the reactor has been tested using the MCNP4C code. This modification consisted of manually inserting into each of the reactor's inner irradiation tubes a chain of three connected polyethylene containers filled with water, with a total chain height of 11.5 cm. Replacement of the actual cadmium absorber with a 10B absorber was needed as well. The rest of the core structure materials and dimensions remained unchanged. A 3-D neutronic model with the new modifications was developed to compare the neutronic parameters of the old and modified cores. The excess reactivities (ρex) of the old and modified cores were 3.954 and 6.241 mk, the maximum reactor operation times were 428 and 1025 min, and the safety reactivity factors were 1.654 and 1.595, respectively. Therefore, a 139% increase in the maximum reactor operation time was obtained for the modified core. This increase enhances the utilization of the MNSR for long irradiations of unknown samples using the NAA technique and increases the amount of radioisotope production in the reactor.

  20. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  1. The PSACOIN level 1B exercise: A probabilistic code intercomparison involving a four compartment biosphere model

    International Nuclear Information System (INIS)

    Klos, R.A.; Sinclair, J.E.; Torres, C.; Mobbs, S.F.; Galson, D.A.

    1991-01-01

    The Probabilistic Systems Assessment Code (PSAC) User Group of the OECD Nuclear Energy Agency has organised a series of code intercomparison studies relevant to the performance assessment of underground repositories for radioactive wastes, known collectively by the name PSACOIN. The latest of these is designated PSACOIN Level 1b, and its case specification provides a complete assessment model of the behaviour of radionuclides following release into the biosphere. PSACOIN Level 1b differs from other biosphere-oriented intercomparison exercises in that individual dose is the end point of the calculations, as opposed to any intermediate quantity. The PSACOIN Level 1b case specification describes a simple source term which is used to simulate the release of activity to the biosphere from certain types of near-surface waste repository, the transport of radionuclides through the biosphere, and their eventual uptake by humankind. The biosphere sub-model comprises four compartments representing the top and deep soil layers, river water and river sediment. The transport of radionuclides between the physical compartments is described by ten transfer coefficients, and doses to humankind arise from the simultaneous consumption of water, fish, meat, milk and grain as well as from dust inhalation and external γ-irradiation. The parameters of the exposure pathway sub-model are chosen to be representative of an individual living in a small agrarian community. (13 refs., 3 figs., 2 tabs.)
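
    A compartment model of this type reduces to a linear system dx/dt = Kx - λx + s, with K built from the transfer coefficients. The sketch below integrates a hypothetical four-compartment system; all coefficient values, the half-life, and the source term are invented for illustration and are not the Level 1b specification.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical transfer coefficients (1/yr); column j holds the losses from
# compartment j (diagonal) and the gains it feeds to the other compartments.
# Compartments: 0 top soil, 1 deep soil, 2 river water, 3 river sediment.
K = np.array([
    [-0.12,  0.01,  0.00,  0.00],
    [ 0.10, -0.03,  0.00,  0.00],
    [ 0.02,  0.02, -0.50,  0.05],
    [ 0.00,  0.00,  0.50, -0.05],
])
lam = np.log(2) / 30.0              # radioactive decay, assumed 30-yr half-life
source = np.array([1.0, 0, 0, 0])   # unit release rate into top soil

def rhs(t, x):
    # dx/dt = inter-compartment transfers - decay + source
    return K @ x - lam * x + source

sol = solve_ivp(rhs, (0.0, 100.0), np.zeros(4))
print("compartment inventories after 100 yr:", sol.y[:, -1].round(3))
```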

  2. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  3. Optimal patch code design via device characterization

    Science.gov (United States)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement efforts, and decoding robustness against noises from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.

  4. System Level Evaluation of Innovative Coded MIMO-OFDM Systems for Broadcasting Digital TV

    Directory of Open Access Journals (Sweden)

    Y. Nasser

    2008-01-01

    Single-frequency networks (SFNs) for broadcasting digital TV are a topic of theoretical and practical interest for future broadcasting systems. Although progress has been made in their characterization, there are still considerable gaps in their deployment with MIMO techniques. The contribution of this paper is multifold. First, we investigate the possibility of applying a space-time (ST) encoder between the antennas of two sites in an SFN. Then, we introduce a 3D space-time-space block code for future terrestrial digital TV in SFN architecture. The proposed 3D code is based on a double-layer structure designed for intercell and intracell space-time-coded transmissions. Finally, we propose to adapt a technique called effective exponential signal-to-noise ratio (SNR) mapping (EESM) to predict the bit error rate (BER) at the output of the channel decoder in MIMO systems. The EESM technique as well as the simulation results are used to doubly check the efficiency of our 3D code. This efficiency is obtained for equal and unequal received powers, whatever the location of the receiver, by adequately combining ST codes. The 3D code is therefore a very promising candidate for SFN architecture with MIMO transmission.
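
    The EESM step mentioned above maps the per-subcarrier SNRs of a frequency-selective link to a single AWGN-equivalent SNR via SNR_eff = -β·ln((1/N)·Σ exp(-SNR_i/β)). A minimal sketch follows, with an assumed calibration parameter β and randomly generated subcarrier SNRs standing in for a real channel realization.

```python
import numpy as np

def eesm(snr_linear, beta):
    """Effective exponential SNR mapping.

    Compresses per-subcarrier SNRs into one AWGN-equivalent SNR; beta is
    calibrated per modulation and coding scheme (an assumed value below).
    """
    snr = np.asarray(snr_linear, dtype=float)
    return -beta * np.log(np.mean(np.exp(-snr / beta)))

rng = np.random.default_rng(4)
snrs = rng.exponential(scale=10.0, size=1024)  # Rayleigh-faded power -> exponential SNR
snr_eff = eesm(snrs, beta=3.0)
print(10 * np.log10(snr_eff), "dB effective SNR for BER lookup on an AWGN curve")
```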

  5. Suitability of the Charm HVS and a microbiological multiplate system for detection of residues in raw milk at EU maximum residue levels

    NARCIS (Netherlands)

    Nouws, J.F.M.; Egmond, van H.; Loeffen, G.; Schouten, J.; Keukens, H.; Smulders, I.; Stegeman, H.

    1999-01-01

    In this paper we assessed the suitability of the Charm HVS and a newly developed microbiological multiplate system as post-screening tests to confirm the presence of residues in raw milk at or near the maximum permissible residue level (MRL). The multiplate system is composed of Bacillus

  6. Providing thermal-hydraulic boundary conditions to the reactor code TINTE through a Flownex-TINTE coupling - HTR2008-58110

    International Nuclear Information System (INIS)

    Marais, D.; Greyvenstein, G. P.

    2008-01-01

    TINTE is a well-established reactor analysis code which models the transient behaviour of pebble bed reactor cores, but it does not include the capability to model a power conversion unit (PCU). This means that TINTE cannot model full system transients on its own. One way to overcome this problem is to supply TINTE with time-dependent thermal-hydraulic boundary conditions obtained from PCU simulations. This study investigates a method to provide boundary conditions for the nuclear code TINTE during full system transients. This was accomplished by creating a high-level interface between the systems CFD code Flownex and TINTE. An indirect coupling method is explored whereby characteristics of the PCU are matched to characteristics of the nuclear core. This method eliminates the need to iterate between the two codes. A number of transients are simulated using the coupled code and then compared against stand-alone Flownex simulations. The coupling method introduces relatively small errors when reproducing mass flow, temperature and pressure in steady-state analysis, but these become more pronounced when dealing with fast thermal-hydraulic transients. Decreasing the maximum time step length of TINTE reduces this problem, but increases the computational time. (authors)

  7. Comparison of least-squares vs. maximum likelihood estimation for standard spectrum technique of β−γ coincidence spectrum analysis

    International Nuclear Information System (INIS)

    Lowrey, Justin D.; Biegalski, Steven R.F.

    2012-01-01

    The spectrum deconvolution analysis tool (SDAT) software code was written and tested at The University of Texas at Austin utilizing the standard spectrum technique to determine activity levels of Xe-131m, Xe-133m, Xe-133, and Xe-135 in β–γ coincidence spectra. SDAT was originally written to utilize the method of least-squares to calculate the activity of each radionuclide component in the spectrum. Recently, maximum likelihood estimation was also incorporated into the SDAT tool. This is a robust statistical technique to determine the parameters that maximize the Poisson distribution likelihood function of the sample data. In this case it is used to parameterize the activity level of each of the radioxenon components in the spectra. A new test dataset was constructed utilizing Xe-131m placed on a Xe-133 background to compare the robustness of the least-squares and maximum likelihood estimation methods for low counting statistics data. The Xe-131m spectra were collected independently from the Xe-133 spectra and added to generate the spectra in the test dataset. The true independent counts of Xe-131m and Xe-133 are known, as they were calculated before the spectra were added together. Spectra with both high and low counting statistics are analyzed. Studies are also performed by analyzing only the 30 keV X-ray region of the β–γ coincidence spectra. Results show that maximum likelihood estimation slightly outperforms least-squares for low counting statistics data.
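
    A toy version of the comparison can be written in a few lines: synthesize a low-count spectrum as a Poisson-sampled mixture of two reference spectra, then estimate the component activities both by least squares and by minimizing the Poisson negative log-likelihood. The Gaussian-shaped reference spectra and activity levels below are invented stand-ins for the radioxenon standard spectra, not SDAT's data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n_ch = 64
# Assumed reference (standard) spectra for two components, unit area each.
s1 = np.exp(-0.5 * ((np.arange(n_ch) - 20) / 4.0) ** 2); s1 /= s1.sum()
s2 = np.exp(-0.5 * ((np.arange(n_ch) - 40) / 6.0) ** 2); s2 /= s2.sum()
S = np.column_stack([s1, s2])

a_true = np.array([30.0, 80.0])        # low-count component activities
y = rng.poisson(S @ a_true)            # measured spectrum with counting noise

# Least squares: minimize ||y - S a||^2.
a_ls, *_ = np.linalg.lstsq(S, y, rcond=None)

# Poisson maximum likelihood: minimize the negative log-likelihood.
def nll(a):
    mu = S @ np.abs(a) + 1e-12         # crude nonnegativity for the toy example
    return np.sum(mu - y * np.log(mu))

a_ml = np.abs(minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead").x)
print("true:", a_true, " LS:", a_ls.round(1), " MLE:", a_ml.round(1))
```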

  8. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels, but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strongly nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  9. Visual communication with retinex coding.

    Science.gov (United States)

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  10. Visual Communication with Retinex Coding

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.; Davis, Richard E.; Alter-Gartenberg, Rachel

    2000-04-01

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  11. Ultrasound strain imaging using Barker code

    Science.gov (United States)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft tissue elasticity in order to help clinicians detect lesions or cancers in tissues. In this paper, the Barker code is applied to strain imaging to improve its quality. The Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched filter output is -22 dB, which is unacceptable for ultrasound strain imaging, because a high sidelobe level will cause high decorrelation noise. Instead of using the conventional matched filter, we use a Wiener filter to decode the Barker-coded echo signal and suppress the range sidelobes. We also compare the performance of the Barker code and a conventional short pulse by simulation. The simulation results demonstrate that the performance of the Wiener filter is much better than that of the matched filter, and that the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse under low-eSNR or great-depth conditions, owing to the increased eSNR it provides.
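
    The -22 dB figure quoted above follows directly from the Barker-13 autocorrelation, whose mainlobe is 13 and whose sidelobes have magnitude at most 1; the sketch below reproduces it with a plain matched filter (the paper's Wiener-filter decoding is not shown here).

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Matched-filter output = autocorrelation: mainlobe 13, sidelobes +/-1,
# i.e. a peak sidelobe level of 20*log10(1/13) ~= -22.3 dB.
mf_out = np.correlate(barker13, barker13, mode="full")
psl_db = 20 * np.log10(np.max(np.abs(mf_out[:12])) / mf_out.max())
print(mf_out.astype(int))
print(f"matched-filter peak sidelobe level: {psl_db:.1f} dB")
```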

  12. Short binary convolutional codes with maximal free distance for rates 2/3 and 3/4

    DEFF Research Database (Denmark)

    Paaske, Erik

    1974-01-01

    A search procedure is developed to find good short binary (N, N-1) convolutional codes. It uses simple rules to discard from the complete ensemble of codes a large fraction whose free distance d_free either cannot achieve the maximum value or is equal to the d_free of some code in the remaining set. Further, the search among the remaining codes is started in a subset in which we expect the possibility of finding codes with large values of d_free to be good. A number of short, optimum (in the sense of maximizing d_free), rate-2/3 and 3/4 codes found by the search procedure are listed.

  13. Fuel rod computations. The COMETHE code in its CEA version

    International Nuclear Information System (INIS)

    Lenepveu, Dominique.

    1976-01-01

    The COMETHE code (COde d'evolution MEcanique et THermique) is intended for computing the irradiation behavior of water reactor fuel pins. It is concerned with steadily operated cylindrical pins containing fuel pellet stacks (UO2 or PuO2). The pin consists of five different axial zones: two expansion chambers, two blankets, and a central core that may be divided into several stacks separated by plugs. For computation, the pin is divided into slices (maximum 15), in turn divided into rings (maximum 50). Information is obtained for each slice: the radial temperature distribution, heat transfer coefficients, thermal flux at the pin surface, changes in geometry according to temperature conditions, and specific burn-up. The physical models involved account for heat transfer, fission gas release, fuel expansion, and creep of the can. Results computed with COMETHE are compared with those from ELP and EPEL irradiation experiments [fr

  14. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y.V.; Zaitsev, S.I.; Tarankov, G.A. [OKB Gidropress (Russian Federation)

    1995-12-31

    Comparison of the results of calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions involving cessation of feedwater supply is presented in this report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  15. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y V; Zaitsev, S I; Tarankov, G A [OKB Gidropress (Russian Federation)

    1996-12-31

    Comparison of the results of calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions involving cessation of feedwater supply is presented in this report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  16. Level 1 - level 2 interface

    International Nuclear Information System (INIS)

    Boneham, P.

    2003-01-01

    The Plant Damage States (PDS) are the starting point for the Level 2 analysis. A PDS is a group of core damage sequences that are expected to have similar severe accident progressions. In this paper, an overview of the Level 1/Level 2 interface, example PDS parameters, example PDS definitions using codes, and an example bridge tree are presented. PDS frequency calculation (identification of the sequences for each PDS in Level 1 and splitting of core damage sequences that have different Level 2 progressions), code calculations providing support for grouping decisions and timings, as well as the PDS frequencies and definitions input to Level 2, are also discussed

  17. Sensitivity analysis of a low-level waste environmental transport code

    International Nuclear Information System (INIS)

    Hiromoto, G.

    1989-01-01

    Results are presented from a sensitivity analysis of a computer code designed to simulate the environmental transport of radionuclides buried at shallow land waste repositories. A sensitivity analysis methodology, based on response surface replacement and statistical sensitivity estimators, was developed to address the relative importance of the input parameters to the model output. A response surface replacement for the model was constructed by stepwise regression, after sampling input vectors from the ranges and distributions of the input variables and running the code to generate the associated output data. Sensitivity estimators were computed using the partial rank correlation coefficients and the standardized rank regression coefficients. The results showed that the techniques employed in this work provide a feasible means to perform a sensitivity analysis of general nonlinear environmental radionuclide transport models. (author) [pt
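
    A minimal sketch of rank-based sensitivity estimators of this kind follows, computing standardized rank regression coefficients and rank correlations (plain Spearman rather than the partial variant used in the report) for a toy three-parameter model; the input names, distributions, and model are invented for illustration.

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

rng = np.random.default_rng(6)
n = 500
# Hypothetical sampled inputs: sorption coefficient Kd, infiltration rate, inventory.
X = np.column_stack([rng.lognormal(0.0, 1.0, n),
                     rng.uniform(0.1, 1.0, n),
                     rng.normal(5.0, 1.0, n)])
y = 2.0 / X[:, 0] * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0.0, 0.05, n)  # toy model output

# Standardized rank regression coefficients (SRRC): regress the standardized
# ranks of the output on the standardized ranks of the inputs.
rX = rankdata(X, axis=0)
Xr = (rX - rX.mean(axis=0)) / rX.std(axis=0)
ry = rankdata(y)
yr = (ry - ry.mean()) / ry.std()
srrc, *_ = np.linalg.lstsq(Xr, yr, rcond=None)

for i, name in enumerate(["Kd", "infiltration", "inventory"]):
    rho, _ = spearmanr(X[:, i], y)
    print(f"{name}: SRRC = {srrc[i]:+.2f}, Spearman rho = {rho:+.2f}")
```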

  18. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
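
    The authors provide MATLAB code; purely as an illustration of the underlying identity, the Python sketch below computes the maximum Pc for m equally likely alternatives with discrete observation distributions as Pc = Σ_x max_i π_i p_i(x), and checks it against the known Gaussian yes/no result Φ(d'/2).

```python
import numpy as np

def max_pc(pmfs, priors=None):
    """Maximum proportion correct of an optimal (maximum-likelihood) observer.

    pmfs: array (m, n) with one probability mass function per stimulus
    alternative over a common discrete grid of n observation values.
    The ideal observer picks argmax_i prior_i * p_i(x), giving
    Pc_max = sum over x of max_i prior_i * p_i(x).
    """
    pmfs = np.asarray(pmfs, dtype=float)
    m = pmfs.shape[0]
    priors = np.full(m, 1.0 / m) if priors is None else np.asarray(priors)
    return float(np.sum(np.max(priors[:, None] * pmfs, axis=0)))

# Two Gaussian alternatives discretized on a fine grid (yes/no task, d' = 1):
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
p0 = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi) * dx
p1 = np.exp(-0.5 * (x - 1.0)**2) / np.sqrt(2 * np.pi) * dx
print(max_pc([p0, p1]))   # ~0.6915, i.e. Phi(d'/2) for d' = 1
```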

  19. Development of Fuel ROd Behavior Analysis code (FROBA) and its application to AP1000

    International Nuclear Information System (INIS)

    Yu, Hongxing; Tian, Wenxi; Yang, Zhen; Su, G.H.; Qiu, Suizheng

    2012-01-01

    Highlights: A Fuel ROd Behavior Analysis code (FROBA) has been developed; the effects of irradiation and burnup are considered in FROBA; the comparison with INL's results shows good agreement; the FROBA code was applied to AP1000; peak fuel temperature, gap width, hoop strain, etc. were obtained. Abstract: The reliable prediction of nuclear fuel rod behavior is of great importance for the safety evaluation of nuclear reactors. In the present study, a thermo-mechanical coupling code, FROBA (Fuel ROd Behavior Analysis), has been independently developed with consideration of irradiation and burnup effects. The thermodynamic, geometrical and mechanical behaviors have been predicted and were compared with results obtained by Idaho National Laboratory to validate the reliability and accuracy of the FROBA code. The validated code was applied to analyze the fuel behavior of AP1000 at different burnup levels. The thermal results show that the predicted peak fuel temperature passes through three stages over the fuel lifetime. The mechanical results indicate that hoop strain at high power is greater than that at low power, which means that the gap closure phenomenon will occur earlier at high power rates. The maximum cladding stress meets the yield strength limitation over the entire fuel lifetime. All results show that there are sufficient safety margins for the fuel rod behavior of AP1000 at rated operating conditions. The FROBA code is expected to be applicable to more complicated fuel rod scenarios after some modifications.

  20. Ion energy loss at maximum stopping power in a laser-generated plasma

    International Nuclear Information System (INIS)

    Cayzac, W.

    2013-01-01

    In the frame of this thesis, a new experimental setup for the measurement of the energy loss of carbon ions at maximum stopping power in a hot laser-generated plasma has been developed and successfully tested. In this parameter range, where the projectile velocity is of the same order of magnitude as the thermal velocity of the plasma free electrons, large uncertainties of up to 50% are present in the stopping-power description. To date, no experimental data are available to perform a theory benchmarking. Testing the different stopping theories is nevertheless essential for inertial confinement fusion, and in particular for the understanding of the alpha-particle heating of the thermonuclear fuel. Here, for the first time, precise measurements were carried out in a reproducible and entirely characterized beam-plasma configuration. It involved a nearly fully-stripped ion beam probing a homogeneous fully-ionized plasma. This plasma was generated by irradiating a thin carbon foil with two high-energy laser beams, and it features a maximum electron temperature of 200 eV. The plasma conditions were simulated with a two-dimensional radiative hydrodynamic code, while the ion-beam charge-state distribution was predicted by means of a Monte-Carlo code describing the charge-exchange processes of projectile ions in plasma. To probe at maximum stopping power, high-frequency pulsed ion bunches were decelerated to an energy of 0.5 MeV per nucleon. The ion energy loss was determined by a time-of-flight measurement using a specifically developed chemical-vapor-deposition diamond detector that was screened against any plasma radiation. A first experimental campaign was carried out using this newly developed platform, in which a precision better than 200 keV on the energy loss was reached. This made it possible, via knowledge of the plasma and beam parameters, to reliably test several stopping theories, either based on perturbation theory or on a nonlinear T-Matrix formalism. A preliminary

  1. Use of a computer code for dose distribution studies in a 60Co industrial irradiator

    Science.gov (United States)

    Piña-Villalpando, G.; Sloan, D. P.

    1995-09-01

    This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes with an apparent density of 0.13 g/cm³; that product was chosen because of its uniform size, large quantity, and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique; build-up factor fitting was done by geometrical progression, and combinatorial geometry is used for the system description. The main modifications to the code were related to source simulation, using point sources instead of pencils, and an energy spectrum and an anisotropic emission spectrum were included. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was 3% lower than the experimental average value (14.3 kGy).

  2. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.

  3. Cooperation of experts' opinion, experiment and computer code development

    International Nuclear Information System (INIS)

    Wolfert, K.; Hicken, E.

    The connection between code development, code assessment, and confidence in the analysis of transients is discussed. In this manner, the major sources of errors in the codes and errors in applications of the codes are shown. Standard problem results emphasize that, in order to have confidence in licensing statements, the codes must be physically realistic and the code user must be qualified and experienced. We discuss why there is disagreement between the licensing authority and the vendor concerning assessment of the fulfillment of safety goal requirements. The answer to the question lies in the different confidence levels of the assessment of transient analysis. It is expected that a decrease in the disagreement will result from an increased confidence level. Strong efforts will be made to increase this confidence level through improvements in the codes, experiments, and related organizational structures. Because of the low probability of loss-of-coolant accidents in the nuclear industry, assessment must rely on analytical techniques and experimental investigations. (orig./HP) [de

  4. Performance Comparison of Containment PT analysis between CAP and CONTEMPT Code

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Yeon Jun; Hong, Soon Joon; Hwang, Su Hyun; Kim, Min Ki; Lee, Byung Chul [FNC Tech., Seoul (Korea, Republic of)]; Ha, Sang Jun; Choi, Hoon [KHNP Central Research Institute, Daejeon (Korea, Republic of)]

    2013-10-15

    CAP, in the form that is linked with SPACE, computes the containment back-pressure during a LOCA. In the previous SAR (safety analysis report) of Shin-Kori Units 3 and 4, the CONTEMPT series of codes (hereafter referred to simply as 'CONTEMPT') was used to evaluate containment safety during the postulated loss-of-coolant accident (LOCA). In more detail, CONTEMPT-LT/028 was used to calculate the containment maximum PT, while CONTEMPT4/MOD5 was used to calculate the minimum PT. In the minimum PT analysis, CONTEMPT4/MOD5, which provides the back-pressure condition of the containment, was linked with RELAP5/MOD3.3, which calculates the amount of blowdown into the containment. In this analysis, CONTEMPT4/MOD5 was modified based on KREM. The CONTEMPT code was developed to predict the long-term behavior of water-cooled nuclear reactor containment systems subjected to LOCA conditions. It calculates the time variation of compartment pressures, temperatures, mass and energy inventories, heat structure temperature distributions, energy exchange with adjacent compartments, and the effect of leakage on the containment response. Models are provided for fan coolers and cooling sprays as engineered safety systems. Any compartment may have both a liquid pool region and an air-vapor atmosphere region above the pool. Each region is assumed to have a uniform temperature, but the temperatures of the two regions may be different. As mentioned above, CONTEMPT has similar code features and is therefore expected to show similar analysis performance to CAP. In this study, the differences between CAP and two CONTEMPT code versions (CONTEMPT-LT/028 for maximum PT and CONTEMPT4/MOD5 for minimum PT) are identified in detail, and the code performances are compared for the same problem. A code-by-code comparison was carried out to identify the differences in LOCA analysis between the CONTEMPT series and the CAP code. With regard to important factors that affect the transient behavior of compartment thermodynamic

  5. Performance Comparison of Containment PT analysis between CAP and CONTEMPT Code

    International Nuclear Information System (INIS)

    Choo, Yeon Jun; Hong, Soon Joon; Hwang, Su Hyun; Kim, Min Ki; Lee, Byung Chul; Ha, Sang Jun; Choi, Hoon

    2013-01-01

    CAP, linked with SPACE, computes the containment back-pressure during a LOCA. In the previous SAR (safety analysis report) of Shin-Kori Units 3 and 4, the CONTEMPT series of codes (hereafter simply 'CONTEMPT') was used to evaluate containment safety during the postulated loss-of-coolant accident (LOCA). In more detail, CONTEMPT-LT/028 was used to calculate the containment maximum PT, while CONTEMPT4/MOD5 calculated the minimum PT. In the minimum PT analysis, CONTEMPT4/MOD5, which provides the containment back-pressure condition, was linked with RELAP5/MOD3.3, which calculates the amount of blowdown into the containment. In this analysis, CONTEMPT4/MOD5 was modified based on KREM. The CONTEMPT code was developed to predict the long-term behavior of water-cooled nuclear reactor containment systems subjected to LOCA conditions. It calculates the time variation of compartment pressures, temperatures, mass and energy inventories, heat structure temperature distributions, energy exchange with adjacent compartments, and the effect of leakage on containment response. Models are provided for fan coolers and cooling sprays as engineered safety systems. Any compartment may have both a liquid pool region and an air-vapor atmosphere region above the pool. Each region is assumed to have a uniform temperature, but the temperatures of the two regions may differ. As mentioned above, CONTEMPT has similar code features and is therefore expected to show analysis performance similar to that of CAP. In this study, the differences between CAP and the two CONTEMPT code versions (CONTEMPT-LT/028 for maximum PT and CONTEMPT4/MOD5 for minimum PT) are identified in detail, and the code performances were compared for the same problem. A code-by-code comparison was carried out to identify the differences in LOCA analysis between the CONTEMPT series and the CAP code. With regard to important factors that affect the transient behavior of compartment thermodynamic state in

  6. Optimal Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, Michael Havbro

    1994-01-01

    Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability...... level is minimized. Code calibration on a decision theoretical basis is also considered and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown....
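
    As an illustration of the calibration idea described above, the following Python sketch minimizes the weighted squared deviation between per-structure reliability indices and a target index over a single partial safety factor. The reliability curves beta_i(gamma), their coefficients and weights are invented placeholders; in practice they would come from FORM/SORM analyses of each design equation in the class.

      import numpy as np

      def beta(gamma, a, b):
          # toy reliability-index curve beta_i(gamma); real curves come from FORM/SORM
          return a + b * np.log(gamma)

      # (a, b, weight) for three structure types in the class -- invented numbers
      structures = [(2.9, 1.4, 0.3), (3.1, 1.1, 0.4), (3.3, 0.9, 0.3)]
      beta_target = 3.8

      gammas = np.linspace(1.0, 2.5, 1501)
      penalty = sum(w * (beta(gammas, a, b) - beta_target) ** 2 for a, b, w in structures)
      print(f"calibrated partial safety factor: {gammas[int(np.argmin(penalty))]:.3f}")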

  7. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of a major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
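
    A minimal Python sketch of the byte-level n-gram idea follows. The profile size, n-gram length and the use of a plain set intersection as the similarity measure are assumptions for illustration; the paper's simplified profile and similarity measure may differ in detail.

      from collections import Counter

      def byte_ngram_profile(source: bytes, n: int = 3, top: int = 1500) -> set:
          # simplified profile: the `top` most frequent byte-level n-grams
          grams = Counter(source[i:i + n] for i in range(len(source) - n + 1))
          return {g for g, _ in grams.most_common(top)}

      def similarity(profile_a: set, profile_b: set) -> int:
          # simplified profile intersection: number of shared n-grams
          return len(profile_a & profile_b)

      def attribute(query_src: bytes, author_samples: dict) -> str:
          # assign the disputed program to the author with the most shared n-grams
          q = byte_ngram_profile(query_src)
          return max(author_samples,
                     key=lambda a: similarity(byte_ngram_profile(author_samples[a]), q))

      print(attribute(b"for(int i=0;i<n;++i) sum+=a[i];",
                      {"alice": b"for(int i=0;i<n;++i) total+=v[i];",
                       "bob": b"while (k--) { acc = acc + arr[k]; }"}))

    Because the profiles are built from raw bytes rather than tokens, the same pipeline works unchanged for C++, Java, or any other language.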

  8. Guidance document on the derivation of maximum permissible risk levels for human intake of soil contaminants

    NARCIS (Netherlands)

    Janssen PJCM; Speijers GJA; CSR

    1997-01-01

    This report contains a basic step-by-step description of the procedure followed in the derivation of the human-toxicological Maximum Permissible Risk (MPR; in Dutch: Maximum Toelaatbaar Risico, MTR) for soil contaminants. In recent years this method has been applied for a large number of compounds

  9. Code Calibration as a Decision Problem

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, Michael Havbro

    1993-01-01

    Calibration of partial coefficients for a class of structures where no code exists is considered. The partial coefficients are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code...... calibration on a decision theoretical basis is discussed. Results from code calibration for rubble mound breakwater designs are shown....

  10. Some optimizations of the animal code

    International Nuclear Information System (INIS)

    Fletcher, W.T.

    1975-01-01

    Optimizing techniques were performed on a version of the ANIMAL code (MALAD1B) at the source-code (FORTRAN) level. Sample optimizing techniques and operations used in MALADOP--the optimized version of the code--are presented, along with a critique of some standard CDC 7600 optimizing techniques. The statistical analysis of total CPU time required for MALADOP and MALAD1B shows a run-time saving of 174 msec (almost 3 percent) in the code MALADOP during one time step

  11. Determining Maximum Photovoltaic Penetration in a Distribution Grid considering Grid Operation Limits

    DEFF Research Database (Denmark)

    Kordheili, Reza Ahmadi; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

    High penetration of photovoltaic panels in a distribution grid can bring the grid to its operation limits. The main focus of the paper is to determine the maximum photovoltaic penetration level in the grid. Three main criteria were investigated for determining the maximum penetration level of PV panels: maximum voltage deviation of customers, cable current limits, and transformer nominal value. Load modeling is done using the Velander formula. Since PV generation is highest in the summer due to irradiation, a summer day was chosen to determine maximum penetration. Voltage deviation of different buses was investigated for different penetration levels. The proposed model was simulated on a Danish distribution grid. Three different PV location scenarios were investigated for this grid: even distribution of PV panels, aggregation of panels at the beginning of each feeder, and aggregation of panels at the end of each feeder.
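
    The following sketch illustrates the style of screening involved: a Velander-type load estimate and a first-order feeder voltage-rise check to find the largest PV injection that respects a voltage-deviation limit. All coefficients, impedances and limits are illustrative, not the paper's Danish grid data, and a real study would use a power-flow solver rather than this closed-form estimate.

      import numpy as np

      def velander_peak_kw(annual_kwh, k1=2.5e-4, k2=0.025):
          # Velander's formula for coincident peak load; coefficients depend on
          # customer type and are illustrative here
          return k1 * annual_kwh + k2 * np.sqrt(annual_kwh)

      def max_pv_penetration_kw(load_kw, r_ohm=0.4, v_nom=400.0, dv_limit=0.05):
          # largest PV injection whose first-order voltage rise dV/V ~ R*P/V^2
          # stays within the allowed deviation
          for pv_kw in np.arange(0.0, 500.0, 1.0):
              dv = r_ohm * (pv_kw - load_kw) * 1e3 / v_nom**2
              if dv > dv_limit:
                  return pv_kw - 1.0
          return 500.0

      peak = velander_peak_kw(250_000)      # feeder serving 250 MWh/yr of consumption
      print(f"peak load {peak:.0f} kW -> max PV {max_pv_penetration_kw(peak):.0f} kW")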

  12. HADOC: a computer code for calculation of external and inhalation doses from acute radionuclide releases

    International Nuclear Information System (INIS)

    Strenge, D.L.; Peloquin, R.A.

    1981-04-01

    The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model, with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contributions to dose by radionuclide and exposure mode are also printed if requested
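
    The dose arithmetic underlying such codes can be sketched as follows. This is not the HADOC implementation: the dilution factor chi/Q would come from the Hanford dispersion model, and the breathing rate and dose conversion factors shown are placeholder values rather than library data.

      # Generic sketch of acute-release dose arithmetic (not the HADOC code itself)
      def inhalation_dose_sv(release_bq, chi_over_q_s_m3,
                             breathing_rate_m3_s=3.3e-4, dcf_sv_per_bq=1.0e-8):
          # dose = (release x dilution factor) x breathing rate x dose conversion factor
          # chi/Q comes from the dispersion model; the DCF is nuclide specific (placeholder)
          return release_bq * chi_over_q_s_m3 * breathing_rate_m3_s * dcf_sv_per_bq

      def submersion_dose_sv(release_bq, chi_over_q_s_m3, dcf_sv_m3_per_bq_s=1.0e-14):
          # external dose from immersion in the plume (semi-infinite cloud model)
          return release_bq * chi_over_q_s_m3 * dcf_sv_m3_per_bq_s

      print(inhalation_dose_sv(1e12, 1e-5), submersion_dose_sv(1e12, 1e-5))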

  13. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code. developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways tot he biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases

  14. Probabilistic maximum-value wind prediction for offshore environments

    DEFF Research Database (Denmark)

    Staid, Andrea; Pinson, Pierre; Guikema, Seth D.

    2015-01-01

    statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed......, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop...... the full probabilistic distribution of maximum wind speed. Knowledge of the maximum wind speed for an offshore location within a given period can inform decision-making regarding turbine operations, planned maintenance operations and power grid scheduling in order to improve safety and reliability...
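
    A compact sketch of the two-stage approach, using synthetic data in place of the ECMWF covariates: fit a model for the mean 3-hour maximum, then use the empirical quantiles of the training residuals to issue a full predictive distribution. The linear fit stands in for the paper's linear/GAM/MARS candidates, and all numbers are invented.

      import numpy as np

      rng = np.random.default_rng(0)

      # illustrative stand-ins for the ECMWF covariates and observed 3-h maximum winds
      X = rng.normal(size=(2000, 3))                      # gust speed, mean wind, CAPE
      y = 12 + X @ np.array([2.5, 1.8, 0.4]) + rng.normal(0, 1.2, 2000)

      # stage 1: model for the mean of the 3-h maximum wind speed
      A = np.c_[np.ones(len(X)), X]
      beta, *_ = np.linalg.lstsq(A, y, rcond=None)
      resid = y - A @ beta

      # stage 2: empirical residual quantiles turn the point forecast into a distribution
      q = np.quantile(resid, [0.05, 0.50, 0.95])
      x_new = np.array([1.0, 0.5, -0.2, 0.1])             # intercept + covariate values
      print("5/50/95% maximum-wind forecast:", np.round(x_new @ beta + q, 2))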

  15. The consequences of a reduction in the administratively applied maximum annual dose equivalent level for an individual in a group of occupationally exposed workers

    International Nuclear Information System (INIS)

    Harrison, N.T.

    1980-02-01

    An analysis is described for predicting the consequences of a reduction in the administratively applied maximum dose equivalent level to individuals in a group of workers occupationally exposed to ionising radiations, for the situation in which no changes are made to the working environment. This limitation of the maximum individual dose equivalent is accommodated by allowing the number of individuals in the working group to increase. The derivation of the analysis is given, together with worked examples, which highlight the important assumptions that have been made and the conclusions that can be drawn. The results are obtained in the form of the capacity of the particular working environment to accommodate the limitation of the maximum individual dose equivalent, the increase in the number of workers required to carry out the productive work and any consequent increase in the occupational collective dose equivalent. (author)
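
    The core trade-off can be illustrated with a toy calculation: if a fixed collective dose must be delivered and the average individual dose settles at some fraction of the administrative cap, lowering the cap inflates the required team size. The fixed-collective-dose assumption and the 0.6 fraction are illustrative simplifications; the report's analysis also allows the collective dose itself to increase.

      import math

      def required_workers(collective_dose_person_sv, cap_sv, mean_fraction_of_cap=0.6):
          # team size if the average individual dose settles at a fraction of the cap
          return math.ceil(collective_dose_person_sv / (mean_fraction_of_cap * cap_sv))

      for cap_sv in (0.05, 0.03, 0.015):          # successively lower administrative caps
          print(f"cap {cap_sv * 1e3:.0f} mSv/yr -> {required_workers(2.4, cap_sv)} workers")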

  16. Non-Protein Coding RNAs

    CERN Document Server

    Walter, Nils G; Batey, Robert T

    2009-01-01

    This book assembles chapters from experts in the Biophysics of RNA to provide a broadly accessible snapshot of the current status of this rapidly expanding field. The 2006 Nobel Prize in Physiology or Medicine was awarded to the discoverers of RNA interference, highlighting just one example of a large number of non-protein coding RNAs. Because non-protein coding RNAs outnumber protein coding genes in mammals and other higher eukaryotes, it is now thought that the complexity of organisms is correlated with the fraction of their genome that encodes non-protein coding RNAs. Essential biological processes as diverse as cell differentiation, suppression of infecting viruses and parasitic transposons, higher-level organization of eukaryotic chromosomes, and gene expression itself are found to largely be directed by non-protein coding RNAs. The biophysical study of these RNAs employs X-ray crystallography, NMR, ensemble and single molecule fluorescence spectroscopy, optical tweezers, cryo-electron microscopy, and ot...

  17. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Science.gov (United States)

    Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong

    2012-01-01

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.

  18. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Directory of Open Access Journals (Sweden)

    Ai-bing Zhang

    Full Text Available Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.

  19. An Efficient Platform for the Automatic Extraction of Patterns in Native Code

    Directory of Open Access Journals (Sweden)

    Javier Escalada

    2017-01-01

    Full Text Available Different software tools, such as decompilers, code quality analyzers, recognizers of packed executable files, authorship analyzers, and malware detectors, search for patterns in binary code. The use of machine learning algorithms, trained with programs taken from the huge number of applications in the existing open source code repositories, allows finding patterns not detected with the manual approach. To this end, we have created a versatile platform for the automatic extraction of patterns from native code, capable of processing big binary files. Its implementation has been parallelized, providing important runtime performance benefits for multicore architectures. Compared to single-processor execution, the best configuration yields an average speedup of 3.5 against the theoretical maximum of 4.

  20. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Volume 2. Special test cases

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1985-08-01

    This document was written for the National Low-Level Waste Management Program to provide guidance for managers and site operators who need to select ground-water transport codes for assessing shallow-land burial site performance. The guidance given in this report also serves the needs of applications-oriented users who work under the direction of a manager or site operator. The guidelines are published in two volumes designed to support the needs of users having different technical backgrounds. An executive summary, published separately, gives managers and site operators an overview of the main guideline report. Volume 1, titled ''Guideline Approach,'' consists of Chapters 1 through 5 and a glossary. Chapters 2 through 5 provide the more detailed discussions about the code selection approach. This volume, Volume 2, consists of four appendices reporting on the technical evaluation test cases designed to help verify the accuracy of ground-water transport codes. 20 refs

  1. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
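
    For background, the sketch below shows Fitch's algorithm scoring a single site on a fixed binary tree under maximum parsimony; it is the search over trees, not this scoring step, that makes MP NP-hard, and this is not the Steiner-tree approximation of the paper.

      # Fitch small-parsimony scoring of one site on a fixed binary tree
      def fitch(node):
          # node is either a leaf (a set of states) or a pair (left, right)
          if isinstance(node, set):
              return node, 0
          (ls, lc), (rs, rc) = fitch(node[0]), fitch(node[1])
          inter = ls & rs
          # empty intersection at an internal node costs one substitution
          return (inter, lc + rc) if inter else (ls | rs, lc + rc + 1)

      tree = (({"A"}, {"G"}), ({"A"}, ({"A"}, {"T"})))
      print(fitch(tree))   # ({'A'}, 2): two substitutions are needed at this site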

  2. 40 CFR Table 2 to Subpart Ggggg of... - Control Levels as Required by § 63.7895(a) for Tanks Managing Remediation Material With a Maximum...

    Science.gov (United States)

    2010-07-01

    40 CFR Part 63, Subpart GGGGG, Table 2: Control Levels as Required by § 63.7895(a) for Tanks Managing Remediation Material With a Maximum HAP Vapor Pressure Less Than 76.6 kPa (Protection of Environment, Title 40, Vol. 13, revised as of 2010-07-01).

  3. LISA. A code for safety assessment in nuclear waste disposals program description and user guide

    International Nuclear Information System (INIS)

    Saltelli, A.; Bertozzi, G.; Stanners, D.A.

    1984-01-01

    The code LISA (Long term Isolation Safety Assessment), developed at the Joint Research Centre, Ispra, is a useful tool in the analysis of the hazard due to the disposal of nuclear waste in geological formations. The risk linked to pre-established release scenarios is assessed by the code in terms of dose rate to a maximally exposed individual. The various submodels in the code simulate the system of barriers, both natural and man-made, which are interposed between the contaminants and man. After a description of the code features, a guide for the user is supplied, and then a test case is presented

  4. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs

  5. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    Full Text Available The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is, to start a new development effort using an existing code base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level, forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  6. Hominoid-specific de novo protein-coding genes originating from long non-coding RNAs.

    Directory of Open Access Journals (Sweden)

    Chen Xie

    2012-09-01

    Full Text Available Tinkering with pre-existing genes has long been known as a major way to create new genes. Recently, however, motherless protein-coding genes have been found to have emerged de novo from ancestral non-coding DNAs. How these genes originated is not well addressed to date. Here we identified 24 hominoid-specific de novo protein-coding genes with precise origination timing in vertebrate phylogeny. Strand-specific RNA-Seq analyses were performed in five rhesus macaque tissues (liver, prefrontal cortex, skeletal muscle, adipose, and testis), which were then integrated with public transcriptome data from human, chimpanzee, and rhesus macaque. On the basis of comparing the RNA expression profiles in the three species, we found that most of the hominoid-specific de novo protein-coding genes encoded polyadenylated non-coding RNAs in rhesus macaque or chimpanzee with a similar transcript structure and correlated tissue expression profile. According to the rule of parsimony, the majority of these hominoid-specific de novo protein-coding genes appear to have acquired a regulated transcript structure and expression profile before acquiring coding potential. Interestingly, although the expression profile was largely correlated, the coding genes in human often showed higher transcriptional abundance than their non-coding counterparts in rhesus macaque. The major findings we report in this manuscript are robust and insensitive to the parameters used in the identification and analysis of de novo genes. Our results suggest that at least a portion of long non-coding RNAs, especially those with active and regulated transcription, may serve as a birth pool for protein-coding genes, which are then further optimized at the transcriptional level.

  7. The SWAN coupling code: user's guide

    International Nuclear Information System (INIS)

    Litaudon, X.; Moreau, D.

    1988-11-01

    Coupling of slow waves in a plasma near the lower hybrid frequency is well known, and linear theory with a density step followed by a constant gradient can be used with some confidence. With the aid of the computer code SWAN, which stands for 'Slow Wave Antenna', the following parameters can be numerically calculated: the n-parallel power spectrum, directivity (weighted by the current drive efficiency), reflection coefficients (amplitude and phase) both before and after the E-plane junctions, the scattering matrix at the plasma interface, the scattering matrix at the E-plane junctions, maximum electric fields in secondary waveguides and the locations where they occur, the effect of passive waveguides on each side of the antenna, and the effect of a finite magnetic field in front of the antenna (for a homogeneous plasma). This manual gives basic information on the main assumptions of the coupling theory and on the use and general structure of the code itself. It answers the questions: What are the main assumptions of the physical model? How is a job executed? What are the input parameters of the code? What are the output results and where are they written? (author)

  8. An Agent-Based Model for Zip-Code Level Diffusion of Electric Vehicles and Electricity Consumption in New York City

    Directory of Open Access Journals (Sweden)

    Azadeh Ahkamiraad

    2018-03-01

    Full Text Available Current power grids in many countries are not fully prepared for high electric vehicle (EV penetration, and there is evidence that the construction of additional grid capacity is constantly outpaced by EV diffusion. If this situation continues, then it will compromise grid reliability and cause problems such as system overload, voltage and frequency fluctuations, and power losses. This is especially true for densely populated areas where the grid capacity is already strained with existing old infrastructure. The objective of this research is to identify the zip-code level electricity consumption that is associated with large-scale EV adoption in New York City, one of the most densely populated areas in the United States (U.S.. We fuse the Fisher and Pry diffusion model and Rogers model within the agent-based simulation to forecast zip-code level EV diffusion and the required energy capacity to satisfy the charging demand. The research outcomes will assist policy makers and grid operators in making better planning decisions on the locations and timing of investments during the transition to smarter grids and greener transportation.
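
    A minimal sketch of the Fisher-Pry ingredient and the zip-code roll-up follows; all parameters (half-penetration year, growth rate, household count, per-EV energy) are hypothetical placeholders rather than the paper's calibrated values, and the agent-based Rogers dynamics are not reproduced.

      import numpy as np

      def fisher_pry_fraction(t, t_half, b):
          # Fisher-Pry substitution model: f/(1-f) = exp(b*(t - t_half))
          return 1.0 / (1.0 + np.exp(-b * (t - t_half)))

      years = np.arange(2018, 2031)
      f = fisher_pry_fraction(years, t_half=2027, b=0.45)     # placeholder parameters

      # zip-code roll-up: adopting households x per-EV charging energy -> extra demand
      households, ev_per_hh, kwh_per_ev = 12_000, 0.9, 2_800  # hypothetical inputs
      extra_mwh = f * households * ev_per_hh * kwh_per_ev / 1e3
      print(dict(zip(years.tolist(), np.round(extra_mwh, 1).tolist())))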

  9. TMRBAR power balance code for tandem mirror reactors

    International Nuclear Information System (INIS)

    Blackkfield, D.T.; Campbell, R.; Fenstermacher, M.; Bulmer, R.; Perkins, L.; Peng, Y.K.M.; Reid, R.L.; Wu, K.F.

    1984-01-01

    A revised version of the tandem mirror multi-point code TMRBAR developed at LLNL has been used to examine various reactor designs using MARS-like ''c'' coils. We solve 14 to 16 non-linear equations to obtain the densities, temperatures, plasma potential and magnetic field on axis at the cardinal points. Since ICRH, ECRH, and neutral beams may be used to stabilize the central cell, various combinations of rf and neutral beam powers may satisfy the physics. To select a desired set of physics parameters, we use nonlinear optimization techniques. With these routines, we minimize or maximize a physics variable subject to the physics constraints being satisfied. For example, for a given fusion power we may find the minimum length needed to have an ignited central cell, or the maximum fusion Q. Finally, we have coupled this physics model to the LLNL magnetics-MHD code. This code runs the EFFI magnetic field generator and uses TEBASCO to calculate 1-D MHD equilibria and stability

  10. Computer codes to assess risks from nuclear power plants with LWR's

    International Nuclear Information System (INIS)

    Alonso, A.; Blanco, J.; Francia, L.; Gallego, E.; Morales, L.; Ortega, P.; Torres, C.

    1986-01-01

    The codes used to quantify risks from nuclear power plants are described. For QRA (quantitative risk assessment) level 1, qualitative and quantitative codes are described. Codes to estimate uncertainties, importance and dependent failures are also included. For QRA level 2, the most important codes dealing with thermohydraulics, molten core and aerosol behaviour are described. For QRA level 3, the list includes integrated as well as separate models. Only light water reactors are considered. The presentation is general, but the authors describe in more detail those codes they are more familiar with or the ones they have created through their research effort. (author)

  11. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  12. How to Crack the Sugar Code.

    Science.gov (United States)

    Gabius, H-J

    2017-01-01

    The known ubiquitous presence of glycans fulfils an essential prerequisite for fundamental roles in cell sociology. Since carbohydrates are chemically predestined to form biochemical messages of a maximum of structural diversity in a minimum of space, coding of biological information by sugars is the reason for the broad occurrence of cellular glycoconjugates. Their glycans originate from sophisticated enzymatic assembly and dynamically adaptable remodelling. These signals are read and translated into effects by receptors (lectins). The functional pairing between lectins and their counterreceptor(s) is highly specific, often orchestrated by intimate co-regulation of the receptor, the cognate glycan and the bioactive scaffold (e.g., an integrin). Bottom-up approaches, teaming up synthetic and supramolecular chemistry to prepare fully programmable nanoparticles as binding partners with systematic network analysis of lectins and rational design of variants, enable us to delineate the rules of the sugar code.

  13. Dose assessment around TR-2 reactor due to maximum credible accident

    International Nuclear Information System (INIS)

    Turgut, M. H.; Adalioglu, U.; Aytekin, A.

    2001-01-01

    The revision of the safety analysis report of the TR-2 research reactor was initiated in 1995. The whole accident analysis and the accepted scenario for the maximum credible accident were revised according to the new safety concepts, and the environmental impact of this scenario was assessed. This paper comprises all results of these calculations. The accepted maximum credible accident scenario is the partial blockage of the whole reactor core, which results in the release of 25% of the core inventory. The DOSER code, which uses very conservative modelling of atmospheric distributions, was modified for the assessment calculations. Pasquill conditions based on the local weather observations, topography, and building effects were considered. The thyroid and whole-body doses for 16 sectors and up to 10 km of distance around CNAEM were obtained. Release models were a puff and a prolonged release of two hours' duration. Realistic release fractions for the active isotopes were chosen from the literature

  14. International outage coding system for nuclear power plants. Results of a co-ordinated research project

    International Nuclear Information System (INIS)

    2004-05-01

    The experience obtained in each individual plant constitutes the most relevant source of information for improving its performance. However, experience at the utility, country and worldwide levels is also extremely valuable, because there are limitations to what can be learned from in-house experience. But learning from the experience of others is admittedly difficult if the information is not harmonized. Therefore, such systems should be standardized and applicable to all types of reactors, satisfying the needs of the broad set of nuclear power plant operators worldwide and allowing experience to be shared internationally. To cope with the considerable amount of information gathered from nuclear power plants worldwide, it is necessary to codify the information, facilitating the identification of causes of outages and of system or component failures. Therefore, the IAEA established a Co-ordinated Research Project (CRP) on the International Outage Coding System to develop a general, internationally applicable system for coding nuclear power plant outages, providing nuclear utilities worldwide with a standardized tool for reporting outage information. This TECDOC summarizes the results of this CRP and provides information for transformation of the historical outage data into the new coding system, taking into consideration the existing systems for coding nuclear power plant events (WANO, IAEA-IRS and IAEA PRIS) but avoiding duplication of efforts to the maximum possible extent

  15. Application of the French codes to the pressurized thermal shocks assessment

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Mingya; Wang, Rong Shan; Yu, Weiwei; Lu, Feng; Zhang, Guo Dong; Xue, Fei; Chen, Zhilin [Suzhou Nuclear Power Research Institute, Life Management Center, Suzhou (China); Qian, Guian [Paul Scherrer Institute, Nuclear Energy and Safety Department, Villigen (Switzerland); Shi, Jinhua [Amec Foster Wheeler, Clean Energy Department, Gloucester (United Kingdom)

    2016-12-15

    The integrity of a reactor pressure vessel (RPV) related to pressurized thermal shocks (PTSs) has been extensively studied. This paper introduces an integrity assessment of an RPV subjected to a PTS transient based on the French codes. In the USA, the 'screening criterion' for maximum allowable embrittlement of RPV material is developed based on probabilistic fracture mechanics. However, in the French RCC-M and RSE-M codes, which are developed based on deterministic fracture mechanics, there is no 'screening criterion'. In this paper, the methodology in the RCC-M and RSE-M codes, which is used for PTS analysis, is first discussed. The bases of the French codes are compared with the ASME and FAVOR codes. A case study is also presented. The results show that the method in the RCC-M code that accounts for the influence of cladding on the stress intensity factor (SIF) may be nonconservative. The SIF almost doubles if the weld residual stress is considered. The approaches included in the codes differ in many aspects, which may result in significant differences in the assessment results. Therefore, homogenization of the codes for the long-term operation of nuclear power plants is needed.

  16. Application of the French Codes to the Pressurized Thermal Shocks Assessment

    Directory of Open Access Journals (Sweden)

    Mingya Chen

    2016-12-01

    Full Text Available The integrity of a reactor pressure vessel (RPV) related to pressurized thermal shocks (PTSs) has been extensively studied. This paper introduces an integrity assessment of an RPV subjected to a PTS transient based on the French codes. In the USA, the “screening criterion” for maximum allowable embrittlement of RPV material is developed based on probabilistic fracture mechanics. However, in the French RCC-M and RSE-M codes, which are developed based on deterministic fracture mechanics, there is no “screening criterion”. In this paper, the methodology in the RCC-M and RSE-M codes, which is used for PTS analysis, is first discussed. The bases of the French codes are compared with the ASME and FAVOR codes. A case study is also presented. The results show that the method in the RCC-M code that accounts for the influence of cladding on the stress intensity factor (SIF) may be nonconservative. The SIF almost doubles if the weld residual stress is considered. The approaches included in the codes differ in many aspects, which may result in significant differences in the assessment results. Therefore, homogenization of the codes for the long-term operation of nuclear power plants is needed.

  17. Application of the French codes to the pressurized thermal shocks assessment

    International Nuclear Information System (INIS)

    Chen, Mingya; Wang, Rong Shan; Yu, Weiwei; Lu, Feng; Zhang, Guo Dong; Xue, Fei; Chen, Zhilin; Qian, Guian; Shi, Jinhua

    2016-01-01

    The integrity of a reactor pressure vessel (RPV) related to pressurized thermal shocks (PTSs) has been extensively studied. This paper introduces an integrity assessment of an RPV subjected to a PTS transient based on the French codes. In the USA, the 'screening criterion' for maximum allowable embrittlement of RPV material is developed based on probabilistic fracture mechanics. However, in the French RCC-M and RSE-M codes, which are developed based on deterministic fracture mechanics, there is no 'screening criterion'. In this paper, the methodology in the RCC-M and RSE-M codes, which is used for PTS analysis, is first discussed. The bases of the French codes are compared with the ASME and FAVOR codes. A case study is also presented. The results show that the method in the RCC-M code that accounts for the influence of cladding on the stress intensity factor (SIF) may be nonconservative. The SIF almost doubles if the weld residual stress is considered. The approaches included in the codes differ in many aspects, which may result in significant differences in the assessment results. Therefore, homogenization of the codes for the long-term operation of nuclear power plants is needed
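
    For orientation, the sketch below evaluates the textbook surface-flaw stress intensity factor K_I = F * sigma * sqrt(pi * a); the RCC-M treatment of cladding influence and weld residual stress discussed above is considerably more involved than this single-term estimate.

      import math

      def k_i_mpa_sqrt_m(sigma_mpa, a_m, F=1.12):
          # K_I = F * sigma * sqrt(pi * a); F = 1.12 is the classical shallow
          # surface-flaw correction factor (geometry-dependent in general)
          return F * sigma_mpa * math.sqrt(math.pi * a_m)

      print(k_i_mpa_sqrt_m(200.0, 0.01))   # ~39.7 MPa*sqrt(m) for a 10 mm flaw at 200 MPa

    Doubling the effective stress, as a crude stand-in for adding a comparable weld residual stress, roughly doubles K_I, which is the scale of effect the case study reports.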

  18. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    Science.gov (United States)

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  19. Self-reported sleep disturbances due to railway noise: exposure-response relationships for nighttime equivalent and maximum noise levels.

    Science.gov (United States)

    Aasvang, Gunn Marit; Moum, Torbjorn; Engdahl, Bo

    2008-07-01

    The objective of the present survey was to study self-reported sleep disturbances due to railway noise with respect to the nighttime equivalent noise level (L(p,A,eq,night)) and the maximum noise level (L(p,A,max)). A sample of 1349 people in and around Oslo, Norway, exposed to railway noise was studied in a cross-sectional survey to obtain data on sleep disturbances, sleep problems due to noise, and personal characteristics including noise sensitivity. Individual noise exposure levels were determined outside the bedroom facade, at the most-exposed facade, and inside the respondents' bedrooms. The exposure-response relationships were analyzed using logistic regression models, controlling for possible modifying factors including the number of noise events (train pass-by frequency). L(p,A,eq,night) and L(p,A,max) were significantly correlated, and the proportion of reported noise-induced sleep problems increased as both L(p,A,eq,night) and L(p,A,max) increased. Noise sensitivity, type of bedroom window, and pass-by frequency were significant factors affecting noise-induced sleep disturbances, in addition to the noise exposure level. Because about half of the study population did not use a bedroom on the most-exposed side of the house, the exposure-response curve obtained using noise levels for the most-exposed facade underestimated noise-induced sleep disturbance for those who actually have their bedroom at the most-exposed facade.
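
    A self-contained sketch of this style of exposure-response modelling follows, using synthetic data and a hand-rolled Newton-Raphson logistic fit; the coefficients are invented for illustration and are not the survey's estimates.

      import numpy as np

      rng = np.random.default_rng(1)

      # synthetic data: Lnight (dB) and noise sensitivity -> reported sleep problems
      n = 1349
      lnight = rng.uniform(30, 65, n)
      sensitive = rng.integers(0, 2, n).astype(float)
      logit = -7.0 + 0.10 * lnight + 0.9 * sensitive        # assumed true coefficients
      y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

      # logistic regression via Newton-Raphson (IRLS)
      X = np.c_[np.ones(n), lnight, sensitive]
      w = np.zeros(3)
      for _ in range(25):
          p = 1 / (1 + np.exp(-X @ w))
          H = X.T @ (X * (p * (1 - p))[:, None])            # Hessian of the log-likelihood
          w += np.linalg.solve(H, X.T @ (y - p))
      print("intercept, per-dB, sensitivity effects:", np.round(w, 3))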

  20. A Denotational Semantics for Communicating Unstructured Code

    Directory of Open Access Journals (Sweden)

    Nils Jähnig

    2015-03-01

    Full Text Available An important property of programming language semantics is that they should be compositional. However, unstructured low-level code contains goto-like commands, making it hard to define a compositional semantics. In this paper, we follow the ideas of Saabas and Uustalu to structure low-level code. This gives us the possibility to define a compositional denotational semantics based on least fixed points, allowing the use of inductive verification methods. We capture the semantics of communication using finite traces similar to the denotations of CSP. In addition, we examine properties of this semantics and give an example that demonstrates reasoning about communication and jumps. With this semantics, we lay the foundations for a proof calculus that captures both the semantics of unstructured low-level code and communication.

  1. Parallelization characteristics of a three-dimensional whole-core code DeCART

    International Nuclear Information System (INIS)

    Cho, J. Y.; Joo, H.K.; Kim, H. Y.; Lee, J. C.; Jang, M. H.

    2003-01-01

    Neutron transport calculation for a three-dimensional whole core requires not only a huge amount of computing time but also huge memory. Therefore, whole-core codes such as DeCART need both parallel computation and distributed memory capabilities. This paper implements such parallel capabilities, based on MPI grouping and memory distribution, in the DeCART code, and then evaluates the performance by solving the C5G7 three-dimensional benchmark and a simplified three-dimensional SMART core problem. In the C5G7 problem with 24 CPUs, a maximum speedup of 22 is obtained on an IBM Regatta machine and 21 on a LINUX cluster for the MOC kernel, which indicates good parallel performance of the DeCART code. The simplified SMART problem, which needs about 11 GBytes of memory on a single processor, requires only about 940 MBytes per processor in parallel, which means that the DeCART code can now solve large core problems on affordable LINUX clusters
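
    The reported speedups translate into the parallel efficiencies below (simple arithmetic on the quoted figures):

      # parallel efficiency implied by the quoted MOC-kernel speedups on 24 CPUs
      for machine, speedup in {"IBM Regatta": 22, "LINUX cluster": 21}.items():
          print(f"{machine}: efficiency = {speedup / 24:.0%}")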

  2. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  3. PEAK-TO-AVERAGE POWER RATIO REDUCTION USING CODING AND HYBRID TECHNIQUES FOR OFDM SYSTEM

    Directory of Open Access Journals (Sweden)

    Bahubali K. Shiragapur

    2016-03-01

    Full Text Available In this article, the research work investigated is based on error correction coding techniques used to reduce the undesirable Peak-to-Average Power Ratio (PAPR) quantity. The Golay code (24, 12), Reed-Muller code (16, 11), Hamming code (7, 4) and a Hybrid technique (a combination of signal scrambling and signal distortion) proposed by us are used as the coding techniques. The simulation results show that the Hybrid technique reduces PAPR significantly as compared to the conventional and modified selective mapping techniques. The simulation results are validated through statistical properties: the proposed technique's autocorrelation value is maximal, indicating the reduction in PAPR. Symbol preference based on Hamming distance is the key idea used to reduce PAPR. The simulation results are discussed in detail in this article.
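
    The quantity being reduced can be computed directly: the sketch below measures the PAPR of a BPSK-modulated OFDM symbol via an oversampled IFFT. The block length of 16 and 4x oversampling are arbitrary choices, and the selection of low-PAPR codewords from a Golay/Reed-Muller/Hamming codebook, which is what the article studies, is not reproduced here.

      import numpy as np

      def papr_db(codeword_bits, oversample=4):
          # PAPR of one OFDM symbol whose subcarriers carry BPSK-modulated code bits
          symbols = 1.0 - 2.0 * np.asarray(codeword_bits, dtype=float)   # 0/1 -> +1/-1
          n = len(symbols) * oversample
          x = np.fft.ifft(symbols, n)              # oversampled time-domain signal
          p = np.abs(x) ** 2
          return 10 * np.log10(p.max() / p.mean())

      print(papr_db([0] * 16))                     # worst case: all subcarriers in phase
      print(papr_db(np.random.default_rng(2).integers(0, 2, 16)))

    The all-zeros word hits the worst case (PAPR equal to the number of subcarriers, about 12 dB for 16 carriers), which is exactly the kind of codeword a PAPR-aware code design excludes.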

  4. RCC-E a Design Code for I and C and Electrical Systems

    International Nuclear Information System (INIS)

    Haure, J.M.

    2015-01-01

    The paper deals with the stakes and strengths of the RCC-E code, which is applicable to electrical and instrumentation and control (I and C) systems and components performing safety class functions. The document interlaces specifications between owners, safety authorities, designers and suppliers with IAEA safety guides and IEC standards. The code is periodically updated and published by the French Society for Design and Construction Rules for Nuclear Island Components (AFCEN). The code is compliant with third generation PWR nuclear islands and aims to suit national regulations, as needed, through a companion document. The feedback experience of Fukushima and the licensing of the UKEPR in the framework of the Generic Design Assessment are lessons learnt that should be considered in upgrading the code. The code gathers a set of requirements and relevant good practices from several PWR design and construction projects related to electrical and I and C systems and components, and electrical engineering documents dealing with system, equipment and layout design. A comprehensive statement, including some recent developments, is provided about: - offsite and onsite source requirements, including sources dealing with the total loss of offsite and main onsite sources; - highlights of a relevant protection level against high-frequency disturbances emitted by lightning strokes; - interface data used by any supplier or designer, such as site data, room temperatures, equipment maximum design temperature, alternating-current and direct-current electrical network voltage and frequency variation ranges, and environmental conditions decoupling data; - the Environmental Qualification process, covering normal, mild (earthquake-resistant), harsh and severe accident ambient conditions. A tailor-made approach based on families, which are defined as combinations of mission time, duration and abnormal conditions (pressure, temperature, radiation), enables designers to better cope with Environmental Qualification.

  5. Tandem Mirror Reactor Systems Code (Version I)

    International Nuclear Information System (INIS)

    Reid, R.L.; Finn, P.A.; Gohar, M.Y.

    1985-09-01

    A computer code was developed to model a Tandem Mirror Reactor. This is the first Tandem Mirror Reactor model to couple, in detail, the highly linked physics, magnetics, and neutronic analysis into a single code. This report describes the code architecture, provides a summary description of the modules comprising the code, and includes an example execution of the Tandem Mirror Reactor Systems Code. Results from this code for two sensitivity studies are also included. These studies are: (1) to determine the impact of center cell plasma radius, length, and ion temperature on reactor cost and performance at constant fusion power; and (2) to determine the impact of reactor power level on cost

  6. Evaluating the benefits of commercial building energy codes and improving federal incentives for code adoption.

    Science.gov (United States)

    Gilbraith, Nathaniel; Azevedo, Inês L; Jaramillo, Paulina

    2014-12-16

    The federal government has the goal of decreasing commercial building energy consumption and pollutant emissions by incentivizing the adoption of commercial building energy codes. Quantitative estimates of code benefits at the state level that can inform the size and allocation of these incentives are not available. We estimate the state-level climate, environmental, and health benefits (i.e., social benefits) and reductions in energy bills (private benefits) of a more stringent code (ASHRAE 90.1-2010) relative to a baseline code (ASHRAE 90.1-2007). We find that reductions in site energy use intensity range from 93 MJ/m(2) of new construction per year (California) to 270 MJ/m(2) of new construction per year (North Dakota). Total annual benefits from more stringent codes total $506 million for all states, where $372 million are from reductions in energy bills, and $134 million are from social benefits. These total benefits range from $0.6 million in Wyoming to $49 million in Texas. Private benefits range from $0.38 per square meter in Washington State to $1.06 per square meter in New Hampshire. Social benefits range from $0.2 per square meter annually in California to $2.5 per square meter in Ohio. Reductions in human/environmental damages and future climate damages account for nearly equal shares of social benefits.
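
    The shape of the state-level calculation can be sketched in a few lines; every input below (floor area, savings intensity, prices, damage rates) is an illustrative placeholder, not the paper's data.

      # back-of-the-envelope state-level benefit estimate (all inputs illustrative)
      new_floor_m2 = 2.0e6          # annual new commercial construction in one state
      site_savings_mj = 150.0       # MJ per m2-year saved by the stricter code
      kwh_saved = new_floor_m2 * site_savings_mj / 3.6
      private_usd = kwh_saved * 0.10    # energy-bill savings at $0.10/kWh
      social_usd = kwh_saved * 0.03     # assumed climate + health damages avoided per kWh
      print(f"{kwh_saved / 1e6:.1f} GWh/yr -> ${private_usd / 1e6:.1f}M private, "
            f"${social_usd / 1e6:.1f}M social")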

  7. Recommendations for codes and standards to be used for design and fabrication of high level waste canister

    International Nuclear Information System (INIS)

    Bermingham, A.J.; Booker, R.J.; Booth, H.R.; Ruehle, W.G.; Shevekov, S.; Silvester, A.G.; Tagart, S.W.; Thomas, J.A.; West, R.G.

    1978-01-01

    This study identifies codes, standards, and regulatory requirements for developing design criteria for high-level waste (HLW) canisters for commercial operation. It has been determined that the canister should be designed as a pressure vessel without provision for any overpressure protection devices. It is recommended that the HLW canister be designed and fabricated to the requirements of the ASME Section III Code, Division 1 rules, for Code Class 3 components. Identification of other applicable industry and regulatory guides and standards is provided in this report. Requirements for the Design Specification are found in the ASME Section III Code. It is recommended that design verification be conducted principally with prototype testing which encompasses normal and accident service conditions during all phases of the canister life. The adequacy of existing quality assurance and licensing standards for the canister was investigated. One of the recommendations derived from this study is a requirement that the canister be N stamped. In addition, acceptance standards for the HLW should be established and the waste qualified to those standards before the canister is sealed. A preliminary investigation of the use of an overpack for the canister has been made, and it is concluded that the use of an overpack, as an integral part of the overall canister design, is undesirable, both from a design and an economics standpoint. However, use of shipping cask liners and overpack-type containers at the Federal repository may make the canister and HLW management safer and more cost effective. There are several possible concepts for canister closure design. These concepts can be adapted to the canister with or without an overpack. A remote seal weld closure is considered to be one of the most suitable closure methods; however, mechanical seals should also be investigated

  8. Imaging VLBI polarimetry data from Active Galactic Nuclei using the Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Coughlan Colm P.

    2013-12-01

    Full Text Available Mapping the relativistic jets emanating from AGN requires the use of a deconvolution algorithm to account for the effects of missing baseline spacings. The CLEAN algorithm is the most commonly used algorithm in VLBI imaging today and is suitable for imaging polarisation data. The Maximum Entropy Method (MEM) is presented as an alternative with some advantages over the CLEAN algorithm, including better spatial resolution and a more rigorous and unbiased approach to deconvolution. We have developed a MEM code suitable for deconvolving VLBI polarisation data. Monte Carlo simulations investigating the performance of CLEAN and the MEM code on a variety of source types are being carried out. Real polarisation (VLBA) data taken at multiple wavelengths have also been deconvolved using MEM, and several of the resulting polarisation and Faraday rotation maps are presented and discussed.
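
    A one-dimensional toy of MEM deconvolution follows: gradient ascent on the entropy S = -sum f ln(f/m) penalized by a chi-squared data term, with positivity enforced at each step. The beam shape, noise level, step size and Lagrange multiplier are arbitrary, and real VLBI imaging works on interferometric visibilities rather than a convolved image as here.

      import numpy as np

      rng = np.random.default_rng(4)

      # synthetic sky, beam, and "dirty" observation
      true = np.zeros(64); true[30:34] = [1.0, 3.0, 3.0, 1.0]
      beam = np.exp(-0.5 * (np.arange(-8, 9) / 2.5) ** 2)
      dirty = np.convolve(true, beam, mode="same") + 0.05 * rng.normal(size=64)

      # gradient ascent on S - lam*chi^2, with positivity (a built-in MEM advantage)
      f, m, lam, step = np.full(64, 0.1), 0.1, 5.0, 0.002
      for _ in range(3000):
          resid = np.convolve(f, beam, mode="same") - dirty
          grad = -(np.log(f / m) + 1.0) \
                 - 2.0 * lam * np.convolve(resid, beam[::-1], mode="same")
          f = np.clip(f + step * grad, 1e-8, None)
      print("recovered peak near channel", int(np.argmax(f)))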

  9. An Efficient Code-Timing Estimator for DS-CDMA Systems over Resolvable Multipath Channels

    Directory of Open Access Journals (Sweden)

    Jian Li

    2005-04-01

    Full Text Available We consider the problem of training-based code-timing estimation for asynchronous direct-sequence code-division multiple-access (DS-CDMA) systems. We propose a modified large-sample maximum-likelihood (MLSML) estimator that provides closed-form code-timing estimates for DS-CDMA systems over resolvable multipath channels. Simulation results show that MLSML can be used to provide a high correct acquisition probability and a high estimation accuracy. Simulation results also show that MLSML has very good near-far resistance because it employs a data model similar to that used in adaptive array processing, where strong interferences can be suppressed.
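
    For context, the sketch below performs generic training-based code acquisition with a sliding correlator, picking the delay with the largest correlation peak; it is not the MLSML estimator, which additionally handles multipath and near-far interference in closed form.

      import numpy as np

      rng = np.random.default_rng(3)

      code = 1.0 - 2.0 * rng.integers(0, 2, 127)      # +/-1 spreading code (training)
      true_delay = 37
      rx = np.roll(np.tile(code, 4), true_delay)      # periodic transmission, delayed
      rx = rx + 0.8 * rng.normal(size=rx.size)        # chip-rate noise

      # slide the known code over the received chips and pick the correlation peak
      metric = [abs(np.dot(rx[d:d + code.size], code)) for d in range(code.size)]
      print(f"estimated code timing: {int(np.argmax(metric))} chips (true {true_delay})")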

  10. Variable Parameter Nonlinear Control for Maximum Power Point Tracking Considering Mitigation of Drive-train Load

    Institute of Scientific and Technical Information of China (English)

    Zaiyu Chen; Minghui Yin; Lianjun Zhou; Yaping Xia; Jiankun Liu; Yun Zou

    2017-01-01

    Since mechanical loads exert a significant influence on the life span of wind turbines, the reduction of transient load on the drive-train shaft has received more attention when implementing a maximum power point tracking (MPPT) controller. Moreover, a trade-off between the efficiency of wind energy extraction and the load level of the drive-train shaft becomes a key issue. However, for the existing control strategies based on nonlinear models of wind turbines, the MPPT efficiencies are improved at the cost of intensive fluctuation of generator torque and a significant increase of transient load on the drive-train shaft. Hence, in this paper, a nonlinear controller with variable parameter is proposed for improving MPPT efficiency and mitigating transient load on the drive-train simultaneously. Then, simulations on the FAST (Fatigue, Aerodynamics, Structures, and Turbulence) code and experiments on the wind turbine simulator (WTS) based test bench are presented to verify the efficiency improvement of the proposed control strategy with less cost of drive-train load.

  12. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Volume 1. Guideline approach

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, C.S.; Cole, C.R.

    1985-05-01

    This document was written for the National Low-Level Waste Management Program to provide guidance for managers and site operators who need to select ground-water transport codes for assessing shallow-land burial site performance. The guidance given in this report also serves the needs of applications-oriented users who work under the direction of a manager or site operator. The guidelines are published in two volumes designed to support the needs of users having different technical backgrounds. An executive summary, published separately, gives managers and site operators an overview of the main guideline report. This volume includes specific recommendations for decision-making managers and site operators on how to use these guidelines. More detailed discussions of the code selection approach are also provided. 242 refs., 6 figs.

  14. Content Layer progressive Coding of Digital Maps

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Ole Riis

    2002-01-01

    A new lossless context-based method is presented for content-progressive coding of limited-bits/pixel images, such as maps, company logos, etc., common on the World Wide Web. Progressive encoding is achieved by encoding the image in content layers based on color level or other predefined information. Information from already coded layers is used when coding subsequent layers. This approach is combined with efficient template-based context bi-level coding, context-collapsing methods for multi-level images, and arithmetic coding. Relative pixel patterns are used to collapse contexts. Expressions for calculating the resulting number of contexts are given. The new methods outperform existing schemes for coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 50-70% on our layered map test images.
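
    As a rough illustration of template-based context modelling, the sketch below forms a context index for each pixel of a bi-level layer from a small causal template and accumulates per-context symbol counts, the statistics an adaptive arithmetic coder would condition on. The template shape and size are illustrative, not the paper's.

```python
import numpy as np

def template_context_counts(img):
    """Per-context symbol counts for a bi-level layer, using a 3-pixel
    causal template (north-west, north, west neighbours; zeros outside
    the image).  An adaptive arithmetic coder would turn these counts
    into conditional probabilities; the paper's templates are larger.
    """
    h, w = img.shape
    pad = np.pad(img, 1)[: h + 1, : w + 1]      # causal zero border
    counts = np.zeros((8, 2), dtype=int)        # 2^3 contexts x 2 symbols
    for y in range(h):
        for x in range(w):
            ctx = (pad[y, x] << 2) | (pad[y, x + 1] << 1) | pad[y + 1, x]
            counts[ctx, img[y, x]] += 1
    return counts

print(template_context_counts(np.array([[0, 1, 1], [1, 1, 0], [0, 1, 0]])))
```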

  15. OPAL reactor calculations using the Monte Carlo code serpent

    Energy Technology Data Exchange (ETDEWEB)

    Ferraro, Diego; Villarino, Eduardo [Nuclear Engineering Dept., INVAP S.E., Rio Negro (Argentina)

    2012-03-15

    In the present work the Monte Carlo cell code Serpent v1.1.14, developed by VTT, is used to model the MTR fuel assemblies (FA) and control rods (CR) of the OPAL (Open Pool Australian Light-water) reactor in order to obtain few-group constants with burnup dependence, to be used in the already developed reactor core models. These core calculations are performed using the CITVAP 3-D diffusion code, a well-known reactor code based on CITATION. Subsequently, the results are compared with those obtained by the deterministic calculation line used by INVAP, which uses the collision-probability Condor cell code to obtain few-group constants. Finally, the results are compared with experimental data obtained from reactor information for several operation cycles. As a result, several evaluations are performed, including a code-to-code comparison at the cell and core levels and a calculation-experiment comparison at the core level, in order to evaluate the actual capabilities of the Serpent code. (author)

  16. Spectral amplitude coding OCDMA using AND subtraction technique.

    Science.gov (United States)

    Hasoon, Feras N; Aljunid, S A; Samad, M D A; Abdullah, Mohamad Khazani; Shaari, Sahbudin

    2008-03-20

    An optical decoding technique is proposed for spectral-amplitude-coding optical code division multiple access, namely, the AND subtraction technique. The theory is elaborated, and experiments have been carried out comparing a double-weight code against an existing code, Hadamard. We have shown that the AND subtraction technique gives better bit-error-rate performance than the conventional complementary subtraction technique against the received power level.
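
    The following sketch illustrates the balanced-subtraction principle that SAC-OCDMA detectors of this kind rely on: an interferer's contribution to the two correlator branches cancels, while the intended user's code weight survives. The paper's AND subtraction decoder differs in how the second branch is formed; this generic form and all names are illustrative.

```python
import numpy as np

def sac_decision(r, code, lam):
    """Generic SAC-OCDMA balanced-subtraction decision statistic.

    Z = r.X - s * r.Xbar with s = lam / (w - lam), where w is the code
    weight and lam the in-phase cross-correlation between user codes.
    An interferer contributes lam to r.X and (w - lam) to r.Xbar, so
    it cancels; the intended user's chips contribute w.  Shown for
    illustration only -- not the paper's exact AND subtraction branch.
    """
    w = int(code.sum())
    s = lam / (w - lam)
    return r @ code - s * (r @ (1 - code))

# two length-7, weight-3 codes with pairwise overlap lam = 1
X = np.array([1, 1, 1, 0, 0, 0, 0])
Y = np.array([1, 0, 0, 1, 1, 0, 0])
r = 1.0 * X + 1.0 * Y              # both users active, unit power
print(sac_decision(r, X, lam=1))   # -> 3.0 (weight of X; Y cancelled)
```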

  17. Aztheca Code; Codigo Aztheca

    Energy Technology Data Exchange (ETDEWEB)

    Quezada G, S.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico); Centeno P, J.; Sanchez M, H., E-mail: sequga@gmail.com [UNAM, Facultad de Ingenieria, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)

    2017-09-15

    This paper presents the Aztheca code, which is formed by mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  18. Different Stimuli, Different Spatial Codes: A Visual Map and an Auditory Rate Code for Oculomotor Space in the Primate Superior Colliculus

    Science.gov (United States)

    Lee, Jungah; Groh, Jennifer M.

    2014-01-01

    Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior. PMID:24454779

  19. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2 s, 6 s and 18 s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, multiple-coding and single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5 s, the geometric mean of 2 s and 6 s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  20. Instantly Decodable Network Coding: From Centralized to Device-to-Device Communications

    KAUST Repository

    Douik, Ahmed S.

    2015-05-01

    From its introduction to its quindecennial, network coding has built a strong reputation for enhancing the packet recovery process and achieving maximum information flow in both wired and wireless networks. Traditional studies focused on optimizing the throughput of the network by proposing complex schemes that achieve optimal delay. With the shift toward distributed computing at mobile devices, throughput and complexity both become critical factors that affect the efficiency of a coding scheme. Instantly decodable network coding imposed itself as a new paradigm in network coding that trades off these two aspects. This paper presents a survey of instantly decodable network coding schemes proposed in the literature. The various schemes are identified, categorized and evaluated. Two categories can be distinguished, namely, the conventional centralized schemes and the distributed or cooperative schemes. For each scheme, the comparison is carried out in terms of reliability, performance, complexity and packet selection methodology. Although performance is generally inversely proportional to computational complexity, numerous schemes successful from both the performance and complexity viewpoints are identified.
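
    As a minimal illustration of the packet-selection step that centralized IDNC schemes optimize, the sketch below brute-forces the XOR combination that is instantly decodable for the most receivers. Real schemes use, e.g., maximum-clique searches on an IDNC graph rather than enumeration; the data structures here are hypothetical.

```python
from itertools import combinations

def instantly_decodable(S, has, wants):
    """XOR(S) is instantly decodable for a receiver iff exactly one
    packet of S is still wanted and all the others are already held."""
    missing = [p for p in S if p in wants]
    return len(missing) == 1 and all(p in has for p in S if p not in wants)

def greedy_idnc(packets, receivers, max_size=3):
    """Pick the XOR combination that serves the most receivers at once
    (brute force over small combinations; centralized IDNC schemes
    typically solve this via a maximum clique on an 'IDNC graph')."""
    best, best_count = None, -1
    for k in range(1, max_size + 1):
        for S in combinations(packets, k):
            n = sum(instantly_decodable(S, h, w) for h, w in receivers)
            if n > best_count:
                best, best_count = S, n
    return best, best_count

# receivers as (has, wants): XOR of packets 1 and 2 serves two at once
receivers = [({1}, {2}), ({2}, {1}), ({1, 2}, {3})]
print(greedy_idnc([1, 2, 3], receivers))   # -> ((1, 2), 2)
```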

  2. Environmental remediation of high-level nuclear waste in geological repository. Modified computer code creates ultimate benchmark in natural systems

    International Nuclear Information System (INIS)

    Peter, Geoffrey J.

    2011-01-01

    Isolation of high-level nuclear waste in permanent geological repositories has been a major concern for over 30 years, owing to the migration of dissolved radionuclides reaching the water table (10,000-year compliance period) as water moves through the repository and the surrounding area. Repository designs are based on mathematical models that allow for long-term geological phenomena and involve many approximations; however, experimental verification of long-term processes is impossible. Countries must determine if geological disposal is adequate for permanent storage. Many countries have extensively studied different aspects of safely confining the highly radioactive waste in an underground repository based on the unique geological composition at their selected repository location. This paper discusses two computer codes developed by various countries to study the coupled thermal, mechanical, and chemical processes in these environments, and the migration of radionuclides. Further, this paper presents the results of a case study of the Magma-hydrothermal (MH) computer code, modified by the author, applied to nuclear waste repository analysis. The MH code was verified by simulating natural systems, thus creating the ultimate benchmark. This approach is based on processes, similar to those expected near waste repositories, that currently occur in natural systems. (author)

  3. Microprocessor-controlled step-down maximum-power-point tracker for photovoltaic systems

    Science.gov (United States)

    Mazmuder, R. K.; Haidar, S.

    1992-12-01

    An efficient maximum power point tracker (MPPT) has been developed for use with a photovoltaic (PV) array and a load that requires a lower operating voltage than the PV array voltage. The MPPT makes the PV array operate at the maximum power point (MPP) under all insolation and temperature conditions, ensuring that the maximum amount of available PV power is delivered to the load. The performance of the MPPT has been studied under different insolation levels.
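
    The record does not detail the tracking algorithm, so the sketch below shows the classic perturb-and-observe rule as a representative MPPT method; the step size and names are illustrative.

```python
def perturb_and_observe(v, p, v_prev, p_prev, dv=0.5):
    """One step of the classic perturb-and-observe MPPT rule: if the
    last perturbation raised the measured PV power, keep moving the
    operating voltage the same way; otherwise reverse.  The returned
    value is the next voltage command for the step-down converter."""
    direction = 1.0 if (p - p_prev) * (v - v_prev) > 0 else -1.0
    return v + direction * dv
```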

  4. Experiment and analyses on intentional secondary-side depressurization during PWR small break LOCA. Effects of depressurization rate and break area on core liquid level behavior

    International Nuclear Information System (INIS)

    Asaka, Hideaki; Ohtsu, Iwao; Anoda, Yoshinari; Kukita, Yutaka

    1997-01-01

    The effects of the secondary-side depressurization rate and break area on the core liquid level behavior during a PWR small-break LOCA were studied using experimental data from the Large Scale Test Facility (LSTF) and by using analysis results obtained with a JAERI modified version of the RELAP5/MOD3 code. The LSTF is a 1/48 volumetrically scaled full-height integral model of a Westinghouse-type PWR. The code reproduced the thermal-hydraulic responses, observed in the experiment, for important parameters such as the primary and secondary side pressures and core liquid level behavior. The sensitivity of the core minimum liquid level to the depressurization rate and break area was studied by using the code assessed above. It was found that the core liquid level took a local minimum value for a given break area as a function of secondary side depressurization rate. Further efforts are, however, needed to quantitatively define the maximum core temperature as a function of break area and depressurization rate. (author)

  5. Multimedia signal coding and transmission

    CERN Document Server

    Ohm, Jens-Rainer

    2015-01-01

    This textbook covers the theoretical background of one- and multidimensional signal processing, statistical analysis and modelling, coding and information theory with regard to the principles and design of image, video and audio compression systems. The theoretical concepts are augmented by practical examples of algorithms for multimedia signal coding technology, and related transmission aspects. On this basis, principles behind multimedia coding standards, including most recent developments like High Efficiency Video Coding, can be well understood. Furthermore, potential advances in future development are pointed out. Numerous figures and examples help to illustrate the concepts covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by exercises. The book is also a self-contained introduction both for researchers and developers of multimedia compression systems in industry.

  6. Description of the COMRADEX code

    International Nuclear Information System (INIS)

    Spangler, G.W.; Boling, M.; Rhoades, W.A.; Willis, C.A.

    1967-01-01

    The COMRADEX Code is discussed briefly and instructions are provided for the use of the code. The subject code was developed for calculating doses from hypothetical power reactor accidents. It permits the user to analyze four successive levels of containment with time-varying leak rates. Filtration, cleanup, fallout and plateout in each containment shell can also be analyzed. The doses calculated include the direct gamma dose from the containment building, the internal doses to as many as 14 organs including the thyroid, bone, lung, etc. from inhaling the contaminated air, and the external gamma doses from the cloud. While further improvements are needed, such as a provision for calculating doses from fallout, rainout and washout, the present code capabilities have a wide range of applicability for reactor accident analysis

  7. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow rate in parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure-drop or flow-rate conditions, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code; its complement, FLID, is a one-channel, two-dimensional code. (authors)

  8. Reduced-Rank Chip-Level MMSE Equalization for the 3G CDMA Forward Link with Code-Multiplexed Pilot

    Directory of Open Access Journals (Sweden)

    Goldstein J Scott

    2002-01-01

    Full Text Available This paper deals with synchronous direct-sequence code-division multiple access (CDMA) transmission using orthogonal channel codes in frequency-selective multipath, motivated by the forward link in 3G CDMA systems. The chip-level minimum mean square error (MMSE) estimate of the (multiuser) synchronous sum signal transmitted by the base, followed by a correlate-and-sum, has been shown to perform very well in saturated systems compared to a Rake receiver. In this paper, we present reduced-rank, chip-level MMSE estimation based on the multistage nested Wiener filter (MSNWF). We show that, for the case of a known channel, only a small number of stages of the MSNWF is needed to achieve near full-rank MSE performance over a practical signal-to-noise ratio (SNR) range. This holds true even for an edge-of-cell scenario, where two base stations contribute near equal-power signals, as well as for the single base station case. We then utilize the code-multiplexed pilot channel to train the MSNWF coefficients and show that an adaptive MSNWF operating in a very low rank subspace performs slightly better than full-rank recursive least squares (RLS) and significantly better than least mean squares (LMS). An important advantage of the MSNWF is that it can be implemented in a lattice structure, which involves significantly less computation than RLS. We also present structured MMSE equalizers that exploit the estimate of the multipath arrival times and the underlying channel structure to project the data vector onto a much lower dimensional subspace. Specifically, due to the sparseness of high-speed CDMA multipath channels, the channel vector lies in the subspace spanned by a small number of columns of the pulse-shaping filter convolution matrix. We demonstrate that the performance of these structured low-rank equalizers is much superior to unstructured equalizers in terms of convergence speed and error rates.

  9. Analysis of the Behavior of CAREM-25 Fuel Rods Using Computer Code BACO

    International Nuclear Information System (INIS)

    Estevez, Esteban; Markiewicz, Mario; Marino, Armando

    2000-01-01

    The thermo-mechanical behavior of a fuel rod subjected to irradiation is a complex process in which a great quantity of interrelated physical-chemical phenomena are coupled. The BACO code simulates the thermo-mechanical behavior and the evolution of fission gases of a cylindrical rod in operation. The power history of the fuel rods, arising from neutronic calculations, is the program input. The code calculates, among others, the temperature distribution and the principal stresses in the pellet and cladding, changes in the porosity and restructuring of the pellet, the fission gas release, and the evolution of the internal gas pressure. In this work some of the design limits of CAREM-25's fuel rods are analyzed by means of the computer code BACO. The main variables directly related to the integrity of the fuel rod are: maximum pellet temperature; cladding hoop stress; gas pressure in the fuel rod; cladding axial and radial strains; etc. The analysis of the results indicates that, under normal operating conditions, the maximum fuel pellet temperature, cladding stresses, gas pressure at end of life, etc., are below the design limits considered for the fuel rod of the CAREM-25 reactor.

  10. Structural reliability codes for probabilistic design

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    1997-01-01

    probabilistic code format has not only strong influence on the formal reliability measure, but also on the formal cost of failure to be associated if a design made to the target reliability level is considered to be optimal. In fact, the formal cost of failure can be different by several orders of size for two different, but by and large equally justifiable, probabilistic code formats. Thus, the consequence is that a code format based on decision theoretical concepts and formulated as an extension of a probabilistic code format must specify formal values to be used as costs of failure. A principle of prudence is suggested for guiding the choice of the reference probabilistic code format for constant reliability. In the author's opinion there is an urgent need for establishing a standard probabilistic reliability code. This paper presents some considerations that may be debatable, but nevertheless point ...

  11. Portable LQCD Monte Carlo code using OpenACC

    Science.gov (United States)

    Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele

    2018-03-01

    Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.

  12. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  13. Calculation code MIXSET for Purex process

    International Nuclear Information System (INIS)

    Gonda, Kozo; Fukuda, Shoji.

    1977-09-01

    MIXSET is a FORTRAN IV calculation code for the Purex process that simulates the dynamic behavior of solvent extraction processes in mixer-settlers. Two options permit terminating the dynamic phase by time or by achieving steady state. These options also permit continuing a calculation successively using new inputs from an arbitrary phase. A third option permits an artificially rapid approach to steady state, and a fourth option permits searching for the optimum input that satisfies both the product specification and the product recovery rate. MIXSET handles a chemical system of up to eight components, with or without mutual dependence of the distribution of the components. The chemical system in MIXSET includes chemical reactions and/or decay reactions. Distribution data can be supplied by third-power polynomial equations or tables, and kinetic data by tables or given constants. The fluctuation of the interfacial level height in the settler is converted into flow rate changes of the organic and aqueous streams to follow the dynamic behavior of the extraction process in detail. MIXSET can be applied to flowsheet studies, start-up and/or shut-down procedure studies, and real-time process management in countercurrent solvent extraction processes. (auth.)

  14. Design of sampling tools for Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Zhang Baoyin; Deng Li

    2012-01-01

    A class of sampling tools for the general Monte Carlo particle transport code JMCT is designed. Two ways are provided to sample from distributions. One is the use of special sampling methods for special distributions; the other is the use of general sampling methods for arbitrary discrete distributions and one-dimensional continuous distributions on a finite interval. Some open-source codes are included in the general sampling method for the maximum convenience of users. The sampling results show that distributions popular in particle transport can be sampled correctly with these tools and that the user's convenience is assured. (authors)
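
    A minimal sketch of the two kinds of general tools described above, assuming simple tabulated inputs (the actual JMCT interfaces are not public, so all names are illustrative): inverse-CDF sampling for an arbitrary discrete distribution, and bin-based sampling for a tabulated one-dimensional continuous distribution.

```python
import bisect, random

def sample_discrete(weights, u=None):
    """Inverse-CDF sampling from an arbitrary (unnormalized) discrete
    distribution: build the cumulative sums, then locate a uniform
    draw with binary search."""
    u = random.random() if u is None else u
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc)
    return bisect.bisect_left(cdf, u * acc)   # normalizes automatically

def sample_tabulated_pdf(xs, pdf):
    """Sample a 1-D continuous distribution tabulated on [xs[0], xs[-1]]:
    pick a bin by its (trapezoidal) mass, then a uniform point inside
    it -- a histogram approximation of the tabulated pdf."""
    masses = [(pdf[i] + pdf[i + 1]) / 2 * (xs[i + 1] - xs[i])
              for i in range(len(xs) - 1)]
    i = sample_discrete(masses)
    return random.uniform(xs[i], xs[i + 1])
```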

  15. Two dimension MDW OCDMA code cross-correlation for reduction of phase induced intensity noise

    Directory of Open Access Journals (Sweden)

    Sh. Ahmed Israa

    2017-01-01

    Full Text Available In this paper, we first review the 2-D MDW code cross-correlation equations and table, which are improved significantly by using code correlation properties. These codes can be used in synchronous optical CDMA systems for multiple-access interference cancellation and maximal suppression of the phase-induced intensity noise (PIIN). The low Psr is due to the reduction of the interference noise induced by the 2-D MDW code PIIN suppression. A high data rate increases the BER, requires high effective power, and severely deteriorates system performance. The 2-D W/T MDW code has excellent system performance, with PIIN suppressed as low as possible at the optimum Psr with a high data bit rate. The 2-D MDW code shows better tolerance to PIIN in comparison to others, with enhanced system performance. We prove by numerical analysis that PIIN is maximally suppressed by the MDW code through the minimizing property of cross-correlation, in comparison to 2-D PDC and 2-D MQC OCDMA code schemes.

  17. The Genomic Code: Genome Evolution and Potential Applications

    KAUST Repository

    Bernardi, Giorgio

    2016-01-25

    The genome of metazoans is organized according to a genomic code which comprises three laws: 1) Compositional correlations hold between contiguous coding and non-coding sequences, as well as among the three codon positions of protein-coding genes; these correlations are the consequence of the fact that the genomes under consideration consist of fairly homogeneous, long (≥200Kb) sequences, the isochores; 2) Although isochores are defined on the basis of purely compositional properties, GC levels of isochores are correlated with all tested structural and functional properties of the genome; 3) GC levels of isochores are correlated with chromosome architecture from interphase to metaphase; in the case of interphase the correlation concerns isochores and the three-dimensional “topologically associated domains” (TADs); in the case of mitotic chromosomes, the correlation concerns isochores and chromosomal bands. Finally, the genomic code is the fourth and last pillar of molecular biology, the first three pillars being 1) the double helix structure of DNA; 2) the regulation of gene expression in prokaryotes; and 3) the genetic code.

  18. Input data required for specific performance assessment codes

    International Nuclear Information System (INIS)

    Seitz, R.R.; Garcia, R.S.; Starmer, R.J.; Dicke, C.A.; Leonard, P.R.; Maheras, S.J.; Rood, A.S.; Smith, R.W.

    1992-02-01

    The Department of Energy's National Low-Level Waste Management Program at the Idaho National Engineering Laboratory generated this report on the input data requirements of computer codes to assist States and compacts in their performance assessments. This report gives generators, developers, operators, and users some guidelines on the input data required to run 22 common performance assessment codes. Each of the codes is summarized, and a matrix table is provided to allow comparison of the various inputs required by the codes. This report does not determine or recommend which codes are preferable

  19. Application of the MELCOR code to design basis PWR large dry containment analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, Jesse; Notafrancesco, Allen (USNRC, Office of Nuclear Regulatory Research, Rockville, MD); Tills, Jack Lee (Jack Tills & Associates, Inc., Sandia Park, NM)

    2009-05-01

    The MELCOR computer code has been developed by Sandia National Laboratories under USNRC sponsorship to provide the capability for independently auditing analyses submitted by reactor manufacturers and utilities. MELCOR is a fully integrated code (encompassing the reactor coolant system and the containment building) that models the progression of postulated accidents in light water reactor power plants. To assess the adequacy of the containment thermal-hydraulic modeling incorporated in the MELCOR code for application to PWR large dry containments, several selected demonstration designs were analyzed. This report documents MELCOR code demonstration calculations performed for postulated design basis accident (DBA) analysis (LOCA and MSLB) inside containment, which are compared to other code results. The key processes when analyzing the containment loads inside PWR large dry containments are (1) expansion and transport of high mass/energy releases, (2) heat and mass transfer to structural passive heat sinks, and (3) containment pressure reduction due to engineered safety features. A code-to-code benchmarking for DBA events showed that MELCOR predictions of maximum containment loads were equivalent to similar predictions using a qualified containment code known as CONTAIN. This equivalency was found to apply for both single- and multi-cell containment models.

  20. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    Science.gov (United States)

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street Press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
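
    L-LDA is not available in standard libraries, so the sketch below reproduces only the baseline side of the comparison: an L1-penalized ("lasso") logistic regression on bag-of-words features, scored by AUC. It assumes `texts` holds session transcripts and `labels` binary session-level codes; both names are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def lasso_baseline_auc(texts, labels):
    """Lasso-style baseline for session-level code prediction: L1
    logistic regression on bag-of-words counts, evaluated by AUC on a
    held-out split.  Sketch only -- the paper's L-LDA model and its
    exact preprocessing are not reproduced here."""
    X = CountVectorizer(min_df=2).fit_transform(texts)
    Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    clf.fit(Xtr, ytr)
    return roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
```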

  1. The Effect of Target Language and Code-Switching on the Grammatical Performance and Perceptions of Elementary-Level College French Students

    Science.gov (United States)

    Viakinnou-Brinson, Lucie; Herron, Carol; Cole, Steven P.; Haight, Carrie

    2012-01-01

    Grammar instruction is at the center of the target language (TL) and code-switching debate. Discussion revolves around whether grammar should be taught in the TL or using the TL and the native language (L1). This study investigated the effects of French-only grammar instruction and French/English grammar instruction on elementary-level students'…

  2. Shallow-land burial of low-level radioactive wastes: preliminary simulations of long-term health risks

    International Nuclear Information System (INIS)

    Fields, D.E.; Little, C.A.; Emerson, C.J.; Hiromoto, G.

    1982-01-01

    PRESTO, a computer code developed for the Environmental Protection Agency for the evaluation of possible health effects associated with shallow-land rad-waste burial areas, has been used to perform simulations for three such sites. Preliminary results for the 1000 y period following site closure suggest that shallow burial, at properly chosen sites, is indeed an appropriate disposal practice for low-level wastes. Periods of maximum risk to subject populations are also inferred

  3. S values at voxels level for 188Re and 90Y calculated with the MCNP-4C code

    International Nuclear Information System (INIS)

    Coca, M.A.; Torres, L.A.; Cornejo, N.; Martin, G.

    2008-01-01

    Full text: The MIRD formalism at the voxel level has been suggested as an optional methodology for internal radiation dosimetry calculations during internal radiation therapy in Nuclear Medicine. Voxel S values for 90Y, 131I, 32P, 99mTc and 89Sr have been published for different voxel sizes. Currently, 188Re has been proposed as a promising radionuclide for therapy due to its physical features and availability from generators. The main objective of this work was to estimate the voxel S values for 188Re in cubical geometry, using the MCNP-4C code for the simulation of radiation transport and energy deposition. The mean absorbed dose to target voxels per radioactive decay in a source voxel was estimated and reported for 188Re and 90Y. A comparison of the voxel S values computed with the MCNP code and the data reported in MIRD Pamphlet 17 for 90Y was performed in order to evaluate our results. (author)

  4. Evaluation Codes from an Affine Veriety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes.

  5. Research on the network maximum-flow algorithm of the cascade level graph

    Institute of Scientific and Technical Information of China (English)

    潘荷新; 伊崇信; 李满

    2011-01-01

    This paper gives an algorithm that finds the maximum flow of a network indirectly, by constructing a cascade level graph of the network. For a given network N=(G,s,t,C) with n vertices and e edges, the algorithm quickly finds the maximum network flow, and a flow attaining it, in O(n²) time.
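
    The description matches the classical level-graph (layered-network) approach to maximum flow; the sketch below implements that idea in its best-known form, Dinic's algorithm (BFS builds the level graph, DFS pushes a blocking flow, repeat). It is shown for illustration and is not the paper's exact O(n²) variant.

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Maximum flow via repeated level (layered) graphs: Dinic's scheme."""
    graph = [[] for _ in range(n)]
    def add(u, v, cap):
        graph[u].append([v, cap, len(graph[v])])      # forward edge
        graph[v].append([u, 0, len(graph[u]) - 1])    # residual edge
    for u, v, c in edges:
        add(u, v, c)

    def bfs():                                        # build level graph
        level = [-1] * n
        level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in graph[u]:
                if cap > 0 and level[v] < 0:
                    level[v] = level[u] + 1
                    q.append(v)
        return level if level[t] >= 0 else None

    def dfs(u, f, level, it):                         # push blocking flow
        if u == t:
            return f
        while it[u] < len(graph[u]):
            v, cap, rev = graph[u][it[u]]
            if cap > 0 and level[v] == level[u] + 1:
                d = dfs(v, min(f, cap), level, it)
                if d > 0:
                    graph[u][it[u]][1] -= d
                    graph[v][rev][1] += d
                    return d
            it[u] += 1
        return 0

    flow = 0
    while (level := bfs()) is not None:
        it = [0] * n
        while (f := dfs(s, float("inf"), level, it)) > 0:
            flow += f
    return flow

# demo: classic 4-node network, maximum flow 5
print(max_flow(4, [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)], 0, 3))
```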

  6. Survey of Codes Employing Nuclear Damage Assessment

    Science.gov (United States)

    1977-10-01

    The surveyed codes were compared ... (battalion level and above); TALLEY/TOTEM (not nuclear); TARTARUS (too highly aggregated, battalion level and above); UNICORN (highly aggregated force allocation code) ... Vulnerability data can be input by the user as he receives them, and there is the ability to replay any situation using hindsight. The age of target ...

  7. SERPENT Monte Carlo reactor physics code

    International Nuclear Information System (INIS)

    Leppaenen, J.

    2010-01-01

    SERPENT is a three-dimensional continuous-energy Monte Carlo reactor physics burnup calculation code, developed at VTT Technical Research Centre of Finland since 2004. The code is specialized in lattice physics applications, but the universe-based geometry description allows transport simulation to be carried out in complicated three-dimensional geometries as well. The suggested applications of SERPENT include generation of homogenized multi-group constants for deterministic reactor simulator calculations, fuel cycle studies involving detailed assembly-level burnup calculations, validation of deterministic lattice transport codes, research reactor applications, educational purposes and demonstration of reactor physics phenomena. The Serpent code has been publicly distributed by the OECD/NEA Data Bank since May 2009 and RSICC in the U. S. since March 2010. The code is being used in some 35 organizations in 20 countries around the world. This paper presents an overview of the methods and capabilities of the Serpent code, with examples in the modelling of WWER-440 reactor physics. (Author)

  8. Ethical codes. Fig leaf argument, ballast or cultural element for radiation protection?; Ethik-Codes. Feigenblatt, Ballast oder Kulturelement fuer den Strahlenschutz?

    Energy Technology Data Exchange (ETDEWEB)

    Gellermann, Rainer [Nuclear Control and Consulting GmbH, Braunschweig (Germany)

    2014-07-01

    The International Radiation Protection Association (IRPA) adopted a Code of Ethics in May 2004 so that its members can maintain an adequate professional and ethical standard of conduct. Based on this code of ethics, the German professional body for radiation protection (Fachverband fuer Strahlenschutz) developed its own ethical code and adopted it in 2005.

  9. Maximum Permissible Risk Levels for Human Intake of Soil Contaminants: Fourth Series of Compounds

    NARCIS (Netherlands)

    Janssen PJCM; Apeldoorn ME van; Engelen JGM van; Schielen PCJI; Wouters MFA; CSR

    1998-01-01

    This report documents the human-toxicological risk assessment work done in 1996 and 1997 at RIVM's Centre for Substances and Risk Assessment within the scope of the RIVM project on soil intervention values for soil clean-up. The method used for derivation of the Maximum Permissible Risk, as

  10. Error-Rate Bounds for Coded PPM on a Poisson Channel

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
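
    For contrast with the analytical bounds, the sketch below Monte Carlo-estimates the symbol error rate of uncoded M-ary PPM on a Poisson channel, with arbitrary mean signal and background counts. It also shows why bounds matter: simulating the coded (APPM) system at the low error rates of interest would be prohibitively slow.

```python
import numpy as np

def ppm_poisson_ser(M=16, ns=5.0, nb=0.2, trials=200_000, seed=0):
    """Monte Carlo symbol-error rate of uncoded M-ary PPM on a
    memoryless Poisson channel: the signal slot has mean ns + nb
    photons, the other M-1 slots mean nb; the ML receiver picks the
    slot with the largest count (ties broken at random).  Parameter
    values are illustrative, and the coded case is not simulated."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(nb, size=(trials, M))
    counts[:, 0] += rng.poisson(ns, size=trials)   # symbol 0 sent
    noisy = counts + rng.random((trials, M))       # sub-unit jitter = random ties
    errors = np.argmax(noisy, axis=1) != 0
    return errors.mean()

print(ppm_poisson_ser())
```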

  12. Calibration Methods for Reliability-Based Design Codes

    DEFF Research Database (Denmark)

    Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard

    2004-01-01

    The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...

  13. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of a network coding approach focused on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can be easily utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  14. One-way quantum repeaters with quantum Reed-Solomon codes

    Science.gov (United States)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang

    2018-05-01

    We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity on the quantum erasure channel of d-level systems for large dimension d. We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.

  15. Variable weight Khazani-Syed code using hybrid fixed-dynamic technique for optical code division multiple access system

    Science.gov (United States)

    Anas, Siti Barirah Ahmad; Seyedzadeh, Saleh; Mokhtar, Makhfudzah; Sahbudin, Ratna Kalos Zakiah

    2016-10-01

    Future Internet consists of a wide spectrum of applications with different bit rates and quality of service (QoS) requirements. Prioritizing the services is essential to ensure that the delivery of information is at its best. Existing technologies have demonstrated how service differentiation techniques can be implemented in optical networks using data link and network layer operations. However, a physical layer approach can further improve system performance at a prescribed received signal quality by applying control at the bit level. This paper proposes a coding algorithm to support optical domain service differentiation using spectral amplitude coding techniques within an optical code division multiple access (OCDMA) scenario. A particular user or service has a varying weight applied to obtain the desired signal quality. The properties of the new code are compared with other OCDMA codes proposed for service differentiation. In addition, a mathematical model is developed for performance evaluation of the proposed code using two different detection techniques, namely direct decoding and complementary subtraction.

  16. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    International Nuclear Information System (INIS)

    Orii, Shigeo

    1998-06-01

    A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of a computer running a code. The Level 2 benchmark proposed in this report explains the reasons for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification for a molecular dynamics code. The results show that the main causes limiting parallel performance are the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
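
    A Level-2-style model of the communication cost can be written down directly from the reported scaling (start-up time proportional to both processor and particle counts, plus a bandwidth term). All constants below are placeholders, not the SP2 measurements.

```python
def comm_time(n_proc, n_particles, t_start=1e-5, bandwidth=40e6,
              bytes_per_particle=48):
    """Per-step communication cost under the reported scaling: a
    start-up term growing with both processor and particle counts,
    plus a transfer term limited by the maximum bandwidth.  Constants
    are illustrative placeholders."""
    startup = t_start * n_proc * n_particles          # reported scaling
    transfer = n_particles * bytes_per_particle / bandwidth
    return startup + transfer

# start-up quickly dominates as processors and particles grow
print(comm_time(8, 10_000), comm_time(64, 10_000))
```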

  17. A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure.

    Science.gov (United States)

    Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L

    2018-01-01

    We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.

  18. Network Coding Protocols for Data Gathering Applications

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    Tunable sparse network coding (TSNC) with various sparsity levels of the coded packets and different feedback mechanisms is analysed in the context of data gathering applications in multi-hop networks. The goal is to minimize the completion time, i.e., the total time required to collect all data ...

  19. Reliability of cause of death coding: an international comparison.

    Science.gov (United States)

    Antini, Carmen; Rajs, Danuta; Muñoz-Quezada, María Teresa; Mondaca, Boris Andrés Lucero; Heiss, Gerardo

    2015-07-01

    This study evaluates the agreement of nosologic coding of cardiovascular causes of death between a Chilean coder and one in the United States, in a stratified random sample of death certificates of persons aged ≥ 60, issued in 2008 in the Valparaíso and Metropolitan regions, Chile. All causes of death were converted to ICD-10 codes in parallel by both coders. Concordance was analyzed with inter-coder agreement and Cohen's kappa coefficient, by level of specification of the ICD-10 code, for the underlying cause and for the total causes of death coding. Inter-coder agreement was 76.4% for all causes of death and 80.6% for the underlying cause (agreement at the four-digit level), with differences by level of specification of the ICD-10 code, by line of the death certificate, and by number of causes of death per certificate. Cohen's kappa coefficient was 0.76 (95%CI: 0.68-0.84) for the underlying cause and 0.75 (95%CI: 0.74-0.77) for the total causes of death. In conclusion, cause of death coding and inter-coder agreement for cardiovascular diseases in two regions of Chile are comparable to an external benchmark and to reports from other countries.
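
    For reference, Cohen's kappa is the observed agreement corrected for the agreement expected by chance from each coder's marginal code frequencies; a minimal computation on toy ICD-10 assignments:

```python
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' ICD-10 assignments: observed
    agreement po, corrected for the chance agreement pe implied by
    each coder's marginal code frequencies."""
    n = len(codes_a)
    po = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    ca, cb = Counter(codes_a), Counter(codes_b)
    pe = sum(ca[k] * cb[k] for k in ca) / n ** 2
    return (po - pe) / (1 - pe)

# toy example with four-character ICD-10 codes (not the study's data)
a = ["I219", "I251", "I639", "I219", "I500"]
b = ["I219", "I251", "I639", "I251", "I500"]
print(round(cohen_kappa(a, b), 2))   # 0.8 observed agreement, kappa ~0.74
```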

  20. Temperature distributions in a salt formation used for the ultimate disposal of high-level radioactive waste

    International Nuclear Information System (INIS)

    Ploumen, P.

    1980-01-01

    In the Federal Republic of Germany, work on waste disposal is focused on the use of a salt formation for the ultimate disposal of radioactive wastes. Heat released from the high-level waste will be dissipated in the salt and the surrounding geologic formations. The resulting temperature distributions are calculated with computer codes. A survey of the developed computer codes is given; the results for a selected example, taking into account the loading sequence of the waste, the mine ventilation, as well as an air gap between the waste and the salt, are discussed. Furthermore, it is shown that by varying the disposal parameters, the maximum salt temperature can be kept below any prescribed value. (Auth.)

  1. Selection of a computer code for Hanford low-level waste engineered-system performance assessment. Revision 1

    International Nuclear Information System (INIS)

    McGrail, B.P.; Bacon, D.H.

    1998-02-01

    Planned performance assessments for the proposed disposal of low-activity waste (LAW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. The available computer codes with suitable capabilities at the time Revision 0 of this document was prepared were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LAW glass corrosion and the mobility of radionuclides. This analysis was repeated in this report but updated to include additional processes that have been found to be important since Revision 0 was issued and to include additional codes that have been released. The highest ranked computer code was found to be the STORM code, developed at PNNL for the US Department of Energy for evaluation of arid land disposal sites.

  2. System Performance of Concatenated STBC and Block Turbo Codes in Dispersive Fading Channels

    Directory of Open Access Journals (Sweden)

    Kam Tai Chan

    2005-05-01

    A new scheme of concatenating the block turbo code (BTC) with the space-time block code (STBC) for an OFDM system in dispersive fading channels is investigated in this paper. The good error correcting capability of BTC and the large diversity gain characteristics of STBC can be achieved simultaneously. The resulting receiver outperforms the iterative convolutional Turbo receiver with the maximum a posteriori probability expectation-maximization (MAP-EM) algorithm. Because of its ability to perform the encoding and decoding processes in parallel, the proposed system is easy to implement in real time.

  3. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations with the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...

  4. LDPC coding for QKD at higher photon flux levels based on spatial entanglement of twin beams in PDC

    International Nuclear Information System (INIS)

    Daneshgaran, Fred; Mondin, Marina; Bari, Inam

    2014-01-01

    Twin beams generated by Parametric Down Conversion (PDC) exhibit quantum correlations that have been effectively used as a tool for many applications, including the calibration of single photon detectors. By now, detection of multi-mode spatial correlations is a mature field and, in principle, only depends on the transmission and detection efficiency of the devices and the channel. In [2, 4, 5], the authors utilized their know-how on almost perfect selection of modes of pairwise correlated entangled beams and the optimization of the noise reduction to below the shot-noise level for absolute calibration of Charge Coupled Device (CCD) cameras. The same basic principle is currently being considered by the same authors for possible use in Quantum Key Distribution (QKD) [3, 1]. The main advantage of such an approach would be the ability to work with much higher photon fluxes than the single photon regime that is theoretically required for discrete variable QKD applications (in practice, very weak laser pulses with a mean photon count below one are used). The natural setup of quantization of the CCD detection area and subsequent measurement of the correlation statistic needed to detect the presence of the eavesdropper Eve leads to a QKD channel model that is a Discrete Memoryless Channel (DMC) with a number of inputs and outputs that can be more than two (i.e., the channel is a multi-level DMC). This paper investigates the use of Low Density Parity Check (LDPC) codes for information reconciliation on the effective parallel channels associated with the multi-level DMC. The performance of such codes is shown to be close to the theoretical limits.

  5. The Relationship Between Maximum Isometric Strength and Ball Velocity in the Tennis Serve.

    Science.gov (United States)

    Baiget, Ernest; Corbi, Francisco; Fuentes, Juan Pedro; Fernández-Fernández, Jaime

    2016-12-01

    The aims of this study were to analyze the relationship between maximum isometric strength levels in different upper and lower limb joints and serve velocity in competitive tennis players, as well as to develop a prediction model based on this information. Twelve male competitive tennis players (mean ± SD; age: 17.2 ± 1.0 years; body height: 180.1 ± 6.2 cm; body mass: 71.9 ± 5.6 kg) were tested for maximum isometric strength levels (i.e., wrist, elbow and shoulder flexion and extension; leg and back extension; shoulder external and internal rotation). Serve velocity was measured using a radar gun. Results showed a strong positive relationship between serve velocity and shoulder internal rotation (r = 0.67; p < 0.05). The maximum isometric strength level in shoulder internal rotation was strongly related to serve velocity, and a large part of the variability in serve velocity was explained by the maximum isometric strength levels in shoulder internal rotation and shoulder flexion.

  6. Modelling of WWER fuel rod during LOCA conditions using FEM code ANSYS

    International Nuclear Information System (INIS)

    Bogatyr, S. M.; Krupkin, A. V.; Kuznetsov, V. I.; Novikov, V. V.; Petrov, O. M.; Shestopalov, A. A.

    2013-01-01

    The report presents the results of the computer simulation of the IFA-650.6 experiment, the sixth test in the Halden LOCA test project series, performed on May 18, 2007 with pre-irradiated WWER-440 fuel with a maximum burnup of 56 MWd/kgU. The thermo-mechanical analysis was performed with the licensed finite-element code package ANSYS. The calculation was carried out with 2D axisymmetric and 3D problem definitions. Analysis of the calculated results shows that the ANSYS code can adequately simulate the thermo-mechanical behavior of cladding under IFA-650.6 test conditions. (authors)

  7. Promoter Analysis Reveals Globally Differential Regulation of Human Long Non-Coding RNA and Protein-Coding Genes

    KAUST Repository

    Alam, Tanvir

    2014-10-02

    Transcriptional regulation of protein-coding genes is increasingly well-understood on a global scale, yet no comparable information exists for long non-coding RNA (lncRNA) genes, which were recently recognized to be as numerous as protein-coding genes in mammalian genomes. We performed a genome-wide comparative analysis of the promoters of human lncRNA and protein-coding genes, finding global differences in specific genetic and epigenetic features relevant to transcriptional regulation. These two groups of genes are hence subject to separate transcriptional regulatory programs, including distinct transcription factor (TF) proteins that significantly favor lncRNA, rather than coding-gene, promoters. We report a specific signature of promoter-proximal transcriptional regulation of lncRNA genes, including several distinct transcription factor binding sites (TFBS). Experimental DNase I hypersensitive site profiles are consistent with active configurations of these lncRNA TFBS sets in diverse human cell types. TFBS ChIP-seq datasets confirm the binding events that we predicted using computational approaches for a subset of factors. For several TFs known to be directly regulated by lncRNAs, we find that their putative TFBSs are enriched at lncRNA promoters, suggesting that the TFs and the lncRNAs may participate in a bidirectional feedback loop regulatory network. Accordingly, cells may be able to modulate lncRNA expression levels independently of mRNA levels via distinct regulatory pathways. Our results also raise the possibility that, given the historical reliance on protein-coding gene catalogs to define the chromatin states of active promoters, a revision of these chromatin signature profiles to incorporate expressed lncRNA genes is warranted in the future.

  8. Psacoin level S intercomparison: An International code intercomparison exercise on a hypothetical safety assessment case study for radioactive waste disposal systems

    International Nuclear Information System (INIS)

    1993-06-01

    This report documents the Level S exercise of the Probabilistic System Assessment Group (PSAG). Level S is the fifth in a series of Probabilistic Code Intercomparison (PSACOIN) exercises designed to contribute to the verification of probabilistic codes and methodologies that may be used in assessing the safety of radioactive waste disposal systems and concepts. The focus of the Level S exercise lies on sensitivity analysis. Given a common data set of model output and input values, the participants were asked to identify both the underlying model's most important parameters (deterministic sensitivity analysis) and the link between the distributions of the input and output values (distribution sensitivity analysis). Agreement was generally found where it was expected, and the exercise has achieved its objectives in acting as a focus for testing and discussing sensitivity analysis issues. Among the outstanding issues that have been identified are: (i) that techniques for distribution sensitivity analysis are needed that avoid the problem of statistical noise; (ii) that further investigations are warranted on the most appropriate way of handling large numbers of effectively zero results generated by Monte Carlo sampling; and (iii) that methods need to be developed for demonstrating that the results of sensitivity analysis are indeed correct.

  9. Technical development for geological disposal of high-level radioactive wastes

    International Nuclear Information System (INIS)

    Asano, Hidekazu; Sugino, Hiroyuki; Kawakami, Susumu; Yamanaka, Yumiko

    1997-01-01

    In technical developments for the geological disposal of high-level radioactive wastes, materials research and design techniques for engineered barriers (overpack and buffer material) were studied to establish more reliable disposal systems for high-level radioactive wastes. A lifetime prediction model for the maximum corrosion depth of carbon steel was developed. An evaluation method for selecting alloys resistant to crevice corrosion was established for titanium. The swelling pressure and water permeability of bentonite as a buffer material were measured, and a coupled hydro-thermo-mechanical analysis code for bentonite was also studied. The CIP (cold isostatic pressing) method for monolithically formed buffer material was tested. A concept study on operation equipment for the disposal site was performed. Activities of microorganisms involved in underground performance were investigated. (author)

  10. Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion

    Science.gov (United States)

    Poljak, Nikola

    2016-01-01

    The problem of determining the angle θ at which a point mass launched from ground level with a given speed v₀ will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of θ = π/4, producing a maximum range of D_max = v₀²/g.
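
    As a quick numerical illustration of the classic ground-level result (a sketch only; the speed, gravity constant, and grid below are arbitrary choices, not values from the paper), the range D(θ) = v₀² sin(2θ)/g can be maximized by brute force:

      import numpy as np

      g, v0 = 9.81, 20.0                                  # m/s^2, m/s (arbitrary)
      theta = np.linspace(0.01, np.pi / 2 - 0.01, 10000)
      D = v0**2 * np.sin(2 * theta) / g                   # ground-level range formula
      best = theta[np.argmax(D)]
      print(f"optimal angle = {np.degrees(best):.2f} deg")        # ~45 deg
      print(f"D_max = {D.max():.2f} m vs v0^2/g = {v0**2 / g:.2f} m")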

  11. Validity of vascular trauma codes at major trauma centres.

    Science.gov (United States)

    Altoijry, Abdulmajeed; Al-Omran, Mohammed; Lindsay, Thomas F; Johnston, K Wayne; Melo, Magda; Mamdani, Muhammad

    2013-12-01

    The use of administrative databases in vascular injury research has been increasing, but the validity of the diagnosis codes used in this research is uncertain. We assessed the positive predictive value (PPV) of International Classification of Diseases, tenth revision (ICD-10), vascular injury codes in administrative claims data in Ontario. We conducted a retrospective validation study using the Canadian Institute for Health Information Discharge Abstract Database, an administrative database that records all hospital admissions in Canada. We evaluated 380 randomly selected hospital discharge abstracts from the 2 main trauma centres in Toronto, Ont., St. Michael's Hospital and Sunnybrook Health Sciences Centre, between Apr. 1, 2002, and Mar. 31, 2010. We then compared these records with the corresponding patients' hospital charts to assess the level of agreement for procedure coding. We calculated the PPV and sensitivity to estimate the validity of vascular injury diagnosis coding. The overall PPV for vascular injury coding was estimated to be 95% (95% confidence interval [CI] 92.3-96.8). The PPV among code groups for neck, thorax, abdomen, upper extremity and lower extremity injuries ranged from 90.8% (95% CI 82.2-95.5) to 97.4% (95% CI 91.0-99.3), whereas sensitivity ranged from 90% (95% CI 81.5-94.8) to 98.7% (95% CI 92.9-99.8). Administrative claims hospital discharge data based on ICD-10 diagnosis codes have a high level of validity when identifying cases of vascular injury. Observational study, level III.
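
    The validity measures reported above reduce to simple ratios of chart-review counts. A minimal sketch follows; the counts are hypothetical, chosen only to be consistent with the reported overall 95% PPV, and the confidence interval is a plain normal approximation rather than the study's exact method.

      import math

      tp, fp, fn = 361, 19, 19          # hypothetical validation counts
      ppv = tp / (tp + fp)              # positive predictive value
      sens = tp / (tp + fn)             # sensitivity

      def wald_ci(p, n, z=1.96):        # normal-approximation 95% CI
          half = z * math.sqrt(p * (1 - p) / n)
          return round(p - half, 3), round(min(1.0, p + half), 3)

      print(f"PPV = {ppv:.1%}, 95% CI {wald_ci(ppv, tp + fp)}")
      print(f"sensitivity = {sens:.1%}, 95% CI {wald_ci(sens, tp + fn)}")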

  12. Hybrid coded aperture and Compton imaging using an active mask

    International Nuclear Information System (INIS)

    Schultz, L.J.; Wallace, M.S.; Galassi, M.C.; Hoover, A.S.; Mocko, M.; Palmer, D.M.; Tornga, S.R.; Kippen, R.M.; Hynes, M.V.; Toolin, M.J.; Harris, B.; McElroy, J.E.; Wakeford, D.; Lanza, R.C.; Horn, B.K.P.; Wehe, D.K.

    2009-01-01

    The trimodal imager (TMI) images gamma-ray sources from a mobile platform using both coded aperture (CA) and Compton imaging (CI) modalities. In this paper we will discuss development and performance of image reconstruction algorithms for the TMI. In order to develop algorithms in parallel with detector hardware we are using a GEANT4 [J. Allison, K. Amako, J. Apostolakis, H. Araujo, P.A. Dubois, M. Asai, G. Barrand, R. Capra, S. Chauvie, R. Chytracek, G. Cirrone, G. Cooperman, G. Cosmo, G. Cuttone, G. Daquino, et al., IEEE Trans. Nucl. Sci. NS-53 (1) (2006) 270] based simulation package to produce realistic data sets for code development. The simulation code incorporates detailed detector modeling, contributions from natural background radiation, and validation of simulation results against measured data. Maximum likelihood algorithms for both imaging methods are discussed, as well as a hybrid imaging algorithm wherein CA and CI information is fused to generate a higher fidelity reconstruction.

  13. Maximum power point tracking for photovoltaic applications by using two-level DC/DC boost converter

    Science.gov (United States)

    Moamaei, Parvin

    Recently, photovoltaic (PV) generation has become increasingly popular in industrial applications. As a renewable and alternative source of energy, PV systems feature superior characteristics such as being clean and silent, along with fewer maintenance problems compared to other energy sources. In PV generation, employing a Maximum Power Point Tracking (MPPT) method is essential to obtain the maximum available solar energy. Among several proposed MPPT techniques, the Perturbation and Observation (P&O) and Model Predictive Control (MPC) methods are adopted in this work. The components of the MPPT control system, namely the P&O and MPC algorithms, the PV module, and the high-gain DC-DC boost converter, are simulated in MATLAB Simulink. They are evaluated theoretically under rapidly and slowly changing solar irradiation and temperature, and their performance is shown by the simulation results; finally, a comprehensive comparison is presented.
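
    A minimal Python sketch of the perturb-and-observe logic (the work itself uses MATLAB Simulink; pv_power, the duty-cycle variable, and the step size below are hypothetical placeholders, not the thesis implementation):

      def perturb_and_observe(pv_power, duty=0.5, step=0.005, iterations=200):
          """Climb the power curve by perturbing the converter duty cycle."""
          p_prev = pv_power(duty)
          direction = 1
          for _ in range(iterations):
              duty += direction * step      # perturb the operating point
              p = pv_power(duty)
              if p < p_prev:                # power fell: reverse direction
                  direction = -direction
              p_prev = p                    # observe, then iterate
          return duty

      # Toy power curve with a single maximum near duty = 0.62:
      print(perturb_and_observe(lambda d: 1.0 - (d - 0.62) ** 2))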

  14. Incorporating Code-Based Software in an Introductory Statistics Course

    Science.gov (United States)

    Doehler, Kirsten; Taylor, Laura

    2015-01-01

    This article is based on the experiences of two statistics professors who have taught students to write and effectively utilize code-based software in a college-level introductory statistics course. Advantages of using software and code-based software in this context are discussed. Suggestions are made on how to ease students into using code with…

  15. Qualifying codes under software quality assurance: Two examples as guidelines for codes that are existing or under development

    Energy Technology Data Exchange (ETDEWEB)

    Mangold, D.

    1993-05-01

    Software quality assurance is an area of concern for DOE, EPA, and other agencies due to the poor quality of software and its documentation they have received in the past. This report briefly summarizes the software development concepts and terminology increasingly employed by these agencies and provides a workable approach to scientific programming under the new requirements. Following this is a practical description of how to qualify a simulation code, based on a software QA plan that has been reviewed and officially accepted by DOE/OCRWM. Two codes have recently been baselined and qualified, so that they can be officially used for QA Level 1 work under the DOE/OCRWM QA requirements. One of them was baselined and qualified within one week. The first of the codes was the multi-phase multi-component flow code TOUGH version 1, an already existing code, and the other was a geochemistry transport code, STATEQ, that was under development. The way to accomplish qualification for both types of codes is summarized in an easy-to-follow, step-by-step fashion to illustrate how to baseline and qualify such codes through a relatively painless procedure.

  16. Qualifying codes under software quality assurance: Two examples as guidelines for codes that are existing or under development

    International Nuclear Information System (INIS)

    Mangold, D.

    1993-05-01

    Software quality assurance is an area of concern for DOE, EPA, and other agencies due to the poor quality of software and its documentation they have received in the past. This report briefly summarizes the software development concepts and terminology increasingly employed by these agencies and provides a workable approach to scientific programming under the new requirements. Following this is a practical description of how to qualify a simulation code, based on a software QA plan that has been reviewed and officially accepted by DOE/OCRWM. Two codes have recently been baselined and qualified, so that they can be officially used for QA Level 1 work under the DOE/OCRWM QA requirements. One of them was baselined and qualified within one week. The first of the codes was the multi-phase multi-component flow code TOUGH version 1, an already existing code, and the other was a geochemistry transport code, STATEQ, that was under development. The way to accomplish qualification for both types of codes is summarized in an easy-to-follow, step-by-step fashion to illustrate how to baseline and qualify such codes through a relatively painless procedure.

  17. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  18. The LEONAR code: a new tool for PSA Level 2 analyses

    International Nuclear Information System (INIS)

    Tourniaire, B; Spindler, B.; Ratel, G.; Seiler, J.M.; Iooss, B.; Marques, M.; Gaudier, F.; Greffier, G.

    2011-01-01

    The LEONAR code, complementary to integral codes such as MAAP or ASTEC, is a new severe accident simulation tool which can easily calculate 1000 late-phase reactor situations within a few hours and provide a statistical evaluation of the situations. LEONAR can be used to analyse the impact on failure probabilities of specific Severe Accident Management measures (for instance, water injection) or design modifications (for instance, pressure vessel flooding or dedicated reactor pit flooding), or to focus the research effort on key phenomena. The starting conditions for LEONAR are a set of core melting situations that are separately calculated with a core degradation code (such as MAAP, which is used by EDF). LEONAR describes the core melt evolution after flooding in the core, the corium relocation in the lower head (under dry and wet conditions), the evolution of corium in the lower head including the effect of flooding, the vessel failure, corium relocation in the reactor cavity, the interaction between corium and basemat concrete, and possible corium spreading in the neighbouring rooms and on the containment floor. Scenario events as well as specific physical model parameters are characterised by a probability density distribution. The probabilistic evaluation is performed by URANIE, which is coupled to the physical calculations. The calculation results are treated in a statistical way in order to provide easily usable information. This tool can be used to identify the main parameters that influence corium coolability in severe accident late phases. It is intended to replace PIRT exercises efficiently. An important feature of such a tool is that it can be used to demonstrate that the probability of basemat failure can be significantly reduced by coupling a number of separate severe accident management measures or design modifications, even though no separate measure is sufficient by itself to avoid the failure. (authors)

  19. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
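
    To make the frame-retrieval property concrete, here is a hedged sketch: scan a sequence for a window of 5 consecutive trinucleotides all belonging to X; the offset of that X motif fixes the reading frame. X_DEMO is a tiny stand-in set, not the circular code identified in genes.

      X_DEMO = {"AAC", "GAC", "CTG", "ATC", "GTT"}   # illustrative only

      def reading_frame(seq, code, window=15):
          """Return the frame (0, 1, or 2) of the first 15-nt X motif, else None."""
          for offset in range(len(seq) - window + 1):
              words = [seq[offset + i : offset + i + 3] for i in range(0, window, 3)]
              if all(w in code for w in words):
                  return offset % 3
          return None

      print(reading_frame("TAACGACCTGATCAACGTTG", X_DEMO))   # -> 1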

  20. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross-correlation (CC) and practical code length to support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase induced intensity noise (PIIN). In this paper, we propose new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC based on the Jordan block matrix, using simple algebraic methods. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. These combinations give the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical system simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and could support long spans with high data rates.

  1. On the progress towards probabilistic basis for deterministic codes

    International Nuclear Information System (INIS)

    Ellyin, F.

    1975-01-01

    Fundamental arguments for a probabilistic basis of codes are presented. A class of code formats is outlined in which explicit statistical measures of uncertainty of design variables are incorporated. The format looks very much like present (deterministic) codes except for having a probabilistic background. An example is provided whereby the design factors are plotted against the safety index, the probability of failure, and the risk of mortality. The safety level of the present codes is also indicated. A decision regarding the new probabilistically based code parameters could thus be made with full knowledge of the implied consequences.

  2. Indoor Off-Body Wireless Communication: Static Beamforming versus Space-Time Coding

    Directory of Open Access Journals (Sweden)

    Patrick Van Torre

    2012-01-01

    The performance of beamforming versus space-time coding using a body-worn textile antenna array is experimentally evaluated for an indoor environment, where a walking rescue worker transmits data in the 2.45 GHz ISM band, relying on a vertical textile four-antenna array integrated into his garment. The two transmission scenarios considered are static beamforming at low-elevation angles and space-time code based transmit diversity. Signals are received by a base station equipped with a horizontal array of four dipole antennas providing spatial receive diversity through maximum-ratio combining. Signal-to-noise ratios, bit error rate characteristics, and signal correlation properties are assessed for both off-body transmission scenarios. Without receiver diversity, the performance of space-time coding is generally better. In case of fourth-order receiver diversity, beamforming is superior in line-of-sight conditions. For non-line-of-sight propagation, the space-time codes perform better as soon as bit error rates are low enough for a reliable data link.

  3. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list decoding algorithm to matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units.

  4. Spectrum unfolding, sensitivity analysis and propagation of uncertainties with the maximum entropy deconvolution code MAXED

    CERN Document Server

    Reginatto, M; Neumann, S

    2002-01-01

    MAXED was developed to apply the maximum entropy principle to the unfolding of neutron spectrometric measurements. The approach followed in MAXED has several features that make it attractive: it permits inclusion of a priori information in a well-defined and mathematically consistent way, the algorithm used to derive the solution spectrum is not ad hoc (it can be justified on the basis of arguments that originate in information theory), and the solution spectrum is a non-negative function that can be written in closed form. This last feature permits the use of standard methods for the sensitivity analysis and propagation of uncertainties of MAXED solution spectra. We illustrate its use with unfoldings of NE 213 scintillation detector measurements of photon calibration spectra, and of multisphere neutron spectrometer measurements of cosmic-ray induced neutrons at high altitude (approx 20 km) in the atmosphere.

  5. Challenges to code status discussions for pediatric patients.

    Directory of Open Access Journals (Sweden)

    Katherine E Kruse

    In the context of serious or life-limiting illness, pediatric patients and their families are faced with difficult decisions surrounding appropriate resuscitation efforts in the event of a cardiopulmonary arrest. Code status orders are one way to inform end-of-life medical decision making. The objectives of this study are to evaluate the extent to which pediatric providers have knowledge of code status options and to explore the association of provider role with (1) knowledge of code status options, (2) perception of timing of code status discussions, (3) perception of family receptivity to code status discussions, and (4) comfort carrying out code status discussions. Nurses, trainees (residents and fellows), and attending physicians from pediatric units where code status discussions typically occur completed a short survey questionnaire regarding their knowledge of code status options and perceptions surrounding code status discussions. The setting was a single-center, quaternary care children's hospital. 203 nurses, 31 trainees, and 29 attending physicians in 4 high-acuity pediatric units responded to the survey (N = 263, 90% response rate). Based on an objective knowledge measure, providers demonstrate poor understanding of available code status options, with only 22% of providers able to enumerate more than two of four available code status options. In contrast, provider groups self-report high levels of familiarity with available code status options, with attending physicians reporting significantly higher levels than nurses and trainees (p = 0.0125). Nurses and attending physicians show significantly different perceptions of code status discussion timing, with the majority of nurses (63.4%) perceiving discussions as occurring "too late" or "much too late" and the majority of attending physicians (55.6%) perceiving the timing as "about right" (p < 0.0001). Attending physicians report significantly higher comfort having code status discussions with families than do nurses or trainees.

  6. A stochastic-deterministic approach for evaluation of uncertainty in the predicted maximum fuel bundle enthalpy in a CANDU postulated LBLOCA event

    Energy Technology Data Exchange (ETDEWEB)

    Serghiuta, D.; Tholammakkil, J.; Shen, W., E-mail: Dumitru.Serghiuta@cnsc-ccsn.gc.ca [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)

    2014-07-01

    A stochastic-deterministic approach based on the representation of uncertainties by subjective probabilities is proposed for the evaluation of bounding values of functional failure probability and the assessment of probabilistic safety margins. The approach is designed for screening and limited independent review verification. Its application is illustrated for a postulated generic CANDU LBLOCA and the evaluation of the possibility distribution function of the maximum bundle enthalpy, considering only the reactor physics part of the LBLOCA power pulse simulation. The computer codes HELIOS and NESTLE-CANDU were used in a stochastic procedure driven by the computer code DAKOTA to simulate the LBLOCA power pulse using combinations of core neutronic characteristics randomly generated from postulated subjective probability distributions with deterministic constraints and fixed transient bundle-wise thermal hydraulic conditions. With this information, a bounding estimate of functional failure probability using the limit for the maximum fuel bundle enthalpy can be derived for use in the evaluation of core damage frequency. (author)

  7. Information preserving coding for multispectral data

    Science.gov (United States)

    Duan, J. R.; Wintz, P. A.

    1973-01-01

    A general formulation of the data compression system is presented. A method of instantaneous expansion of quantization levels, by reserving two codewords in the codebook to perform a fold-over in quantization, is implemented for error-free coding of data with incomplete knowledge of the probability density function. Results for simple DPCM with folding and for an adaptive transform coding technique followed by a DPCM technique are compared using ERTS-1 data.
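
    One plausible reading of the fold-over mechanism is sketched below: prediction errors beyond the quantizer range emit a reserved "fold" codeword that shifts the representable range, so the levels expand instantaneously without enlarging the codebook. This toy encoder is an interpretation under stated assumptions, not the paper's exact scheme; all names and parameters are hypothetical.

      def encode_error(error, step=1.0, levels=6):
          """Quantize a DPCM prediction error. Codes 0..levels-1 are regular
          levels; `levels` and `levels + 1` are the two reserved fold
          codewords that shift the representable range up or down."""
          codes, half = [], levels * step / 2
          while error > half:               # fold over the top of the range
              codes.append(levels)
              error -= levels * step
          while error < -half:              # fold under the bottom
              codes.append(levels + 1)
              error += levels * step
          q = int(round(error / step)) + levels // 2
          codes.append(min(levels - 1, max(0, q)))   # clamp to regular codes
          return codes

      print(encode_error(10.2))             # -> [6, 6, 1]: two folds, then a level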

  8. Effects of Secondary Circuit Modeling on Results of Pressurized Water Reactor Main Steam Line Break Benchmark Calculations with New Coupled Code TRAB-3D/SMABRE

    International Nuclear Information System (INIS)

    Daavittila, Antti; Haemaelaeinen, Anitta; Kyrki-Rajamaeki, Riitta

    2003-01-01

    All three exercises of the Organization for Economic Cooperation and Development/Nuclear Regulatory Commission pressurized water reactor main steam line break (PWR MSLB) benchmark were calculated at VTT, the Technical Research Centre of Finland. For the first exercise, the plant simulation with point-kinetic neutronics, the thermal-hydraulics code SMABRE was used. The second exercise was calculated with the three-dimensional reactor dynamics code TRAB-3D, and the third exercise with the combination TRAB-3D/SMABRE. VTT has over ten years' experience in coupling neutronic and thermal-hydraulic codes, but this benchmark was the first time these two codes, both developed at VTT, were coupled together. The coupled code system is fast and efficient; the total computation time of the 100-s transient in the third exercise was 16 min on a modern UNIX workstation. The results of all the exercises are similar to those of the other participants. In order to demonstrate the effect of secondary circuit modeling on the results, three different cases were calculated. In case 1 there is no phase separation in the steam lines and no flow reversal in the aspirator. In case 2 the flow reversal in the aspirator is allowed, but there is no phase separation in the steam lines. Finally, in case 3 the drift-flux model is used for the phase separation in the steam lines, but the aspirator flow reversal is not allowed. With these two modeling variations, it is possible to cover a remarkably broad range of results. The maximum power level reached after the reactor trip varies from 534 to 904 MW, the range of the time of the power maximum being close to 30 s. Compared to the total calculated transient time of 100 s, the effect of the secondary side modeling is extremely important.

  9. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.

  10. Blind and semi-blind ML detection for space-time block-coded OFDM wireless systems

    KAUST Repository

    Zaib, Alam; Al-Naffouri, Tareq Y.

    2014-01-01

    This paper investigates the joint maximum likelihood (ML) data detection and channel estimation problem for Alamouti space-time block-coded (STBC) orthogonal frequency-division multiplexing (OFDM) wireless systems. Joint ML estimation and data detection is generally considered a hard combinatorial optimization problem. We propose an efficient low-complexity algorithm based on a branch-estimate-bound strategy that renders the exact joint ML solution. However, the computational complexity of the blind algorithm becomes critical at low signal-to-noise ratio (SNR) as the number of OFDM carriers and the constellation size are increased, especially in multiple-antenna systems. To overcome this problem, a semi-blind algorithm based on a new framework for reducing the complexity is proposed, relying on subcarrier reordering and decoding the carriers with different levels of confidence using a suitable reliability criterion. In addition, it is shown that by utilizing the inherent structure of Alamouti coding, either estimation performance improvement or complexity reduction can be achieved. The proposed algorithms can reliably track the wireless Rayleigh fading channel without requiring any channel statistics. Simulation results, presented against perfect coherent detection, demonstrate the effectiveness of the blind and semi-blind algorithms over frequency-selective channels with different fading characteristics.

  11. Modification of the ASME code z-factor for circumferential surface crack in nuclear ferritic pipings

    International Nuclear Information System (INIS)

    Choi, Young Hwan; Chung, Yon Ki; Koh, Wan Young; Lee, Joung Bae

    1996-01-01

    The purpose of this paper is to modify the ASME Code Z-Factor, which is used in the evaluation of circumferential surface cracks in nuclear ferritic pipings. The ASME Code Z-Factor is a load multiplier to compensate the plastic load with the elasto-plastic load. The current ASME Code Z-Factor underestimates the maximum pipe load. In this study, the original SC.TNP method is modified first, because the maximum allowable load it predicts is slightly higher than that measured in experiments. A new Z-Factor is then developed using the modified SC.TNP method. The adequacy of both the modified SC.TNP method and the new Z-Factor is examined using experimental results for circumferential surface cracks in pipings. The results show that (1) the modified SC.TNP method is good for predicting the behavior of circumferential surface cracks in pipings, and (2) the Z-Factor obtained from the modified SC.TNP method predicts the behavior of circumferential surface cracks in ferritic pipings well. 30 refs., 13 figs., 4 tabs. (author)

  12. Validation and application of the HELP code used for design and review of covers of low and intermediate level radioactive waste disposal in near-surface facilities

    International Nuclear Information System (INIS)

    Fan Zhiwen; Gu Cunli; Zhang Jinsheng; Liu Xiuzhen

    1996-01-01

    The authors describe the validation and application of the HELP code, used by the United States Environmental Protection Agency for the design and review of covers for the disposal of low- and intermediate-level radioactive waste in near-surface facilities. The HELP code was validated using data from field tests of moisture movement in the aerated zone performed by the China Institute for Radiation Protection. The results show that the HELP code's simulations are reasonable. The effects of surface layer thickness and surface treatment on the moisture distribution in a cover were simulated with the HELP code under the conditions of south-west China. The simulation results demonstrated that the surface vegetation of a cover plays a very important role in the moisture distribution in the cover, and special attention should be paid to it in cover design. In humid areas, the safety of radioactive waste disposal should take full account of the functions of the chemical barrier. It was recommended that engineering economics be added to future cover research so as to achieve optimization of cover design.

  13. High explosive programmed burn in the FLAG code

    Energy Technology Data Exchange (ETDEWEB)

    Mandell, D.; Burton, D.; Lund, C.

    1998-02-01

    The models used to calculate the programmed-burn high-explosive lighting times in two and three dimensions in the FLAG code are described. FLAG uses an unstructured polyhedral grid. The calculations were compared to exact solutions for a square in two dimensions and for a cube in three dimensions. The maximum error was 3.95 percent in two dimensions and 4.84 percent in three dimensions. The high-explosive lighting time model described has the advantage that only one cell at a time needs to be considered.
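
    A minimal sketch of the line-of-sight programmed-burn rule that such comparisons rest on: each cell lights at t = (distance from the detonation point)/D, which is exact when the charge is convex. The grid, detonation speed, and detonator location below are illustrative, not FLAG data structures.

      import numpy as np

      D = 8.0e3                                   # detonation speed, m/s (illustrative)
      det = np.array([0.0, 0.0])                  # detonator at a square's corner
      xs, ys = np.meshgrid(np.linspace(0, 0.1, 50), np.linspace(0, 0.1, 50))
      centers = np.stack([xs, ys], axis=-1)       # cell-center coordinates

      t_light = np.linalg.norm(centers - det, axis=-1) / D
      print(f"latest lighting time: {t_light.max() * 1e6:.2f} microseconds")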

  14. A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS auto-coding.

    Science.gov (United States)

    Subotin, Michael; Davis, Anthony R

    2016-09-01

    Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding.
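
    A hedged sketch of the runtime rescoring idea (the blending rule, thresholds, codes, and numbers below are all hypothetical; the paper trains a statistical model for the conditional probabilities rather than using a lookup table): pull each low-confidence code toward its average co-occurrence propensity given the codes the primary auto-coder is already confident about.

      def rescore(scores, cooc, threshold=0.7, iterations=2, weight=0.5):
          """Iteratively blend each code's confidence with its co-occurrence
          propensity given the currently confident (anchor) codes."""
          scores = dict(scores)
          for _ in range(iterations):
              anchors = [c for c, s in scores.items() if s >= threshold]
              if not anchors:
                  break
              for code in scores:
                  if code in anchors:
                      continue
                  propensity = sum(cooc.get((code, a), 0.5) for a in anchors) / len(anchors)
                  scores[code] = (1 - weight) * scores[code] + weight * propensity
          return scores

      primary = {"0DB60ZZ": 0.90, "0DB64Z3": 0.60, "0FT44ZZ": 0.55}   # hypothetical
      cooc = {("0DB64Z3", "0DB60ZZ"): 0.05}    # these two rarely co-occur
      print(rescore(primary, cooc))            # 0DB64Z3 is damped; 0FT44ZZ barely moves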

  15. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  16. Amplitude-to-code converter for photomultipliers operating at high loadings

    International Nuclear Information System (INIS)

    Arkhangel'skij, B.V.; Evgrafov, G.N.; Pishchal'nikov, Yu.M.; Shuvalov, R.S.

    1982-01-01

    An 11-bit amplitude-to-code converter intended for the analysis of photomultiplier pulses under high loadings is described. To decrease the volume of digital electronics in the converter, an analog memory on capacitors is provided. A well-known bridge circuit with majority-carrier diodes is selected as the gating circuit. The gate is controlled by a switching circuit based on fast-response transistors with a cutoff frequency of 1.2-1.5 GHz. The converter's main characteristics are: maximum output signal amplitude of -1.5 V; minimum pulse selection duration of 10 ns; maximum number of counts of 1400 at U(input) = -1.0 V and t(selection) = 50 ns; integral nonlinearity of ±0.1%; conversion temperature instability of 0.2%/deg C in the temperature range of +10 to +40 deg C; maximum data storage time of 300 ms; conversion coefficient instability of 0.42 counts; and 12 channels in a single CAMAC block.

  17. Certification plan for safety and PRA codes

    International Nuclear Information System (INIS)

    Toffer, H.; Crowe, R.D.; Ades, M.J.

    1990-05-01

    A certification plan for computer codes used in Safety Analyses and Probabilistic Risk Assessment (PRA) for the operation of the Savannah River Site (SRS) reactors has been prepared. An action matrix, checklists, and a time schedule have been included in the plan. These items identify what is required to achieve certification of the codes. A list of Safety Analysis and Probabilistic Risk Assessment (SA&PRA) computer codes covered by the certification plan has been assembled. A description of each of the codes was provided in Reference 4. The action matrix for the configuration control plan identifies code-specific requirements that need to be met to achieve the certification plan's objectives. The checklist covers the specific procedures that are required to support the configuration control effort and supplement the software life cycle procedures based on QAP 20-1 (Reference 7). A qualification checklist for users establishes the minimum prerequisites and training for achieving levels of proficiency in using configuration controlled codes for critical parameter calculations.

  18. A bar coding system for environmental projects

    International Nuclear Information System (INIS)

    Barber, R.B.; Hunt, B.J.; Burgess, G.M.

    1988-01-01

    This paper presents BeCode systems, a bar coding system which provides both nuclear and commercial clients with a data capture and custody management program that is accurate, timely, and beneficial to all levels of project operations. Using bar code identifiers is an essentially paperless and error-free method that delivers data more efficiently through its menu-card-driven structure, speeding the collection of essential data for uploading to a compatible device. The result is real-time information for operator analysis, management review, audits, planning, scheduling, and cost control.

  19. Rotated Walsh-Hadamard Spreading with Robust Channel Estimation for a Coded MC-CDMA System

    Directory of Open Access Journals (Sweden)

    Raulefs Ronald

    2004-01-01

    We investigate rotated Walsh-Hadamard spreading matrices for a broadband MC-CDMA system with robust channel estimation in the synchronous downlink. The similarities between rotated spreading and signal space diversity are outlined. In a multiuser MC-CDMA system, possible performance improvements are based on the chosen detector, the channel code, and its Hamming distance. By applying rotated spreading in comparison to a standard Walsh-Hadamard spreading code, a higher throughput can be achieved. As combining the channel code and the spreading code forms a concatenated code, the overall minimum Hamming distance of the concatenated code increases. This asymptotically results in an improvement of the bit error rate for high signal-to-noise ratio. Higher convolutional channel code rates are mostly generated by puncturing good low-rate channel codes. The overall Hamming distance decreases significantly for the punctured channel codes. Higher channel code rates are favorable for MC-CDMA, as MC-CDMA utilizes diversity more efficiently compared to pure OFDMA. The application of rotated spreading in an MC-CDMA system allows exploiting diversity even further. We demonstrate that the rotated spreading gain is still present for a robust pilot-aided channel estimator. In a well-designed system, rotated spreading extends the performance by using a maximum likelihood detector with robust channel estimation at the receiver by about 1 dB.
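
    A hedged construction sketch of a rotated Walsh-Hadamard spreading matrix (the rotation angle and the block-diagonal Givens structure are illustrative choices, not necessarily those of the paper): rotating the orthonormal spreading matrix preserves orthogonality while spreading each symbol over more signal-space coordinates.

      import numpy as np
      from scipy.linalg import hadamard

      L = 4                                     # spreading factor (power of 2)
      W = hadamard(L) / np.sqrt(L)              # orthonormal Walsh-Hadamard matrix
      theta = np.pi / 8                         # illustrative rotation angle
      G = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
      W_rot = np.kron(np.eye(L // 2), G) @ W    # rotated spreading matrix

      symbols = np.array([1, -1, 1, 1])         # one BPSK symbol per user
      chips = W_rot.T @ symbols                 # transmitted chip vector
      print(np.round(W_rot @ W_rot.T, 12))      # identity: orthogonality preserved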

  20. Tardos fingerprinting codes in the combined digit model

    NARCIS (Netherlands)

    Skoric, B.; Katzenbeisser, S.; Schaathun, H.G.; Celik, M.U.

    2009-01-01

    We introduce a new attack model for collusion-secure codes, called the combined digit model, which represents signal processing attacks against the underlying watermarking level better than existing models. In this paper, we analyze the performance of two variants of the Tardos code and show that

  1. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  2. Girls Who Code Club | College of Engineering & Applied Science

    Science.gov (United States)

    Join UWM's 2017-18 Girls Who Code Club. Our Girls Who Code Club will resume in Spring 2018. The Fall 2017 Level 1A and 2A students

  3. Development and application of the BOA code in Spain

    International Nuclear Information System (INIS)

    Tortuero Lopez, C.; Doncel Gutierrez, N.; Culebras, F.

    2012-01-01

    The BOA code makes it possible to quantitatively establish the level of risk of Axial Offset Anomaly and of increased crud deposition on the basis of the specific conditions of each case. For this reason, the code is parameterized according to the individual characteristics of each plant. This paper summarizes the results obtained in applying the code, as well as its future prospects.

  4. Improvement of the detector resolution in X-ray spectrometry by using the maximum entropy method

    International Nuclear Information System (INIS)

    Fernández, Jorge E.; Scot, Viviana; Giulio, Eugenio Di; Sabbatucci, Lorenzo

    2015-01-01

    In every X-ray spectroscopy measurement the influence of the detection system causes loss of information. Different mechanisms contribute to form the so-called detector response function (DRF): the detector efficiency, the escape of photons as a consequence of photoelectric or scattering interactions, the spectrum smearing due to the energy resolution, and, in solid-state detectors (SSD), the charge collection artifacts. To recover the original spectrum, it is necessary to remove the detector influence by solving the so-called inverse problem. The maximum entropy unfolding technique solves this problem by imposing a set of constraints, taking advantage of the known a priori information and preserving the positive-defined character of the X-ray spectrum. This method has been included in the tool UMESTRAT (Unfolding Maximum Entropy STRATegy), which adopts a semi-automatic strategy to solve the unfolding problem based on a suitable combination of the codes MAXED and GRAVEL, developed at PTB. In the past UMESTRAT proved capable of resolving characteristic peaks that appeared overlapped in Si SSD measurements, giving good qualitative results. In order to obtain quantitative results, UMESTRAT has been modified to include the additional constraint of the total number of photons in the spectrum, which can be easily determined by inverting the diagonal efficiency matrix. The features of the improved code are illustrated with some examples of unfolding from three commonly used SSDs: Si, Ge, and CdTe. The quantitative unfolding can be considered a software improvement of the detector resolution. - Highlights: • Radiation detection introduces distortions in X- and Gamma-ray spectrum measurements. • UMESTRAT is a graphical tool to unfold X- and Gamma-ray spectra. • UMESTRAT uses the maximum entropy method. • UMESTRAT's new version produces unfolded spectra with quantitative meaning. • UMESTRAT is a software tool to improve the detector resolution.
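
    A hedged toy version of the constrained unfolding described above (the response matrix, measurement, and default spectrum are stand-ins, not MAXED/UMESTRAT data): maximize the entropy of the spectrum relative to a default, subject to reproducing the measured counts and, as in the improved code, the total number of photons.

      import numpy as np
      from scipy.optimize import minimize

      R = np.array([[0.8, 0.2, 0.0],            # toy 3-channel response matrix
                    [0.1, 0.7, 0.2],
                    [0.0, 0.1, 0.8]])
      f_true = np.array([100.0, 50.0, 30.0])
      m = R @ f_true                            # noiseless toy measurement
      f0 = np.full(3, f_true.sum() / 3)         # flat default spectrum

      def neg_entropy(f):                       # minimize negative relative entropy
          return float(np.sum(f * np.log(f / f0)))

      cons = [{"type": "eq", "fun": lambda f: R @ f - m},
              {"type": "eq", "fun": lambda f: f.sum() - f_true.sum()}]
      res = minimize(neg_entropy, f0, bounds=[(1e-9, None)] * 3, constraints=cons)
      print(np.round(res.x, 2))                 # recovers f_true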

  5. GRS Method for Uncertainty and Sensitivity Evaluation of Code Results and Applications

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    In recent years, there has been increasing interest in computational reactor safety analysis in replacing conservative evaluation model calculations by best-estimate calculations supplemented by uncertainty analysis of the code results. The evaluation of the margin to acceptance criteria, for example, the maximum fuel rod clad temperature, should be based on the upper limit of the calculated uncertainty range. Uncertainty analysis is needed if useful conclusions are to be obtained from best-estimate thermal-hydraulic code calculations; otherwise, single values of unknown accuracy would be presented for comparison with regulatory acceptance limits. Methods have been developed and presented to quantify the uncertainty of computer code results. The basic techniques proposed by GRS are presented together with applications to a large break loss of coolant accident on a reference reactor as well as to an experiment simulating containment behaviour.
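
    The run counts used in such statistical uncertainty analyses follow from Wilks' order-statistics formula, on which the GRS method builds. A minimal check of the familiar one-sided 95%/95% tolerance limit:

      def wilks_one_sided(gamma=0.95, beta=0.95):
          """Smallest sample size n such that the largest of n code runs bounds
          the gamma-quantile of the output with confidence beta."""
          n = 1
          while 1 - gamma**n < beta:
              n += 1
          return n

      print(wilks_one_sided())   # -> 59 runs for a one-sided 95%/95% statement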

  6. Reduction and resource recycling of high-level radioactive wastes through nuclear transmutation with PHITS code

    International Nuclear Information System (INIS)

    Fujita, Reiko

    2017-01-01

    In the ImPACT program of the Cabinet Office, projects are underway to reduce long-lived fission products (LLFP) contained in high-level radioactive waste through nuclear transmutation, or to recycle and utilize useful nuclear species. This paper outlines the program and describes recent achievements. The program consists of five projects: (1) separation/recovery technology, (2) acquisition of nuclear transmutation data, (3) nuclear reaction theory, models and simulation, (4) novel nuclear reaction control and development of elemental technology, and (5) discussions on the process concept. Project (1) develops a technology for dissolving vitrified waste, a technology for recovering LLFP from high-level waste liquid, and a laser-based technology for separating odd- and even-mass isotopes. Project (2) acquires new nuclear reaction data for Pd-107, Zr-93, Se-79 and Cs-135 using RIKEN's RIBF or JAEA's J-PARC. Project (3) improves nuclear reaction theory and structure models using the data measured in (2), improves and upgrades the nuclear reaction simulation code PHITS, and proposes promising nuclear transmutation pathways. Project (4) develops an accelerator, and its elemental technology, to realize the proposed transmutation routes. Project (5) performs the conceptual design of the process realizing (1) to (4), and constructs scenarios for reducing and utilizing high-level radioactive waste based on this design. (A.O.)

  7. Does a code make a difference – assessing the English code of practice on international recruitment

    Directory of Open Access Journals (Sweden)

    Mensah Kwadwo

    2009-04-01

    Full Text Available Abstract Background This paper draws from research completed in 2007 to assess the effect of the Department of Health, England, Code of Practice for the international recruitment of health professionals. The Department of Health in England introduced a Code of Practice for international recruitment for National Health Service employers in 2001. The Code required National Health Service employers not to actively recruit from low-income countries unless there was a government-to-government agreement. The Code was updated in 2004. Methods The paper examines trends in the inflow of health professionals to the United Kingdom from other countries, using professional registration data and data on applications for work permits. The paper also provides more detailed information from two country case studies in Ghana and Kenya. Results Available data show a considerable reduction in the inflow of health professionals from the peak years up to 2002 (for nurses) and 2004 (for doctors). There are multiple causes for this decline, including declining demand in the United Kingdom. In Ghana and Kenya, active recruitment from the United Kingdom was perceived to have reduced significantly, but it is not clear to what extent the Code was influential in this, or whether other factors, such as a lack of vacancies in the United Kingdom, explain it. Conclusion Active international recruitment of health professionals was an explicit policy intervention by the Department of Health in England, as one key element in achieving rapid staffing growth, particularly in the period 2000 to 2005, but the level of international recruitment has dropped significantly since early 2006. Regulatory and education changes in the United Kingdom in recent years have also made international entry more difficult. The potential to assess the effect of the Code in England is constrained by the limitations in available databases. This is a crucial lesson for those considering a similar code.

  8. GRAP, Gamma-Ray Level-Scheme Assignment

    International Nuclear Information System (INIS)

    Franklyn, C.B.

    2002-01-01

    1 - Description of program or function: An interactive program for allocating gamma-rays to an energy level scheme. Procedure allows for searching for new candidate levels of the form: 1) L1 + G(A) + G(B) = L2; 2) G(A) + G(B) = G(C); 3) G(A) + G(B) = C (C is a user-defined number); 4) L1 + G(A) + G(B) + G(C) = L2. Procedure indicates intensity balance of feed and decay of each energy level. Provides for optimization of a level energy (and associated error). Overall procedure allows for pre-defining of certain gamma-rays as belonging to particular regions of the level scheme, for example, high energy transition levels, or due to beta-decay. 2 - Method of solution: Search for cases in which the energy difference between two energy levels is equal to a gamma-ray energy within user-defined limits. 3 - Restrictions on the complexity of the problem: Maximum number of gamma-rays: 999; Maximum gamma ray energy: 32000 units; Minimum gamma ray energy: 10 units; Maximum gamma-ray intensity: 32000 units; Minimum gamma-ray intensity: 0.001 units; Maximum number of levels: 255; Maximum level energy: 32000 units; Minimum level energy: 10 units; Maximum error on energy, intensity: 32 units; Minimum error on energy, intensity: 0.001 units; Maximum number of combinations: ca. 6400; Maximum number of gamma-ray types: 127
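
    A toy version of the search in item 2, with invented level and gamma-ray energies (illustrative only, not the GRAP implementation), could look as follows:

      from itertools import combinations

      # Invented level and gamma-ray energies (keV); tol is the matching window.
      levels = [0.0, 121.8, 244.7, 344.3]
      gammas = [121.8, 122.9, 222.5, 344.3]
      tol = 0.5

      # Case 1: L1 + G(A) + G(B) = L2 (a two-gamma cascade bridges two levels).
      for i, l1 in enumerate(levels):
          for l2 in levels[i + 1:]:
              for ga, gb in combinations(gammas, 2):
                  if abs((l2 - l1) - (ga + gb)) <= tol:
                      print(f"case 1: {l1} + {ga} + {gb} = {l2}")

      # Case 2: G(A) + G(B) = G(C) (a crossover transition).
      for ga, gb in combinations(gammas, 2):
          for gc in gammas:
              if gc not in (ga, gb) and abs(ga + gb - gc) <= tol:
                  print(f"case 2: {ga} + {gb} = {gc}")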

  9. Changes in the Global Hydrological Cycle: Lessons from Modeling Lake Levels at the Last Glacial Maximum

    Science.gov (United States)

    Lowry, D. P.; Morrill, C.

    2011-12-01

    Geologic evidence shows that lake levels in currently arid regions were higher and lakes in currently wet regions were lower during the Last Glacial Maximum (LGM). Current hypotheses used to explain these lake level changes include the thermodynamic hypothesis, in which decreased tropospheric water vapor coupled with patterns of convergence and divergence caused dry areas to become more wet and vice versa, the dynamic hypothesis, in which shifts in the jet stream and Inter-Tropical Convergence Zone (ITCZ) altered precipitation patterns, and the evaporation hypothesis, in which lake expansions are attributed to reduced evaporation in a colder climate. This modeling study uses the output of four climate models participating in phase 2 of the Paleoclimate Modeling Intercomparison Project (PMIP2) as input to a lake energy-balance model, in order to test the accuracy of the models and understand the causes of lake level changes. We model five lakes: the Great Basin lakes, USA; Lake Petén Itzá, Guatemala; Lake Caçó, northern Brazil; Lake Tauca (Titicaca), Bolivia and Peru; and Lake Cari-Laufquen, Argentina. These lakes create a transect from the drylands of North America through the tropics to the drylands of South America. The models accurately recreate LGM conditions in 14 out of 20 simulations, with the Great Basin lakes being the most robust and Lake Caçó being the least robust, due to model biases in portraying the ITCZ over South America. An analysis of the atmospheric moisture budget from one of the climate models shows that thermodynamic processes contribute most significantly to precipitation changes over the Great Basin, while dynamic processes are most significant for the other lakes. Lake Cari-Laufquen shows a lake expansion that is most likely attributed to reduced evaporation rather than changes in regional precipitation, suggesting that lake levels alone may not be the best indicator of how much precipitation this region received.
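
    For intuition on the evaporation hypothesis, a minimal closed-basin water balance (not the authors' lake energy-balance model; all numbers are invented) shows how reduced evaporation alone expands a lake's equilibrium area:

      # A closed lake is in equilibrium when on-lake precipitation plus catchment
      # runoff balances lake evaporation: P*A_lake + R*A_catch = E*A_lake.
      def equilibrium_lake_area(P, E, runoff, catchment_area):
          """Equilibrium lake area (km^2); P, E, runoff in m/yr, areas in km^2."""
          if E <= P:
              raise ValueError("open basin: the lake would overflow")
          return runoff * catchment_area / (E - P)

      modern = equilibrium_lake_area(P=0.25, E=1.20, runoff=0.03, catchment_area=5000)
      lgm    = equilibrium_lake_area(P=0.25, E=0.80, runoff=0.03, catchment_area=5000)
      print(f"modern: {modern:.0f} km^2, LGM (reduced evaporation): {lgm:.0f} km^2")

    With precipitation and runoff held fixed, lowering evaporation from 1.2 to 0.8 m/yr nearly doubles the equilibrium area, which is the mechanism invoked for Lake Cari-Laufquen.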

  10. Job coding (PCS 2003): feedback from a study conducted in an Occupational Health Service

    Science.gov (United States)

    Henrotin, Jean-Bernard; Vaissière, Monique; Etaix, Maryline; Malard, Stéphane; Dziurla, Mathieu; Lafon, Dominique

    2016-10-19

    Aim: To examine the quality of manual job coding carried out by occupational health teams with access to a software application that provides assistance in job and business sector coding (CAPS). Methods: Data from a study conducted in an Occupational Health Service were used to examine the first-level coding of 1,495 jobs by occupational health teams according to the French job classification “PCS - Professions and socio-professional categories” (INSEE, 2003 version). A second level of coding was also performed by an experienced coder, and the first- and second-level codes were compared. Agreement between the two codings was studied using the kappa coefficient (κ) and frequencies were compared by Chi2 tests. Results: Missing data or incorrect codes were observed for 14.5% of social groups (1 digit) and 25.7% of job codes (4 digits). While agreement between the first two levels of PCS 2003 appeared to be satisfactory (κ=0.73 and κ=0.75), imbalances in reassignment flows were nonetheless noted. The divergent job code rate was 48.2%. Variation in the frequency of socio-occupational variables was as high as 8.6% after correcting for missing data and divergent codes. Conclusions: Compared with other studies, the use of the CAPS tool appeared to provide effective coding assistance. However, our results indicate that job coding based on PCS 2003 should be conducted using ancillary data by personnel trained in the use of this tool.
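
    Agreement figures such as the κ values above are typically computed as Cohen's kappa, which corrects raw agreement for chance; a self-contained sketch on toy labels:

      from collections import Counter

      def cohens_kappa(a, b):
          """Cohen's kappa for two coders' paired category assignments."""
          n = len(a)
          p_obs = sum(x == y for x, y in zip(a, b)) / n           # observed agreement
          ca, cb = Counter(a), Counter(b)
          p_exp = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2  # chance agreement
          return (p_obs - p_exp) / (1 - p_exp)

      coder1 = list("1122334455")   # toy first-level codes
      coder2 = list("1122334554")   # toy second-level codes
      print(round(cohens_kappa(coder1, coder2), 2))   # 0.75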

  11. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
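
    The origin of such noise bias can be seen in a toy Monte Carlo (not the authors' galaxy-model likelihood): an ellipticity-like ratio of noisy quadrupole moments is a nonlinear function of the data, so its mean acquires a bias of order 1/SNR², even though the moments themselves are unbiased.

      import numpy as np

      rng = np.random.default_rng(1)
      Qxx, Qyy = 2.0, 1.0
      e_true = (Qxx - Qyy) / (Qxx + Qyy)

      for snr in (100, 20, 10):
          sig = (Qxx + Qyy) / snr                      # moment noise for this SNR
          qx = Qxx + rng.normal(0, sig, 200_000)
          qy = Qyy + rng.normal(0, sig, 200_000)
          e_hat = (qx - qy) / (qx + qy)                # nonlinear point estimate
          print(f"SNR {snr:3d}: mean bias = {e_hat.mean() - e_true:+.4f}")

    The bias grows roughly as 2/SNR² here, illustrating why point estimators need the kind of correction terms derived in the paper at mild signal-to-noise ratio.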

  12. Development of computing code system for level 3 PSA

    International Nuclear Information System (INIS)

    Jeong, Jong Tae; Yu, Dong Han; Kim, Seung Hwan.

    1997-07-01

    Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated through wind tunnel experiments. These results will give physical insight into the development of a new dispersion model. Because there are some discrepancies between the results from the Gaussian plume model and those from field tests, the effect of terrain on atmospheric dispersion was investigated using the CTDMPLUS code. Through this study we find that a model which can treat terrain effects is essential for the atmospheric dispersion of radioactive materials and that the CTDMPLUS model can be used as a useful tool. It is suggested that model refinement and experimental studies should be pursued through continued effort. A health effect assessment near the Yonggwang site was performed using IPE (individual plant examination) results and site data. The health effect assessment is an important part of the consequence analysis of a nuclear power plant site. MACCS was used in the assessment. Based on the calculation of the CCDF for each risk measure, it is shown that the CCDF has a shallow slope, and thus a wide probability distribution, in the cases of early fatality, early injury, total early fatality risk, and total weighted early fatality risk. In the cases of cancer fatality and population dose within 48 km and 80 km, the CCDF curve has a steep slope and thus a narrow probability distribution. Methodologies and models necessary for consequence analysis of a severe accident in a nuclear power plant were established, and a program for consequence analysis was developed. The models include atmospheric transport and diffusion, calculation of exposure doses for various pathways, and assessment of health effects and associated risks. Finally, the economic impact resulting from an accident in a nuclear power plant was investigated. In this study, estimation models for each cost term considered in the economic consequence analysis were developed.

  14. A Realistic Model under which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, H.; van der Gulik, P.T.S.; Klau, G.W.; Schaffner, C.; Speijer, D.; Stougie, L.

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code.

  15. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  16. Use of a commercial heat transfer code to predict horizontally oriented spent fuel rod temperatures

    International Nuclear Information System (INIS)

    Wix, S.D.; Koski, J.A.

    1992-01-01

    Radioactive spent fuel assemblies are a source of hazardous waste that will have to be dealt with in the near future. It is anticipated that the spent fuel assemblies will be transported to disposal sites in spent fuel transportation casks. In order to design a reliable and safe transportation cask, the maximum cladding temperature of the spent fuel rod arrays must be calculated. The maximum rod temperature is a limiting factor in the amount of spent fuel that can be loaded in a transportation cask. The scope of this work is to demonstrate that reasonable and conservative spent fuel rod temperature predictions can be made using commercially available thermal analysis codes. The demonstration is accomplished by a comparison between numerical temperature predictions, with a commercially available thermal analysis code, and experimental temperature data for electrical rod heaters simulating a horizontally oriented spent fuel rod bundle

  17. Verification of computer code FPRETAIN with respect to RIA data from SPERT and PBF experiments

    International Nuclear Information System (INIS)

    Heo, Young-Ho; Yanagisawa, Kazuaki.

    1992-12-01

    This report presents comparisons between calculated and measured fuel rod behavior, and an analysis of stress, for preirradiated LWR-type fuel rods under reactivity initiated accident (RIA) conditions. For the calculations, the FPRETAIN computer code, which can simulate fuel behavior under RIA conditions at extended burnup, was used. For the experimental data, results obtained from the Special Power Excursion Reactor Test (SPERT) and the Power Burst Facility (PBF) tests were used. The comparisons showed that the FPRETAIN code predicted the trends of fuel rod behavior during an RIA well. From the stress analysis, it was found that the maximum hoop stress of the cladding was not proportional to the energy deposition of the fuel rod. The calculated maximum cladding hoop stress of failed fuel at high burnup was not lower than that of intact fresh or low-burnup fuel. (author)

  18. Novel methods for estimating lithium-ion battery state of energy and maximum available energy

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhu, Jianguo; Wang, Guoxiu; He, Tingting; Wei, Yiying

    2016-01-01

    Highlights: • Study on temperature, current, aging dependencies of maximum available energy. • Study on the various factors dependencies of relationships between SOE and SOC. • A quantitative relationship between SOE and SOC is proposed for SOE estimation. • Estimate maximum available energy by means of moving-window energy-integral. • The robustness and feasibility of the proposed approaches are systematic evaluated. - Abstract: The battery state of energy (SOE) allows a direct determination of the ratio between the remaining and maximum available energy of a battery, which is critical for energy optimization and management in energy storage systems. In this paper, the ambient temperature, battery discharge/charge current rate and cell aging level dependencies of battery maximum available energy and SOE are comprehensively analyzed. An explicit quantitative relationship between SOE and state of charge (SOC) for LiMn_2O_4 battery cells is proposed for SOE estimation, and a moving-window energy-integral technique is incorporated to estimate battery maximum available energy. Experimental results show that the proposed approaches can estimate battery maximum available energy and SOE with high precision. The robustness of the proposed approaches against various operation conditions and cell aging levels is systematically evaluated.
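
    A minimal sketch of the energy bookkeeping behind SOE (illustrative, with an invented constant-current discharge, not the authors' LiMn2O4 model): energy is obtained by integrating instantaneous power v(t)·i(t), and SOE is the remaining fraction of the maximum available energy.

      import numpy as np

      dt = 1.0                                  # s
      t = np.arange(0, 3600, dt)
      i = np.full_like(t, 2.0)                  # A, toy constant-current discharge
      v = 4.1 - 0.5 * t / t[-1]                 # V, toy linear voltage fade

      p = v * i                                 # W
      e_used = np.cumsum(p) * dt / 3600.0       # Wh discharged so far
      e_max = e_used[-1]                        # maximum available energy this cycle
      soe = 1.0 - e_used / e_max
      print(f"E_max = {e_max:.1f} Wh, SOE at mid-discharge = {soe[len(t)//2]:.3f}")

    Note that SOE at 50% of the discharge time is below 0.5, because power is higher early in the discharge; this offset between SOE and SOC is exactly the kind of relationship the paper quantifies.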

  19. Accurate Maximum Power Tracking in Photovoltaic Systems Affected by Partial Shading

    Directory of Open Access Journals (Sweden)

    Pierluigi Guerriero

    2015-01-01

    Full Text Available A maximum power tracking algorithm exploiting operating-point information gained on individual solar panels is presented. The proposed algorithm recognizes the presence of multiple local maxima in the power-voltage curve of a shaded solar field and evaluates the coordinates of the absolute maximum. The effectiveness of the proposed approach is evidenced by means of circuit-level simulations and experimental results. Experiments showed that, in comparison with a standard perturb-and-observe algorithm, we achieve faster convergence in normal operating conditions (when the solar field is uniformly illuminated) and accurately locate the absolute maximum power point under partial shading, thus avoiding convergence on local maxima.
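
    A sketch of the underlying two-stage idea, a coarse global scan followed by a local search, on a synthetic two-hump P-V curve (the published algorithm instead exploits per-panel operating-point data):

      import numpy as np

      v = np.linspace(0.0, 40.0, 400)
      # Synthetic partially shaded curve with two local maxima (W).
      p = 60 * np.exp(-((v - 14) / 5) ** 2) + 80 * np.exp(-((v - 31) / 4) ** 2)

      coarse = v[::20]                              # coarse sweep of the curve
      pc = np.interp(coarse, v, p)
      v0 = coarse[np.argmax(pc)]                    # hump holding the global maximum

      window = (v > v0 - 3) & (v < v0 + 3)          # fine search near that hump
      v_mpp = v[window][np.argmax(p[window])]
      print(f"global MPP near {v_mpp:.2f} V, {np.interp(v_mpp, v, p):.1f} W")

    A plain perturb-and-observe tracker started near 14 V would lock onto the 60 W local maximum; the coarse scan is what avoids that trap.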

  20. Advancing methods for reliably assessing motivational interviewing fidelity using the motivational interviewing skills code.

    Science.gov (United States)

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C

    2015-02-01

    The current paper presents novel methods for collecting MISC data and accurately assessing the reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates based on session tallies. Session-level reliability was generally higher than reliability based on utterance-level codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Concatenated quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.; Laflamme, R.

    1996-07-01

    One main problem for the future of practical quantum computing is to stabilize the computation against unwanted interactions with the environment and imperfections in the applied operations. Existing proposals for quantum memories and quantum channels require gates with asymptotically zero error to store or transmit an input quantum state for arbitrarily long times or distances with fixed error. This report gives a method which has the property that to store or transmit a qubit with maximum error ε requires gates with errors at most cε and storage or channel elements with error at most ε, independent of how long we wish to store the state or how far we wish to transmit it. The method relies on using concatenated quantum codes and hierarchically implemented recovery operations. The overhead of the method is polynomial in the time of storage or the distance of the transmission. Rigorous and heuristic lower bounds for the constant c are given.
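
    The characteristic doubly exponential error suppression of concatenation can be illustrated with the standard threshold-style scaling (a generic sketch with assumed rates, not the specific bounds of this report): if one encoding level maps the error rate as p -> p²/p_th, then k levels give p_th·(p/p_th)^(2^k).

      p_th = 1e-2          # assumed per-gate threshold
      p = 1e-3             # assumed physical error rate

      for k in range(5):
          p_logical = p_th * (p / p_th) ** (2 ** k)
          print(f"level {k}: logical error ~ {p_logical:.0e}")

    Because the overhead per level grows only polynomially while the error falls doubly exponentially, a fixed target error can be met with modest concatenation depth.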

  2. Theoretical Atomic Physics code development IV: LINES, A code for computing atomic line spectra

    International Nuclear Information System (INIS)

    Abdallah, J. Jr.; Clark, R.E.H.

    1988-12-01

    A new computer program, LINES, has been developed for simulating atomic line emission and absorption spectra using the accurate fine structure energy levels and transition strengths calculated by the (CATS) Cowan Atomic Structure code. Population distributions for the ion stages are obtained in LINES by using the Local Thermodynamic Equilibrium (LTE) model. LINES is also useful for displaying the pertinent atomic data generated by CATS. This report describes the use of LINES. Both CATS and LINES are part of the Theoretical Atomic PhysicS (TAPS) code development effort at Los Alamos. 11 refs., 9 figs., 1 tab

  3. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates
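
    For a multivariate normal model, the maximum likelihood combination of correlated estimates is the inverse-covariance-weighted mean, mu = (1' C^-1 x)/(1' C^-1 1); a minimal sketch with invented eigenvalue estimates and covariance:

      import numpy as np

      x = np.array([1.002, 0.998, 1.004])            # correlated k_eff estimates
      C = np.array([[4.0, 2.0, 1.0],
                    [2.0, 4.0, 2.0],
                    [1.0, 2.0, 4.0]]) * 1e-6         # covariance of the estimates

      w = np.linalg.solve(C, np.ones(3))             # C^-1 1
      mu = w @ x / w.sum()                           # minimum-variance combination
      var = 1.0 / w.sum()
      print(f"combined = {mu:.5f} +/- {np.sqrt(var):.5f}")
      print(f"simple-average variance = {np.ones(3) @ C @ np.ones(3) / 9:.2e}")

    The variance reduction relative to the simple average is exactly the kind of gain the paper reports, provided the sample covariance is a good stand-in for the population covariance.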

  4. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes, together with their primitive counterparts, and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  5. Stand-alone front-end system for high- frequency, high-frame-rate coded excitation ultrasonic imaging.

    Science.gov (United States)

    Park, Jinhyoung; Hu, Changhong; Shung, K Kirk

    2011-12-01

    A stand-alone front-end system for high-frequency coded excitation imaging was implemented to achieve a wider dynamic range. The system included an arbitrary waveform amplifier, an arbitrary waveform generator, an analog receiver, a motor position interpreter, a motor controller and power supplies. The digitized arbitrary waveforms at a sampling rate of 150 MHz could be programmed and converted to an analog signal. The pulse was subsequently amplified to excite an ultrasound transducer, and the maximum output voltage level achieved was 120 V(pp). The bandwidth of the arbitrary waveform amplifier was from 1 to 70 MHz. The noise figure of the preamplifier was less than 7.7 dB and the bandwidth was 95 MHz. Phantoms and biological tissues were imaged at a frame rate as high as 68 frames per second (fps) to evaluate the performance of the system. During the measurement, 40-MHz lithium niobate (LiNbO(3)) single-element lightweight (<0.28 g) transducers were utilized. The wire target measurement showed that the -6-dB axial resolution of a chirp-coded excitation was 50 μm and lateral resolution was 120 μm. The echo signal-to-noise ratios were found to be 54 and 65 dB for the short burst and coded excitation, respectively. The contrast resolution in a sphere phantom study was estimated to be 24 dB for the chirp-coded excitation and 15 dB for the short burst modes. In an in vivo study, zebrafish and mouse hearts were imaged. Boundaries of the zebrafish heart in the image could be differentiated because of the low-noise operation of the implemented system. In mouse heart images, valves and chambers could be readily visualized with the coded excitation.
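
    The SNR advantage of coded excitation comes from pulse compression: a long chirp spreads energy over time, and matched filtering recovers axial resolution while gaining roughly 10·log10(time-bandwidth product) dB over a short burst of the same peak amplitude. A minimal sketch (invented sample rate, chirp band and noise level, not the system firmware):

      import numpy as np
      from scipy.signal import chirp

      fs = 400e6                                   # sample rate, Hz (assumed)
      t = np.arange(0, 2e-6, 1 / fs)               # 2-us transmit window
      tx = chirp(t, f0=30e6, t1=t[-1], f1=50e6)    # linear FM around 40 MHz

      rng = np.random.default_rng(0)
      echo = np.zeros(4000)
      echo[1000:1000 + tx.size] = 0.05 * tx        # weak reflector buried in noise
      echo += rng.normal(0, 0.05, echo.size)       # receiver noise

      compressed = np.correlate(echo, tx, mode="same")   # matched filter
      peak = np.abs(compressed).max()
      noise = compressed[:500].std()                      # noise-only region
      print(f"post-compression SNR ~ {20 * np.log10(peak / noise):.1f} dB")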

  6. Estimating safe maximum levels of vitamins and minerals in fortified foods and food supplements.

    Science.gov (United States)

    Flynn, Albert; Kehoe, Laura; Hennessy, Áine; Walton, Janette

    2017-12-01

    To show how safe maximum levels (SML) of vitamins and minerals in fortified foods and supplements may be estimated in population subgroups. SML were estimated for adults and 7- to 10-year-old children for six nutrients (retinol, vitamins B6, D and E, folic acid, iron and calcium) using data on usual daily nutrient intakes from Irish national nutrition surveys. SML of nutrients in supplements were lower for children than for adults, except for calcium and iron. Daily energy intake from fortified foods in high consumers (95th percentile) varied by nutrient from 138 to 342 kcal in adults and 40-309 kcal in children. SML (/100 kcal) of nutrients in fortified food were lower for children than adults for vitamins B6 and D, higher for vitamin E, with little difference for other nutrients. Including 25 % 'overage' for nutrients in fortified foods and supplements had little effect on SML. Nutritionally significant amounts of these nutrients can be added safely to supplements and fortified foods for these population subgroups. The estimated SML of nutrients in fortified foods and supplements may be considered safe for these population subgroups over the long term given the food composition and dietary patterns prevailing in the respective dietary surveys. This risk assessment approach shows how nutrient intake data may be used to estimate, for population subgroups, the SML for vitamins and minerals in both fortified foods and supplements, separately, each taking into account the intake from other dietary sources.
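
    A hedged, simplified arithmetic sketch of this kind of allocation (all numbers invented; the published method should be consulted for the actual formulas): the "room" left under the tolerable upper intake level (UL) after high-percentile background intake is split between supplements and, per 100 kcal, fortified foods.

      UL = 25.0                # ug/day, hypothetical UL for a vitamin
      p95_diet = 7.0           # ug/day, hypothetical 95th-percentile non-fortified intake
      supplement_share = 0.5   # assumed split of the remaining intake "space"

      room = UL - p95_diet
      sml_supplement = supplement_share * room            # ug/day in a supplement

      e95_fortified = 342.0    # kcal/day from fortified foods at the 95th percentile
      sml_food = (1 - supplement_share) * room / e95_fortified * 100
      print(f"supplement SML ~ {sml_supplement:.1f} ug/day")
      print(f"fortified-food SML ~ {sml_food:.2f} ug/100 kcal")

    The 342 kcal figure echoes the high-consumer energy intake reported in the abstract; everything else here is illustrative.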

  7. Entanglement-assisted quantum MDS codes from negacyclic codes

    Science.gov (United States)

    Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang

    2018-03-01

    The entanglement-assisted formalism generalizes the standard stabilizer formalism: it can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q+1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.

  8. Canadian energy standards : residential energy code requirements

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, K. [SAR Engineering Ltd., Burnaby, BC (Canada)

    2006-09-15

    A survey of residential energy code requirements was discussed. New housing is approximately 13 per cent more efficient than housing built 15 years ago, and more stringent energy efficiency requirements in building codes have contributed to decreased energy use and greenhouse gas (GHG) emissions. However, a survey of residential energy codes across Canada has determined that explicit energy efficiency requirements are currently present only in British Columbia (BC), Manitoba, Ontario and Quebec. The survey evaluated more than 4300 single-detached homes built between 2000 and 2005 using data from the EnerGuide for Houses (EGH) database. House area, volume, airtightness and construction characteristics were reviewed to create archetypes for 8 geographic areas. The survey indicated that in Quebec and the Maritimes, 90 per cent of houses comply with the ventilation system requirements of the National Building Code, while compliance in the rest of Canada is much lower. Heat recovery ventilation use is predominant in the Atlantic provinces. Direct-vent or condensing furnaces constitute the majority of installed systems in provinces where natural gas is the primary space heating fuel. Details of insulation levels for walls, double-glazed windows, and building code insulation standards were also reviewed. It was concluded that if R-2000 levels of energy efficiency were applied, total average energy consumption would be reduced by 36 per cent in Canada. 2 tabs.

  9. Visualizing code and coverage changes for code review

    NARCIS (Netherlands)

    Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.

    2016-01-01

    One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.

  10. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.

  11. Effective coding with VHDL principles and best practice

    CERN Document Server

    Jasinski, Ricardo

    2016-01-01

    A guide to applying software design principles and coding practices to VHDL to improve the readability, maintainability, and quality of VHDL code. This book addresses an often-neglected aspect of the creation of VHDL designs. A VHDL description is also source code, and VHDL designers can use the best practices of software development to write high-quality code and to organize it in a design. This book presents this unique set of skills, teaching VHDL designers of all experience levels how to apply the best design principles and coding practices from the software world to the world of hardware. The concepts introduced here will help readers write code that is easier to understand and more likely to be correct, with improved readability, maintainability, and overall quality. After a brief review of VHDL, the book presents fundamental design principles for writing code, discussing such topics as design, quality, architecture, modularity, abstraction, and hierarchy. Building on these concepts, the book then introduces more advanced principles and practices for writing and organizing VHDL code.

  12. Fast decoders for qudit topological codes

    International Nuclear Information System (INIS)

    Anwar, Hussain; Brown, Benjamin J; Campbell, Earl T; Browne, Dan E

    2014-01-01

    Qudit toric codes are a natural higher-dimensional generalization of the well-studied qubit toric code. However, standard methods for error correction of the qubit toric code are not applicable to them. Novel decoders are needed. In this paper we introduce two renormalization group decoders for qudit codes and analyse their error correction thresholds and efficiency. The first decoder is a generalization of a ‘hard-decisions’ decoder due to Bravyi and Haah (arXiv:1112.3252). We modify this decoder to overcome a percolation effect which limits its threshold performance for many-level quantum systems. The second decoder is a generalization of a ‘soft-decisions’ decoder due to Poulin and Duclos-Cianci (2010 Phys. Rev. Lett. 104 050504), with a small cell size to optimize the efficiency of implementation in the high dimensional case. In each case, we estimate thresholds for the uncorrelated bit-flip error model and provide a comparative analysis of the performance of both these approaches to error correction of qudit toric codes. (paper)

  13. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    Full Text Available A new code structure for spectral amplitude coding optical code division multiple access systems based on double-weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is a variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. Much better performance can be obtained by using the EDW code compared to existing codes such as Hadamard and Modified Frequency-Hopping (MFH) codes. Theoretical analysis and simulation show that EDW achieves much better performance than the Hadamard and MFH codes.

  14. Codon usage and expression level of human mitochondrial 13 protein coding genes across six continents.

    Science.gov (United States)

    Chakraborty, Supriyo; Uddin, Arif; Mazumder, Tarikul Huda; Choudhury, Monisha Nath; Malakar, Arup Kumar; Paul, Prosenjit; Halder, Binata; Deka, Himangshu; Mazumder, Gulshana Akthar; Barbhuiya, Riazul Ahmed; Barbhuiya, Masuk Ahmed; Devi, Warepam Jesmi

    2017-12-02

    The study of codon usage coupled with phylogenetic analysis is an important tool to understand the genetic and evolutionary relationship of a gene. The 13 protein-coding genes of human mitochondria are involved in the electron transport chain for the generation of the energy currency (ATP). However, no work has yet been reported on the codon usage of the mitochondrial protein-coding genes across six continents. To understand the patterns of codon usage in mitochondrial genes across the six continents, we used bioinformatic analyses to analyze the protein-coding genes. The codon usage bias was low, as revealed by the high ENC values. Correlation between codon usage and GC3 suggested that all codons ending in G/C were positively correlated with GC3, and A/T-ending codons negatively so, with the exception of the ND4L and ND5 genes. A neutrality plot revealed that for the genes ATP6, COI, COIII, CYB, ND4 and ND4L natural selection might have played a major role, while mutation pressure might have played a dominant role in the codon usage bias of the ATP8, COII, ND1, ND2, ND3, ND5 and ND6 genes. Phylogenetic analysis indicated that the evolutionary relationships in each of the 13 protein-coding genes of human mitochondria were different across the six continents and further suggested that geographical distance was an important factor in the origin and evolution of the 13 protein-coding genes of human mitochondria. Copyright © 2017 Elsevier B.V. and Mitochondria Research Society. All rights reserved.
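
    A minimal sketch of one of the quantities used in such analyses, GC content at the third codon position (GC3), computed on a toy coding sequence:

      def gc3(cds):
          """GC fraction at the third position of each codon in a CDS."""
          third = cds[2::3].upper()
          return sum(b in "GC" for b in third) / len(third)

      toy_cds = "ATGAACGAAAATCTGTTCGCTTCATTCATTGCCCCCACAATCCTAGGC"  # invented fragment
      print(f"GC3 = {gc3(toy_cds):.2f}")

    GC2 is obtained the same way with cds[1::3], and correlating GC3 against overall GC or GC12 across genes is the basis of the neutrality-plot argument mentioned above.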

  15. Breathing (and Coding?) a Bit Easier: Changes to International Classification of Disease Coding for Pulmonary Hypertension.

    Science.gov (United States)

    Mathai, Stephen C; Mathew, Sherin

    2018-04-20

    The International Classification of Disease (ICD) coding system is broadly utilized by healthcare providers, hospitals, healthcare payers, and governments to track health trends and statistics at the global, national, and local levels and to provide a reimbursement framework for medical care based upon diagnosis and severity of illness. The current iteration of the ICD system, ICD-10, was implemented in 2015. While many changes from the prior ICD-9 system were included in the ICD-10 system, the newer revision failed to adequately reflect advances in the clinical classification of certain diseases such as pulmonary hypertension (PH). Recently, a proposal to modify the ICD-10 codes for PH was considered and ultimately adopted for inclusion as an update to the ICD-10 coding system. While these revisions better reflect the current clinical classification of PH, further changes should be considered in the future to improve the accuracy and ease of coding for all forms of PH. Copyright © 2018. Published by Elsevier Inc.

  16. Esophageal function testing: Billing and coding update.

    Science.gov (United States)

    Khan, A; Massey, B; Rao, S; Pandolfino, J

    2018-01-01

    Esophageal function testing is being increasingly utilized in diagnosis and management of esophageal disorders. There have been several recent technological advances in the field to allow practitioners the ability to more accurately assess and treat such conditions, but there has been a relative lack of education in the literature regarding the associated Common Procedural Terminology (CPT) codes and methods of reimbursement. This review, commissioned and supported by the American Neurogastroenterology and Motility Society Council, aims to summarize each of the CPT codes for esophageal function testing and show the trends of associated reimbursement, as well as recommend coding methods in a practical context. We also aim to encourage many of these codes to be reviewed on a gastrointestinal (GI) societal level, by providing evidence of both discrepancies in coding definitions and inadequate reimbursement in this new era of esophageal function testing. © 2017 John Wiley & Sons Ltd.

  17. Graphical user interface development for the MARS code

    International Nuclear Information System (INIS)

    Jeong, J.-J.; Hwang, M.; Lee, Y.J.; Kim, K.D.; Chung, B.D.

    2003-01-01

    KAERI has developed the best-estimate thermal-hydraulic system code MARS using the RELAP5/MOD3 and COBRA-TF codes. To exploit the excellent features of the two codes, we consolidated the two codes. Then, to improve the readability, maintainability, and portability of the consolidated code, all the subroutines were completely restructured by employing a modular data structure. At present, a major part of the MARS code development program is underway to improve the existing capabilities. The code couplings with three-dimensional neutron kinetics, containment analysis, and transient critical heat flux calculations have also been carried out. At the same time, graphical user interface (GUI) tools have been developed for user friendliness. This paper presents the main features of the MARS GUI. The primary objective of the GUI development was to provide a valuable aid for all levels of MARS users in their output interpretation and interactive controls. Especially, an interactive control function was designed to allow operator actions during simulation so that users can utilize the MARS code like conventional nuclear plant analyzers (NPAs). (author)

  18. Parallel and vector implementation of APROS simulator code

    International Nuclear Information System (INIS)

    Niemi, J.; Tommiska, J.

    1990-01-01

    In this paper the vector and parallel processing implementation of a general-purpose simulator code is discussed. In this code the utilization of vector processing is straightforward. In addition to loop-level parallel processing, functional decomposition and domain decomposition have been considered. Results presented for a PWR plant simulation illustrate the potential speed-up factors of the alternatives. It turns out that loop-level parallelism and domain decomposition are the most promising alternatives for employing parallel processing. (author)

  19. Should legislation regarding maximum Pb and Cd levels in human food also cover large game meat?

    Science.gov (United States)

    Taggart, Mark A; Reglero, Manuel M; Camarero, Pablo R; Mateo, Rafael

    2011-01-01

    Game meat may be contaminated with metals and metalloids if animals reside in anthropogenically polluted areas, or if ammunition used to kill the game contaminates the meat. Muscle tissue from red deer and wild boar shot in Ciudad Real province (Spain) in 2005-06 was analysed for As, Pb, Cu, Zn, Se and Cd. Samples were collected from hunting estates within and outside an area that has been historically used for mining, smelting and refining various metals and metalloids. Meat destined for human consumption, contained more Pb, As and Se (red deer) and Pb (boar) when harvested from animals that had resided in mined areas. Age related accumulation of Cd, Zn and As (in deer) and Cd, Cu and Se (in boar) was also observed. Two boar meat samples contained high Pb, at 352 and 2408 μg/g d.w., and these were likely to have been contaminated by Pb ammunition. Likewise, 19-84% of all samples (depending on species and sampling area) had Pb levels > 0.1 μg/g w.w., the EU maximum residue level (MRL) for farm reared meat. Between 9 and 43% of samples exceeded comparable Cd limits. Such data highlight a discrepancy between what is considered safe for human consumption in popular farmed meat (chicken, beef, lamb), and what in game may often exist. A risk assessment is presented which describes the number of meals required to exceed current tolerable weekly intakes (PTWIs) for Pb and Cd, and the potential contribution of large game consumption to such intake limit criteria. Copyright © 2010 Elsevier Ltd. All rights reserved.
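
    A back-of-envelope version of the meals-to-PTWI calculation described (assumed body weight and portion size, and the historical JECFA PTWI for Pb, which has since been withdrawn):

      ptwi_pb = 25.0        # ug/kg bw/week (historical JECFA value, now withdrawn)
      body_weight = 70.0    # kg (assumed adult)
      portion = 200.0       # g wet weight per meal (assumed)

      for conc in (0.1, 0.5, 2.0):          # ug/g w.w. Pb in meat
          meals = ptwi_pb * body_weight / (conc * portion)
          print(f"{conc:4.1f} ug/g -> {meals:6.1f} meals/week to reach the PTWI")

    At the EU farmed-meat MRL of 0.1 ug/g the tolerable intake allows many meals per week, but at the concentrations seen in ammunition-contaminated samples a single meal can exceed it.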

  20. Radiation protection code of practice in academic and research institutes

    International Nuclear Information System (INIS)

    Abdalla, A. A. M.

    2010-05-01

    The main aim of this study was to establish a code of practice on radiation protection for the safe control of radiation sources used in academic and research institutes; a further aim was to assess the current situation of radiation protection in some of these institutes. To achieve these aims, a draft code of practice was developed, based on relevant international and local recommendations. The developed code covers the following main issues: regulatory responsibilities, the radiation protection programme, and the design of radiation installations. The second aim was accomplished by conducting inspection visits to five academic institutes (A, B, C, D and E) and four research institutes (F, G, H and I). Eight of these institutes are located in Khartoum State and the ninth is in Madani city (Aljazeera State). The inspection activities were carried out using a standard inspection checklist developed by the regulatory authority of the Sudan. The inspection missions to the above-mentioned institutes also involved evaluation of radiation levels around the premises and storage areas of radiation sources. The dose rates measured around radiation source locations were found to be quite low, mainly because the activities of most radionuclides used in these institutes are quite low (in the range of microcuries). Also, most of the X-ray machines found in use for academic and research purposes operate at low tube voltages of at most 60 kVp. None of the radiation workers in the inspected institutes has a personal radiation monitoring device; therefore, staff dose levels could not be assessed. However, it was noted that in most academic and research work, radiation workers are exposed only to very low levels of radiation and for very short times, not exceeding 1 minute, so the expected occupational exposure of the staff is very low.

  1. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels

    2009-10-01

    Full Text Available In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by the Codon Structure Factor (CSF) and by a method that we called the Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets, and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
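
    A toy score in the spirit of UFM (not the published scoring function) that combines in-frame stop avoidance with first-position purine bias:

      def crude_coding_score(orf):
          """Higher = more coding-like; combines stop avoidance and purine bias."""
          codons = [orf[i:i + 3].upper() for i in range(0, len(orf) - 2, 3)]
          stops = sum(c in ("TAA", "TAG", "TGA") for c in codons[:-1])  # in-frame stops
          purine1 = sum(c[0] in "AG" for c in codons) / len(codons)     # purine bias, pos 1
          return purine1 - stops

      coding_like = "ATGGCTGAAGGTCTGAAAGCTGGTATCGCTGAAGCTTAA"   # invented ORF
      random_like = "TTAGCATAGGCTTGATCCATAAGCTTAGGCATGATCTGA"   # invented non-coding
      print(crude_coding_score(coding_like), crude_coding_score(random_like))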

  2. Intracoin - International Nuclide Transport Code Intercomparison Study

    International Nuclear Information System (INIS)

    1984-09-01

    The purpose of the project is to obtain improved knowledge of the influence of various strategies for radionuclide transport modelling for the safety assessment of final repositories for nuclear waste. This is a report of the first phase of the project which was devoted to a comparison of the numerical accuracy of the computer codes used in the study. The codes can be divided into five groups, namely advection-dispersion models, models including matrix diffusion and chemical effects and finally combined models. The results are presented as comparisons of calculations since the objective of level 1 was code verification. (G.B.)

  3. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    Science.gov (United States)

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  4. An object-oriented scripting interface to a legacy electronic structure code

    DEFF Research Database (Denmark)

    Bahn, Sune Rastad; Jacobsen, Karsten Wedel

    2002-01-01

    The authors have created an object-oriented scripting interface to a mature density functional theory code. The interface gives users a high-level, flexible handle on the code without rewriting the underlying number-crunching code. The authors also discuss design issues and the advantages of this approach.

  5. A conductance maximum observed in an inward-rectifier potassium channel

    OpenAIRE

    1994-01-01

    One prediction of a multi-ion pore is that its conductance should reach a maximum and then begin to decrease as the concentration of permeant ion is raised equally on both sides of the membrane. A conductance maximum has been observed at the single-channel level in gramicidin and in a Ca(2+)-activated K+ channel at extremely high ion concentration (> 1,000 mM) (Hladky, S. B., and D. A. Haydon. 1972. Biochimica et Biophysica Acta. 274:294-312; Eisenman, G., J. Sandblom, and E. Neher. 1977. In ...

  6. Halftone Coding with JBIG2

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    2000-01-01

    The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated to lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray-scale, encoding of the gray-scale image, and construction of a halftone pattern dictionary. The decoder first decodes the gray-scale image; then, for each gray-scale pixel, it looks up the corresponding halftone pattern in the dictionary and places it in the reconstruction bitmap at the position corresponding to the gray-scale pixel. The coding method is inherently lossy and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method to halftones created by periodic ordered dithering, by clustered-dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion.

  7. GAMUT: A computer code for γ-ray energy and intensity analysis

    International Nuclear Information System (INIS)

    Firestone, R.B.

    1991-05-01

    GAMUT is a computer code to analyze γ-ray energies and intensities. It does a linear least-squares fit of measured γ-ray energies from one or more experiments to the level scheme. GAMUT also performs a non-linear least-squares analysis of branching intensities. For both energy and intensity data, a statistical chi-square analysis is performed with an iterative uncertainty adjustment. The uncertainties of outlying measured values and of sets of measurements with χ²/f > 1 are increased, and the calculation is repeated until the uncertainties are consistent with the fitted values. GAMUT accepts input from standard or special-format ENSDF data sets. The special-format ENSDF data sets were designed to permit analysis of more than one set of measurements associated with a single ENSDF data set. GAMUT prepares a standard ENSDF-format output data set containing the adjusted values. If more than one input ENSDF data set is provided, GAMUT creates an ADOPTED LEVELS, GAMMAS data set containing the adjusted level and γ-ray energies and the branching intensities from each level normalized to 100 for the strongest γ-ray. GAMUT also provides a summary of the results and an extensive log of the iterative analysis. GAMUT is interactive, prompting the user for input and output file names and for default calculation options. This version of GAMUT has adjustable dimensions so that any maximum number of data sets, levels, and γ-rays can be established at the time of implementation. 6 refs
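
    Because each γ-ray energy measures a difference of two level energies, the energy adjustment reduces to a weighted linear least-squares problem; a minimal sketch with invented energies (not the GAMUT implementation):

      import numpy as np

      # (upper level, lower level, measured gamma energy keV, uncertainty keV)
      gammas = [(1, 0, 121.78, 0.03), (2, 1, 244.70, 0.05),
                (2, 0, 366.45, 0.06), (3, 2, 411.12, 0.04),
                (3, 1, 655.90, 0.10)]
      n_lvl = 4

      A = np.zeros((len(gammas), n_lvl - 1))   # ground state excluded (E0 = 0)
      y = np.zeros(len(gammas))
      for k, (up, dn, e, de) in enumerate(gammas):
          if up > 0:
              A[k, up - 1] += 1.0 / de         # rows weighted by 1/uncertainty
          if dn > 0:
              A[k, dn - 1] -= 1.0 / de
          y[k] = e / de

      E, *_ = np.linalg.lstsq(A, y, rcond=None)
      print("adjusted level energies (keV):", np.round(E, 3))

    The crossover transition (2, 0) pulls the fitted level energies toward values consistent with both the cascade and the direct measurement, which is the essence of the adjustment.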

  8. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra

    2013-01-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  10. The Relationship Between Maximum Isometric Strength and Ball Velocity in the Tennis Serve

    Directory of Open Access Journals (Sweden)

    Baiget Ernest

    2016-12-01

    Full Text Available The aims of this study were to analyze the relationship between maximum isometric strength levels in different upper and lower limb joints and serve velocity in competitive tennis players, as well as to develop a prediction model based on this information. Twelve male competitive tennis players (mean ± SD; age: 17.2 ± 1.0 years; body height: 180.1 ± 6.2 cm; body mass: 71.9 ± 5.6 kg) were tested for maximum isometric strength levels (i.e., wrist, elbow and shoulder flexion and extension; leg and back extension; shoulder external and internal rotation). Serve velocity was measured using a radar gun. Results showed a strong positive relationship between serve velocity and shoulder internal rotation (r = 0.67; p < 0.05). Low to moderate correlations were also found between serve velocity and wrist, elbow and shoulder flexion – extension, leg and back extension and shoulder external rotation (r = 0.36 – 0.53; p = 0.377 – 0.054). Bivariate and multivariate models for predicting serve velocity were developed, with shoulder flexion and internal rotation explaining 55% of the variance in serve velocity (r = 0.74; p < 0.001). The maximum isometric strength level in shoulder internal rotation was strongly related to serve velocity, and a large part of the variability in serve velocity was explained by the maximum isometric strength levels in shoulder internal rotation and shoulder flexion.

  11. TORBEAM 2.0, a paraxial beam tracing code for electron-cyclotron beams in fusion plasmas for extended physics applications

    Science.gov (United States)

    Poli, E.; Bock, A.; Lochbrunner, M.; Maj, O.; Reich, M.; Snicker, A.; Stegmeir, A.; Volpe, F.; Bertelli, N.; Bilato, R.; Conway, G. D.; Farina, D.; Felici, F.; Figini, L.; Fischer, R.; Galperti, C.; Happel, T.; Lin-Liu, Y. R.; Marushchenko, N. B.; Mszanowski, U.; Poli, F. M.; Stober, J.; Westerhof, E.; Zille, R.; Peeters, A. G.; Pereverzev, G. V.

    2018-04-01

    The paraxial WKB code TORBEAM (Poli, 2001) is widely used for the description of electron-cyclotron waves in fusion plasmas, retaining diffraction effects through the solution of a set of ordinary differential equations. With respect to its original form, the code has undergone significant transformations and extensions, in terms of both the physical model and the spectrum of applications. The code has been rewritten in Fortran 90 and transformed into a library, which can be called from within different (not necessarily Fortran-based) workflows. The models for both absorption and current drive have been extended, including e.g. fully-relativistic calculation of the absorption coefficient, momentum conservation in electron-electron collisions and the contribution of more than one harmonic to current drive. The code can also be run for reflectometry applications, with relativistic corrections for the electron mass. Formulas that provide the coupling between the reflected beam and the receiver have been developed. Accelerated versions of the code are available, with the reduced physics goal of inferring the location of maximum absorption (including or not the total driven current) for a given setting of the launcher mirrors. Optionally, plasma volumes within given flux surfaces and corresponding values of minimum and maximum magnetic field can be provided externally to speed up the calculation of full driven-current profiles. These can be employed in real-time control algorithms or for fast data analysis.

  12. [Comparative review of the Senegalese and French deontology codes].

    Science.gov (United States)

    Soumah, M; Mbaye, I; Bah, H; Gaye Fall, M C; Sow, M L

    2005-01-01

    Medical deontology groups together the duties of physicians and regulates the practice of medicine. The Senegalese code of medical deontology, inspired by the French medical deontology code, has not been revised since its institution, whereas the French deontology code has undergone three revisions. Comparing the two codes of deontology title by title and article by article, this work goes beyond a simple parallel between the two codes and highlights the progress in bioethics that underlies the revisions of the French medical deontology code. This article supports advocacy by health professionals in favor of bringing the Senegalese medical deontology code up to date, because legal litigation, already substantial in developed countries, is intensifying in developing countries; it is inherent to technological progress and to patients' growing awareness of their rights.

  13. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer

  14. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system which appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
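
    For context, the maximum-likelihood estimate under Poisson counting statistics is commonly computed with the multiplicative MLEM (expectation-maximization) iteration, which keeps the solution positive over the whole energy range, one of the properties the abstract highlights. The sketch below is illustrative only, with a made-up response matrix, and is not necessarily the authors' algorithm:

```python
import numpy as np

def mlem_unfold(R, d, n_iter=200):
    """Maximum-likelihood (Poisson) spectrum unfolding via MLEM.

    R -- response matrix, shape (n_detectors, n_energy_bins)
    d -- measured counts per detector, shape (n_detectors,)
    The multiplicative update keeps the spectrum positive throughout.
    """
    f = np.full(R.shape[1], d.sum() / R.shape[1])  # flat initial guess
    sens = R.sum(axis=0)                           # sensitivity per energy bin
    for _ in range(n_iter):
        pred = R @ f                               # expected counts
        f *= (R.T @ (d / np.maximum(pred, 1e-12))) / np.maximum(sens, 1e-12)
    return f

# Toy example: 3 detectors, 4 energy bins (an underdetermined problem).
R = np.array([[0.8, 0.3, 0.1, 0.0],
              [0.2, 0.6, 0.4, 0.1],
              [0.0, 0.1, 0.5, 0.9]])
true_f = np.array([10.0, 5.0, 8.0, 2.0])
d = R @ true_f                                    # noiseless measurement
print(mlem_unfold(R, d).round(2))
```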

  15. High efficiency video coding coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

    The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at similar compressed bit rates as for HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in a clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  16. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current at maximum power. These quantities are determined by finding the maximum value of the power equation using differentiation. After the maximum values are found for each time of day, each individual quantity (voltage at maximum power, current at maximum power, and maximum power) is plotted as a function of the time of day.
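
    As a worked version of the differentiation step, assume a single-diode panel model I(V) = I_ph - I_0(exp(V/(nV_T)) - 1); maximum power requires dP/dV = I(V) + V I'(V) = 0, which can be solved by bisection. All parameter values below are illustrative assumptions, not data from the article:

```python
import math

# Illustrative single-diode model parameters (assumed, not from the article):
# photocurrent, saturation current, ideality factor, thermal voltage (V).
I_ph, I_0, n, V_T = 5.0, 1e-9, 1.3, 0.02585

def current(v):
    return I_ph - I_0 * (math.exp(v / (n * V_T)) - 1.0)

def dP_dV(v):
    # dP/dV = I(V) + V * dI/dV, from P = V * I(V)
    dI = -(I_0 / (n * V_T)) * math.exp(v / (n * V_T))
    return current(v) + v * dI

# Bisection on dP/dV = 0 between 0 V and the open-circuit voltage.
lo, hi = 0.0, n * V_T * math.log(I_ph / I_0 + 1.0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if dP_dV(mid) > 0 else (lo, mid)

v_mp = 0.5 * (lo + hi)
print(f"V_mp = {v_mp:.3f} V, I_mp = {current(v_mp):.3f} A, "
      f"P_max = {v_mp * current(v_mp):.3f} W")
```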

  17. High-speed architecture for the decoding of trellis-coded modulation

    Science.gov (United States)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher-level modulation (a non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
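
    To make the add-compare-select recursion concrete, here is a minimal hard-decision Viterbi decoder for a 4-state, rate-1/2 binary convolutional code (generators 7 and 5 octal). It is a didactic sketch of the algorithm that high-speed architectures parallelize, not the TCM decoder of the report:

```python
import itertools

G = [(1, 1, 1), (1, 0, 1)]  # rate-1/2, K=3 generators (7, 5 octal): taps on (b, s1, s0)

def encode(bits):
    s1 = s0 = 0                                   # two memory bits
    out = []
    for b in bits:
        out += [b & t0 ^ s1 & t1 ^ s0 & t2 for (t0, t1, t2) in G]
        s1, s0 = b, s1
    return out

def viterbi(rx):
    INF = float("inf")
    states = list(itertools.product((0, 1), repeat=2))        # (s1, s0)
    metric = {s: 0 if s == (0, 0) else INF for s in states}   # start in all-zero state
    path = {s: [] for s in states}
    for i in range(0, len(rx), 2):
        new_metric = {s: INF for s in states}
        new_path = {s: [] for s in states}
        for (s1, s0) in states:
            if metric[(s1, s0)] == INF:
                continue
            for b in (0, 1):                                  # branch for each input bit
                out = [b & t0 ^ s1 & t1 ^ s0 & t2 for (t0, t1, t2) in G]
                dist = sum(o != r for o, r in zip(out, rx[i:i+2]))
                nxt = (b, s1)
                m = metric[(s1, s0)] + dist                   # add
                if m < new_metric[nxt]:                       # compare-select
                    new_metric[nxt] = m
                    new_path[nxt] = path[(s1, s0)] + [b]
        metric, path = new_metric, new_path
    best = min(metric, key=metric.get)            # survivor with the best metric
    return path[best]

msg = [1, 0, 1, 1, 0, 0]
coded = encode(msg)
coded[3] ^= 1                       # inject one channel bit error
print(viterbi(coded) == msg)        # True: the single error is corrected
```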

  18. Converter of a continuous code into the Gray code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; Trubnikov, V.R.

    1979-01-01

    Described is a converter of a continuous code into the Gray code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of spectrometer differential nonlinearity to ±0.7% over 98% of the measured band. To convert a continuous code corresponding to the input signal amplitude into the Gray code, use is made of the regularity with which ones and zeros recur in each bit of the Gray code as the number of pulses of the continuous code changes continuously. The converter is built from elements of the 155 series; the pulse repetition rate of the continuous code at the converter input is 25 MHz
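
    The standard binary-to-Gray mapping and its inverse are short enough to state exactly; consecutive values differ in a single bit, which is why Gray coding suppresses the multi-bit transitions that contribute to differential nonlinearity in plain binary counters. A quick sketch (generic, not the 155-series hardware described):

```python
def binary_to_gray(n: int) -> int:
    # Each Gray bit is the XOR of adjacent binary bits.
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    # Invert by folding: n = g ^ (g >> 1) ^ (g >> 2) ^ ...
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent 12-bit values map to Gray codes differing in exactly one bit.
for n in range(2**12 - 1):
    assert bin(binary_to_gray(n) ^ binary_to_gray(n + 1)).count("1") == 1
    assert gray_to_binary(binary_to_gray(n)) == n
print("ok")
```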

  19. A local maximum in gibberellin levels regulates maize leaf growth by spatial control of cell division.

    Science.gov (United States)

    Nelissen, Hilde; Rymen, Bart; Jikumaru, Yusuke; Demuynck, Kirin; Van Lijsebettens, Mieke; Kamiya, Yuji; Inzé, Dirk; Beemster, Gerrit T S

    2012-07-10

    Plant growth rate is largely determined by the transition between the successive phases of cell division and expansion. A key role for hormone signaling in determining this transition was inferred from genetic approaches and transcriptome analysis in the Arabidopsis root tip. We used the developmental gradient at the maize leaf base as a model to study this transition, because it allows a direct comparison between endogenous hormone concentrations and the transitions between dividing, expanding, and mature tissue. Concentrations of auxin and cytokinins are highest in dividing tissues, whereas bioactive gibberellins (GAs) show a peak at the transition zone between the division and expansion zone. Combined metabolic and transcriptomic profiling revealed that this GA maximum is established by GA biosynthesis in the division zone (DZ) and active GA catabolism at the onset of the expansion zone. Mutants defective in GA synthesis and signaling, and transgenic plants overproducing GAs, demonstrate that altering GA levels specifically affects the size of the DZ, resulting in proportional changes in organ growth rates. This work thereby provides a novel molecular mechanism for the regulation of the transition from cell division to expansion that controls organ growth and size. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Advanced thermionic reactor systems design code

    International Nuclear Information System (INIS)

    Lewis, B.R.; Pawlowski, R.A.; Greek, K.J.; Klein, A.C.

    1991-01-01

    An overall systems design code is under development to model an advanced in-core thermionic nuclear reactor system for space applications at power levels of 10 to 50 kWe. The design code is written in an object-oriented programming environment that allows the use of a series of design modules, each of which is responsible for the determination of specific system parameters. The code modules include a neutronics and core criticality module, a core thermal hydraulics module, a thermionic fuel element performance module, a radiation shielding module, a module for waste heat transfer and rejection, and modules for power conditioning and control. The neutronics and core criticality module determines critical core size, core lifetime, and shutdown margins using the criticality calculation capability of the Monte Carlo Neutron and Photon Transport Code System (MCNP). The remaining modules utilize results of the MCNP analysis along with FORTRAN programming to predict the overall system performance

  1. Code of Conduct for wind-power projects - Feasibility study; Code of Conduct fuer windkraftprojekte. Machbarkeitsstudie - Schlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Strub, P. [Pierre Strub, freischaffender Berater, Binningen (Switzerland); Ziegler, Ch. [Inter Act, Basel (Switzerland)

    2009-02-15

    This final report presents the results of a feasibility study concerning the development of a Code of Conduct for wind-power projects. The aim is to strengthen the acceptance of wind power by the general public. The necessity of new, voluntary market instruments is discussed. The authors rate development in this area as highly urgent and consider the feasibility of defining a code of conduct to be proven. According to the authors, the code of conduct can be of use at various levels, but primarily in project development. Further free-enterprise instruments are also suggested that should help support socially compatible and successful market development. It is noted that the predominant portion of those questioned are prepared to cooperate in further work on the subject

  2. Analysis of ATLAS FLB-EC6 Experiment using SPACE Code

    International Nuclear Information System (INIS)

    Lee, Donghyuk; Kim, Yohan; Kim, Seyun

    2013-01-01

    The new code is named SPACE (Safety and Performance Analysis Code for Nuclear Power Plant). As a part of the code validation effort, a simulation of the ATLAS FLB (Feedwater Line Break) experiment has been performed using the SPACE code. The FLB-EC6 experiment is an economizer break of a main feedwater line. The ATLAS FLB-EC6 experiment was simulated using the SPACE code, and the calculated results were compared with those from the experiment. The comparisons of break flow rate and steam generator water level show good agreement with the experiment. The SPACE code is capable of predicting the physical phenomena occurring during the ATLAS FLB-EC6 experiment

  3. PERFORMANCE ANALYSIS OF OPTICAL CDMA SYSTEM USING VC CODE FAMILY UNDER VARIOUS OPTICAL PARAMETERS

    Directory of Open Access Journals (Sweden)

    HASSAN YOUSIF AHMED

    2012-06-01

    Full Text Available The intent of this paper is to study the performance of spectral-amplitude coding optical code-division multiple-access (OCDMA) systems using the Vector Combinatorial (VC) code under various optical parameters. This code can be constructed in an algebraic way based on Euclidean vectors for any positive integer number. One of the important properties of this code is that the maximum cross-correlation is always one, which means that multi-user interference (MUI) and phase-induced intensity noise are reduced. Transmitter and receiver structures based on unchirped fiber Bragg gratings (FBGs) using the VC code, and taking into account the effects of intensity, shot and thermal noise sources, are demonstrated. The impact of fiber distance on bit error rate (BER) is reported using a commercial optical systems simulator, virtual photonic instrument (VPI™). The VC code is compared mathematically with reported codes which use similar techniques. We analyzed and characterized the fiber link, received power, BER and channel spacing. The performance and optimization of the VC code in a SAC-OCDMA system is reported. By comparing the theoretical and simulation results taken from VPI™, we have demonstrated that, for a high number of users, the effective power source is adequate even at higher data rates when the VC code is used. It is also found that as the channel spacing goes from very narrow to wider, the BER decreases, with the best performance occurring at a spacing bandwidth between 0.8 and 1 nm. We have shown that the SAC system utilizing the VC code significantly improves performance compared with the reported codes.
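
    The quoted property, a maximum cross-correlation of one between any two code words, is easy to verify numerically. Below is a toy checker on a hand-made code set; the actual VC construction from Euclidean vectors is not reproduced here:

```python
import numpy as np

def max_cross_correlation(codes):
    """Largest dot product between distinct 0/1 code words."""
    C = np.array(codes)
    xc = C @ C.T
    np.fill_diagonal(xc, 0)        # ignore auto-correlation
    return int(xc.max())

# Toy SAC-OCDMA-style code set: any two words overlap in at most one chip.
codes = [
    [1, 1, 0, 0, 1, 0, 0],
    [0, 1, 1, 0, 0, 1, 0],
    [0, 0, 1, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 1, 0],
]
print(max_cross_correlation(codes))   # 1 -> MUI stays bounded
```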

  4. Coding Bootcamps : Building Future-Proof Skills through Rapid Skills Training

    OpenAIRE

    World Bank

    2017-01-01

    This report studies coding bootcamps, a new kind of rapid skills training program for the digital age. Coding bootcamps are typically short-term (three to six months), intensive and applied training courses provided by a third party that crowdsources the demand for low-skills tech talent. Coding bootcamps aim at low-entry-level tech employability (for example, junior developer), providing a ...

  5. More Than Bar Codes: Integrating Global Standards-Based Bar Code Technology Into National Health Information Systems in Ethiopia and Pakistan to Increase End-to-End Supply Chain Visibility.

    Science.gov (United States)

    Hara, Liuichi; Guirguis, Ramy; Hummel, Keith; Villanueva, Monica

    2017-12-28

    The United Nations Population Fund (UNFPA) and the United States Agency for International Development (USAID) DELIVER PROJECT work together to strengthen public health commodity supply chains by standardizing bar coding under a single set of global standards. From 2015, UNFPA and USAID collaborated to pilot test how tracking and tracing of bar coded health products could be operationalized in the public health supply chains of Ethiopia and Pakistan and inform the ecosystem needed to begin full implementation. Pakistan had been using proprietary bar codes for inventory management of contraceptive supplies but transitioned to global standards-based bar codes during the pilot. The transition allowed Pakistan to leverage the original bar codes that were preprinted by global manufacturers as opposed to printing new bar codes at the central warehouse. However, barriers at lower service delivery levels prevented full realization of end-to-end data visibility. Key barriers at the district level were the lack of a digital inventory management system and the absence of bar codes at the primary packaging level, such as single blister packs. The team in Ethiopia developed an open-sourced smartphone application that allowed the team to scan bar codes using the mobile phone's camera and to push the captured data to the country's data mart. Real-time tracking and tracing occurred from the central warehouse to the Addis Ababa distribution hub and to 2 health centers. These pilots demonstrated that standardized product identification and bar codes can significantly improve accuracy over manual stock counts while significantly streamlining the stock-taking process, resulting in efficiencies. The pilots also showed that bar coding technology by itself is not sufficient to ensure data visibility. Rather, by using global standards for identification and data capture of pharmaceuticals and medical devices, and integrating the data captured into national and global tracking systems
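
    To illustrate what standards-based identification gives downstream systems, the sketch below parses the human-readable GS1 element string printed under such a bar code into its application identifiers. It is a simplification: real GS1-128 data uses FNC1 separators for variable-length fields, and only a handful of AIs are mapped here:

```python
import re

# A few GS1 application identifiers (AIs); the full table is much larger.
AI_NAMES = {"01": "GTIN", "10": "batch/lot", "17": "expiry (YYMMDD)", "21": "serial"}

def parse_gs1_element_string(s):
    """Parse '(AI)value(AI)value...' as printed under a GS1-128 bar code."""
    return {AI_NAMES.get(ai, ai): value
            for ai, value in re.findall(r"\((\d{2,4})\)([^(]+)", s)}

label = "(01)09501101530003(17)260331(10)AB2345"   # hypothetical label text
print(parse_gs1_element_string(label))
# {'GTIN': '09501101530003', 'expiry (YYMMDD)': '260331', 'batch/lot': 'AB2345'}
```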

  6. Entanglement-assisted quantum MDS codes constructed from negacyclic codes

    Science.gov (United States)

    Chen, Jianzhang; Huang, Yuanyuan; Feng, Chunhui; Chen, Riqing

    2017-12-01

    Recently, entanglement-assisted quantum codes have been constructed from cyclic codes by some scholars. However, determining the number of shared entangled pairs required to construct entanglement-assisted quantum codes is not an easy task. In this paper, we propose a decomposition of the defining set of negacyclic codes. Based on this method, the four families of entanglement-assisted quantum codes constructed in this paper satisfy the entanglement-assisted quantum Singleton bound, where the minimum distance satisfies q+1 ≤ d ≤ (n+2)/2. Furthermore, we construct two families of entanglement-assisted quantum codes with maximal entanglement.

  7. Holonomic surface codes for fault-tolerant quantum computation

    Science.gov (United States)

    Zhang, Jiang; Devitt, Simon J.; You, J. Q.; Nori, Franco

    2018-02-01

    Surface codes can protect quantum information stored in qubits from local errors as long as the per-operation error rate is below a certain threshold. Here we propose holonomic surface codes by harnessing the quantum holonomy of the system. In our scheme, the holonomic gates are built via auxiliary qubits rather than the auxiliary levels in multilevel systems used in conventional holonomic quantum computation. The key advantage of our approach is that the auxiliary qubits are in their ground state before and after each gate operation, so they are not involved in the operation cycles of surface codes. This provides an advantageous way to implement surface codes for fault-tolerant quantum computation.

  8. Benchmarking NNWSI flow and transport codes: COVE 1 results

    International Nuclear Information System (INIS)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs

  9. Development of MATRA-LMR code α-version for LMR subchannel analysis

    International Nuclear Information System (INIS)

    Kim, Won Seok; Kim, Young Gyun; Kim, Young Gin

    1998-05-01

    Since the sodium boiling point is very high, the maximum cladding and pin temperatures are used as design limit conditions in a sodium-cooled liquid metal reactor. It is necessary to predict the core temperature distribution accurately to increase the sodium coolant efficiency. Based on the MATRA code, which was developed for PWR analysis, MATRA-LMR is being developed for LMRs. The major modifications are as follows: A) The sodium properties table is implemented as a subprogram in the code. B) Heat transfer coefficients are changed for the LMR. C) The pressure drop correlations are changed for more accurate calculations, to the Novendstern, Chiu-Rohsenow-Todreas, and Cheng-Todreas correlations. To assess the development status of the MATRA-LMR code, calculations have been performed for the ORNL 19-pin and EBR-II 61-pin tests. MATRA-LMR results are also compared with those obtained by the SLTHEN code, which uses a more simplified thermal-hydraulic model. The MATRA-LMR predictions are found to agree well with the measured values. The differences in results between MATRA-LMR and SLTHEN occur because the SLTHEN code uses a very simplified thermal-hydraulic model to reduce computing time. MATRA-LMR can currently be used only for single-assembly analysis, but it is planned to extend it to multi-assembly calculations. (author). 18 refs., 8 tabs., 14 figs

  10. Gender codes why women are leaving computing

    CERN Document Server

    Misa, Thomas J

    2010-01-01

    The computing profession is facing a serious gender crisis. Women are abandoning the computing field at an alarming rate. Fewer are entering the profession than anytime in the past twenty-five years, while too many are leaving the field in mid-career. With a maximum of insight and a minimum of jargon, Gender Codes explains the complex social and cultural processes at work in gender and computing today. Edited by Thomas Misa and featuring a Foreword by Linda Shafer, Chair of the IEEE Computer Society Press, this insightful collection of essays explores the persisting gender imbalance in computing and presents a clear course of action for turning things around.

  11. Generic programming for deterministic neutron transport codes

    International Nuclear Information System (INIS)

    Plagne, L.; Poncot, A.

    2005-01-01

    This paper discusses the implementation of neutron transport codes via generic programming techniques. Two different Boltzmann equation approximations have been implemented, namely the Sn and SPn methods. This implementation experiment shows that generic programming allows us to improve maintainability and readability of source codes with no performance penalties compared to classical approaches. In the present implementation, matrices and vectors as well as linear algebra algorithms are treated separately from the rest of source code and gathered in a tool library called 'Generic Linear Algebra Solver System' (GLASS). Such a code architecture, based on a linear algebra library, allows us to separate the three different scientific fields involved in transport codes design: numerical analysis, reactor physics and computer science. Our library handles matrices with optional storage policies and thus applies both to Sn code, where the matrix elements are computed on the fly, and to SPn code where stored matrices are used. Thus, using GLASS allows us to share a large fraction of source code between Sn and SPn implementations. Moreover, the GLASS high level of abstraction allows the writing of numerical algorithms in a form which is very close to their textbook descriptions. Hence the GLASS algorithms collection, disconnected from computer science considerations (e.g. storage policy), is very easy to read, to maintain and to extend. (authors)
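
    The storage-policy idea can be mimicked outside C++ templates. The sketch below is a Python stand-in (not the GLASS API) showing one matrix-vector algorithm, written once, serving both an Sn-style on-the-fly matrix and an SPn-style stored matrix:

```python
import numpy as np

class StoredMatrix:
    """SPn-style policy: elements are precomputed and stored."""
    def __init__(self, data):
        self.data = np.asarray(data)
        self.shape = self.data.shape
    def row(self, i):
        return self.data[i]

class OnTheFlyMatrix:
    """Sn-style policy: elements are computed on demand, never stored."""
    def __init__(self, shape, element):
        self.shape = shape
        self._element = element       # element(i, j) -> matrix entry
    def row(self, i):
        return np.array([self._element(i, j) for j in range(self.shape[1])])

def matvec(A, x):
    """One generic algorithm, usable unchanged with either storage policy."""
    return np.array([A.row(i) @ x for i in range(A.shape[0])])

x = np.ones(3)
print(matvec(StoredMatrix([[2, 0, 0], [0, 2, 0], [0, 0, 2]]), x))
print(matvec(OnTheFlyMatrix((3, 3), lambda i, j: 2.0 if i == j else 0.0), x))
```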

  12. Probable relationship between partitions of the set of codons and the origin of the genetic code.

    Science.gov (United States)

    Salinas, Dino G; Gallardo, Mauricio O; Osorio, Manuel I

    2014-03-01

    Here we study the distribution of randomly generated partitions of the set of amino acid-coding codons. Some results apply findings from a previous work on the Stirling numbers of the second kind and triplet codes, both to the case of triplet codes having four stop codons, as in the mammalian mitochondrial genetic code, and to hypothetical doublet codes. Extending previous results, in this work it is found that the most probable number of blocks of synonymous codons in a genetic code is similar to the number of amino acids when there are four stop codons, as it also could be for a primigenious doublet code. We also study the integer partitions associated with patterns of synonymous codons and show, for the canonical code, that the standard deviation inside an integer partition is one of the most probable. We think that, in some early epoch, the genetic code might have had a maximum of disorder or entropy, independent of the assignment between codons and amino acids, reaching a state similar to the "code freeze" proposed by Francis Crick. In later stages, deterministic rules may have reassigned codons to amino acids, forming the natural codes, such as the canonical code, but keeping the numerical features describing the set partitions and the integer partitions, like "fossil numbers"; both kinds of partitions concern the set of amino acid-coding codons. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
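
    The counting behind the "most probable number of blocks" can be reproduced directly: under a uniformly random partition of n codons, the probability of k blocks is proportional to the Stirling number of the second kind S(n,k), so the mode is the k maximizing S(n,k). A numerical sketch (the paper's model has more structure than this):

```python
def stirling2_row(n):
    """S(n, k) for k = 0..n via S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    row = [1]                       # S(0, 0) = 1
    for m in range(1, n + 1):
        row = [0] + [k * row[k] + row[k - 1] for k in range(1, m)] + [1]
    return row

for n_sense in (61, 60):            # 61 sense codons (3 stops) / 60 (4 stops)
    row = stirling2_row(n_sense)
    k_mode = max(range(1, n_sense + 1), key=lambda k: row[k])
    print(f"{n_sense} codons: most probable number of blocks = {k_mode}")
```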

  13. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  14. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    African Journals Online (AJOL)

    Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...

  15. Dependency of maximum goitrogenic response on some minimal level of thyroid hormone production

    International Nuclear Information System (INIS)

    March, B.E.; Poon, R.

    1981-01-01

    Thyroidal activity was studied in chicks given dietary thiouracil in conjunction with daily doses of thyroxine and with diets adequate and deficient in iodine. DL-thyroxine administered at doses up to 1.0 microgram per day for 10 to 12 days had no effect or slightly increased thyroid weight. Both the epithelial and colloid components of the thyroid gland were increased in response to thiouracil and to thiouracil in combination with low dosages of exogenous thyroxine. Radioiodine uptake was increased above the control with thiouracil and with thiouracil in conjunction with .5 and 1.0 microgram DL-thyroxine given daily. Birds receiving thiouracil, with and without exogenous thyroxine, showed a different pattern of radioiodine uptake and release than the control birds. Thiouracil-treated birds showed a rapid uptake of iodine following its administration, which was followed by a rapid decline immediately after peak accumulation, whereas in control birds thyroidal radioiodine concentration reached a plateau at the maximum concentration attained. The goitrogenic response to thiouracil was much greater when the diet was supplemented with iodine than when the diet was iodine-deficient. Thyroids under iodine deficiency contained greater percentages of epithelial tissue than with iodine-supplemented diets. Thyroid glands of chicks given thiouracil in an iodine-supplemented diet contained much more colloid than glands from iodine-deficient chicks with or without thiouracil. DL-thyroxine at a dosage of .5 microgram per day to chicks given thiouracil in an iodine-adequate diet increased, whereas higher dosages decreased thyroidal colloid. It is concluded that some minimal concentration of thyroid hormone is required for maximum goitrogenic response. It is not clear whether the response is entirely due to an effect on thyrotropin production or whether there is an effect of thyroid hormone on the thyroid gland itself

  16. TASS code topical report. V.1 TASS code technical manual

    International Nuclear Information System (INIS)

    Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.

    1997-02-01

    TASS 1.0 code has been developed at KAERI for the initial and reload non-LOCA safety analysis of the operating PWRs, as well as of the PWRs under construction, in Korea. The TASS code will replace the various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through keyboard operation. The semi-modular configuration used in developing the TASS code enables the user to easily implement new models. The TASS code has been programmed in FORTRAN77, which makes it easy to install and port to different computer environments. The TASS code can be utilized for steady-state simulation as well as for non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). The malfunctions of the control systems, components, and operator actions, and the transients caused by these malfunctions, can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models, including the reactor thermal-hydraulic, reactor core and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and the TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for licensing non-LOCA safety analyses for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs

  17. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  18. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...

  19. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    Science.gov (United States)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is developed more fully than the algorithms currently used in the literature and can be used to achieve more accurate bit assignments. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
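
    A standard way to carry out such a bit-allocation step is greedy marginal analysis under the high-rate model D_k = sigma_k^2 * 2^(-2 b_k): each added bit quarters a coefficient's distortion, so the next bit always goes to the coefficient that currently distorts most. This is a generic sketch, not the dissertation's exact algorithm:

```python
import heapq

def allocate_bits(variances, total_bits):
    """Greedy integer bit allocation for transform coefficients.

    Each added bit quarters a coefficient's distortion (D = var * 4**-bits),
    so always give the next bit to the coefficient with the largest
    current distortion.
    """
    bits = [0] * len(variances)
    # max-heap keyed on current distortion (negated for heapq's min-heap)
    heap = [(-v, k) for k, v in enumerate(variances)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        d, k = heapq.heappop(heap)
        bits[k] += 1
        heapq.heappush(heap, (d / 4.0, k))   # distortion after one more bit
    return bits

# Example: 8 transform coefficients with decaying variances, 16 bits total.
variances = [64.0, 32.0, 16.0, 8.0, 4.0, 2.0, 1.0, 0.5]
print(allocate_bits(variances, 16))   # more bits go to high-variance coefficients
```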

  20. Delay reduction in lossy intermittent feedback for generalized instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2013-10-01

    In this paper, we study the effect of lossy intermittent feedback loss events on the multicast decoding delay performance of generalized instantly decodable network coding. These feedback loss events create uncertainty at the sender about the reception status of different receivers and thus uncertainty in accurately determining subsequent instantly decodable coded packets. To solve this problem, we first identify the different possibilities of uncertain packets at the sender and their probabilities. We then derive the expression of the mean decoding delay. We formulate the Generalized Instantly Decodable Network Coding (G-IDNC) minimum decoding delay problem as a maximum weight clique problem. Since finding the optimal solution is NP-hard, we design a variant of the algorithm employed in [1]. Our algorithm is compared with the two blind graph update approaches proposed in [2] through extensive simulations. Results show that our algorithm outperforms the blind approaches in all situations and achieves a tolerable degradation, relative to perfect feedback, for large feedback loss periods. © 2013 IEEE.
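
    Because maximum weight clique is NP-hard, practical schemes fall back on heuristics over the IDNC graph. The sketch below is a generic greedy heuristic on a vertex-weighted graph, a stand-in for, not a reproduction of, the variant designed in the paper:

```python
def greedy_max_weight_clique(weights, adj):
    """Greedy heuristic for maximum weight clique.

    weights -- dict vertex -> weight (e.g., urgency of an IDNC coding opportunity)
    adj     -- dict vertex -> set of adjacent vertices (requests combinable
               into one instantly decodable transmission)
    """
    clique, candidates = [], set(weights)
    while candidates:
        v = max(candidates, key=weights.get)    # heaviest remaining vertex
        clique.append(v)
        candidates &= adj[v]                    # keep vertices adjacent to all chosen
    return clique

# Toy IDNC graph: vertices are (receiver, wanted packet) pairs.
weights = {"r1p1": 3.0, "r2p2": 2.5, "r3p1": 2.0, "r4p3": 1.0}
adj = {"r1p1": {"r2p2", "r3p1"}, "r2p2": {"r1p1", "r3p1"},
       "r3p1": {"r1p1", "r2p2"}, "r4p3": set()}
print(greedy_max_weight_clique(weights, adj))  # ['r1p1', 'r2p2', 'r3p1']
```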