WorldWideScience

Sample records for sampling mcmis-ds code

  1. Mammographic Imaging Studies Using the Monte Carlo Image Simulation-Differential Sampling (MCMIS-DS) Code

    International Nuclear Information System (INIS)

    Kuruvilla Verghese

    2002-01-01

    This report summarizes the highlights of the research performed under the 1-year NEER grant from the Department of Energy. The primary goal of this study was to investigate the effects of certain design changes in the Fisher Senoscan mammography system and in the degree of breast compression on the discernability of microcalcifications in calcification clusters often observed in mammograms with tumor lesions. The most important design change that one can contemplate in a digital mammography system to improve resolution of calcifications is the reduction of pixel dimensions of the digital detector. Breast compression is painful to the patient and is thought to be a deterrent to women getting routine mammographic screening. Calcification clusters often serve as markers (indicators) of breast cancer.

  2. A Consistent System for Coding Laboratory Samples

    Science.gov (United States)

    Sih, John C.

    1996-07-01

    A formal laboratory coding system is presented to keep track of laboratory samples. Preliminary useful information regarding the sample (origin and history) is gained without consulting a research notebook. Since this system uses and retains the same research notebook page number for each new experiment (reaction), finding and distinguishing products (samples) of the same or different reactions becomes an easy task. Using this system, multiple products generated from a single reaction can be identified and classified in a uniform fashion. Samples can be stored and filed according to stage and degree of purification, e.g. crude reaction mixtures, recrystallized samples, chromatographed or distilled products.
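
    A toy sketch of what such a notebook-page-based code might look like in practice; the exact format below (page number, reaction letter, product number, purification letter) is an illustrative assumption, not the scheme published in the article.

        # Hypothetical sample-code generator in the spirit of the system described above.
        PURIFICATION_STAGES = {"crude": "C", "recrystallized": "R",
                               "chromatographed": "Q", "distilled": "D"}

        def sample_code(notebook_page, reaction_letter, product_number, stage):
            # e.g. notebook page 112, reaction A, second product, chromatographed -> "112A-2Q"
            return f"{notebook_page}{reaction_letter}-{product_number}{PURIFICATION_STAGES[stage]}"

        print(sample_code(112, "A", 1, "crude"))            # 112A-1C
        print(sample_code(112, "A", 2, "chromatographed"))  # 112A-2Q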

  3. Code Samples Used for Complexity and Control

    Science.gov (United States)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  4. Sample test cases using the environmental computer code NECTAR

    International Nuclear Information System (INIS)

    Ponting, A.C.

    1984-06-01

    This note demonstrates a few of the many different ways in which the environmental computer code NECTAR may be used. Four sample test cases are presented and described to show how NECTAR input data are structured. Edited output is also presented to illustrate the format of the results. Two test cases demonstrate how NECTAR may be used to study radio-isotopes not explicitly included in the code. (U.K.)

  5. A GPU code for analytic continuation through a sampling method

    Directory of Open Access Journals (Sweden)

    Johan Nordström

    2016-01-01

    We here present a code for performing analytic continuation of fermionic Green’s functions and self-energies as well as bosonic susceptibilities on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVidia. Detailed scaling tests are presented, for two different GPUs, in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.
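
    The essence of a sampling-based analytic continuation is to propose random updates to a trial spectral function and accept them according to how well the corresponding imaginary-time data are reproduced. The sketch below is a drastically simplified CPU toy of that idea (fixed frequency grid, a single annealed Metropolis walk, synthetic data), not the Mishchenko algorithm or the GPU code from the paper; the inverse temperature, grids and noise level are all assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        beta = 10.0                                  # inverse temperature (assumed)
        tau = np.linspace(0.0, beta, 64)             # imaginary-time grid
        omega = np.linspace(-5.0, 5.0, 200)          # real-frequency grid
        domega = omega[1] - omega[0]

        # Fermionic kernel K(tau, w) = exp(-tau*w) / (1 + exp(-beta*w)), evaluated stably
        K = np.exp(-np.outer(tau, omega) - np.logaddexp(0.0, -beta * omega))

        # Synthetic "data": G(tau) generated from a known Gaussian spectral function plus noise
        A_true = np.exp(-0.5 * ((omega - 1.0) / 0.5) ** 2)
        A_true /= A_true.sum() * domega
        sigma = 1e-4
        G_data = K @ A_true * domega + rng.normal(0.0, sigma, tau.size)

        def chi2(A):
            return np.sum(((K @ A * domega - G_data) / sigma) ** 2)

        # Stochastic sampling of A: random transfers of spectral weight between grid points,
        # accepted with an annealed Metropolis rule on chi^2
        A = np.full(omega.size, 1.0 / (omega.size * domega))   # flat, normalized start
        cost, temp = chi2(A), 10.0
        for step in range(50_000):
            i, j = rng.integers(omega.size, size=2)
            move = rng.uniform(0.0, A[i])                      # weight moved from bin i to bin j
            trial = A.copy()
            trial[i] -= move
            trial[j] += move
            new_cost = chi2(trial)
            if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
                A, cost = trial, new_cost
            temp = max(temp * 0.9999, 1.0)                     # slowly anneal the acceptance scale

        print("final chi^2 per data point:", cost / tau.size)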

  6. Monte Carlo burnup codes acceleration using the correlated sampling method

    International Nuclear Information System (INIS)

    Dieudonne, C.

    2013-01-01

    For several years, Monte Carlo burnup/depletion codes have appeared which couple a Monte Carlo code, to simulate the neutron transport, with deterministic methods that handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way allows fine 3-dimensional effects to be tracked and avoids the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: the successive burnup steps may indeed be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. We first present this method and give details on the perturbative technique used, namely correlated sampling. We then develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Next, the implementation of this method in the TRIPOLI-4 code is discussed, along with the precise calculation scheme used to bring an important speed-up of the depletion calculation. We begin by validating and optimizing the perturbed depletion scheme with the calculation of a REP-like fuel cell depletion. The technique is then used to calculate the depletion of a REP-like assembly, studied at the beginning of its cycle. After having validated the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author) [fr]
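
    A minimal illustration of the correlated-sampling idea (not the TRIPOLI-4 implementation): the same random histories serve both the nominal and a perturbed problem, each perturbed score being weighted by the ratio of the perturbed to the nominal sampling density. The one-dimensional attenuation problem and the cross-section values below are assumptions chosen only to keep the example self-contained.

        import numpy as np

        rng = np.random.default_rng(42)
        thickness = 5.0        # slab thickness in cm (hypothetical)
        sigma0 = 1.0           # nominal total cross section, 1/cm (hypothetical)
        sigma1 = 1.1           # perturbed cross section, +10%
        n = 1_000_000

        # Free paths sampled once, from the *nominal* density p0(x) = sigma0*exp(-sigma0*x)
        x = rng.exponential(1.0 / sigma0, n)

        # Nominal estimate: probability of crossing the slab without a collision
        t0 = np.mean(x > thickness)

        # Correlated-sampling estimate for the perturbed problem: reuse the same paths,
        # weighting each score by the density ratio p1(x)/p0(x)
        w = (sigma1 / sigma0) * np.exp(-(sigma1 - sigma0) * x)
        t1 = np.mean((x > thickness) * w)

        print(f"nominal   T = {t0:.2e}   (exact {np.exp(-sigma0 * thickness):.2e})")
        print(f"perturbed T = {t1:.2e}   (exact {np.exp(-sigma1 * thickness):.2e})")
        print(f"perturbation dT = {t1 - t0:.2e}")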

  7. Aerosol Sampling and Transport Efficiency Calculation (ASTEC) and application to Surtsey/DCH aerosol sampling system: Code version 1.0: Code description and user's manual

    International Nuclear Information System (INIS)

    Yamano, N.; Brockmann, J.E.

    1989-05-01

    This report describes the features and use of the Aerosol Sampling and Transport Efficiency Calculation (ASTEC) code. The ASTEC code has been developed to assess aerosol transport efficiency in source term experiments at Sandia National Laboratories. The code also has broad application to aerosol sampling and transport efficiency calculations in general, as well as to aerosol transport considerations in nuclear reactor safety issues. 32 refs., 31 figs., 7 tabs

  8. Sample problem manual for benchmarking of cask analysis codes

    International Nuclear Information System (INIS)

    Glass, R.E.

    1988-02-01

    A series of problems has been defined to evaluate structural and thermal codes. These problems were designed to simulate the hypothetical accident conditions given in Title 10 of the Code of Federal Regulations, Part 71 (10CFR71) while retaining simple geometries. This produced a problem set that exercises the ability of the codes to model pertinent physical phenomena without requiring extensive use of computer resources. The solutions that are presented are consensus solutions based on computer analyses done by both national laboratories and industry in the United States, United Kingdom, France, Italy, Sweden, and Japan. The intent of this manual is to provide code users with a set of standard structural and thermal problems and solutions which can be used to evaluate individual codes. 19 refs., 19 figs., 14 tabs

  9. Automated bar coding of air samples at Hanford (ABCASH)

    International Nuclear Information System (INIS)

    Troyer, G.L.; Brayton, D.D.; McNeece, S.G.

    1992-10-01

    This article describes the basis, main features and benefits of an automated system for tracking and reporting radioactive air particulate samples. The system was developed due to a recognized need for improving the quality and integrity of air sample data related to personnel and environmental protection. The capture, storage, and retrieval of air sample data are described. The automation of the associated data input eliminates a large potential for human error. The system utilizes personal computers, handheld computers, a commercial personal computer database package, commercial programming languages, and complete documentation to satisfy the system's automation objective.

  10. Use the Bar Code System to Improve Accuracy of the Patient and Sample Identification.

    Science.gov (United States)

    Chuang, Shu-Hsia; Yeh, Huy-Pzu; Chi, Kun-Hung; Ku, Hsueh-Chen

    2018-01-01

    Timely and correct sample collection is highly related to patient safety. The sample error rate was 11.1% during January to April 2016, because of mislabeled patient information and wrong sample containers. We developed a bar code based "Specimens Identify System" through process reengineering of TRM, using bar code scanners, added sample container instructions, and a mobile app. In conclusion, the bar code system improved patient safety and created a green environment.

  11. Design of sampling tools for Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Zhang Baoyin; Deng Li

    2012-01-01

    A class of sampling tools for the general Monte Carlo particle transport code JMCT is designed. Two ways are provided to sample from distributions. One is the use of special sampling methods for specific distributions; the other is the use of general sampling methods for arbitrary discrete distributions and one-dimensional continuous distributions on a finite interval. Some open source codes are included in the general sampling method for the maximum convenience of users. The results show that distributions commonly encountered in particle transport can be sampled correctly with these tools and that user convenience is assured. (authors)
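
    The two general-purpose cases named above (an arbitrary discrete distribution and a one-dimensional continuous distribution on a finite interval) can both be handled with a tabulated inverse CDF. The sketch below shows that generic approach; it is not the JMCT implementation, and the example distributions are placeholders.

        import numpy as np

        rng = np.random.default_rng(1)

        def sample_discrete(p, n):
            """Inverse-CDF sampling from an arbitrary discrete distribution given by weights p."""
            cdf = np.cumsum(np.asarray(p, dtype=float))
            cdf /= cdf[-1]
            return np.searchsorted(cdf, rng.random(n))

        def sample_continuous(pdf, a, b, n, ngrid=2048):
            """Sampling from a one-dimensional pdf on the finite interval [a, b] via a tabulated inverse CDF."""
            x = np.linspace(a, b, ngrid)
            f = pdf(x)
            cdf = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))))
            cdf /= cdf[-1]
            return np.interp(rng.random(n), cdf, x)

        # Placeholder examples: a 4-point discrete law and a quadratic density on [0, 2]
        print(np.bincount(sample_discrete([0.1, 0.2, 0.3, 0.4], 100_000)) / 100_000)
        print(sample_continuous(lambda x: x**2, 0.0, 2.0, 5).round(3))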

  12. Application of bar codes to the automation of analytical sample data collection

    International Nuclear Information System (INIS)

    Jurgensen, H.A.

    1986-01-01

    The Health Protection Department at the Savannah River Plant collects 500 urine samples per day for tritium analyses. Prior to automation, all sample information was compiled manually. Bar code technology was chosen for automating this program because it provides a more accurate, efficient, and inexpensive method for data entry. The system has three major functions: sample labeling, which is accomplished at remote bar code label stations composed of an Intermec 8220 (Intermec Corp.) interfaced to an IBM-PC; data collection, which is done on a central VAX 11/730 (Digital Equipment Corp.), where bar code readers are used to log in samples to be analyzed on liquid scintillation counters and the VAX 11/730 processes the data and generates reports; and data storage, which is on the VAX 11/730 and backed up on the plant's central computer. A brief description of several other bar code applications at the Savannah River Plant is also presented.

  13. Code Betal to calculation Alpha/Beta activities in environmental samples

    International Nuclear Information System (INIS)

    Romero, L.; Travesi, A.

    1983-01-01

    A code, BETAL, written in FORTRAN IV, was developed to automate the calculation and presentation of the results of total alpha-beta activity measurements in environmental samples. This code performs the calculations needed to transform the activities measured in total counts into pCi/l, taking into account the efficiency of the detector used and the other necessary parameters. Furthermore, it evaluates the standard deviation of the result and calculates the lower limit of detection for each measurement. The code is written in an interactive way, as a screen-operator dialogue, asking for the data needed to perform the calculation of the activity in each case through a screen prompt. The code can be executed from any screen-and-keyboard terminal (whose computer accepts FORTRAN IV) with a printer connected to the said computer. (Author) 5 refs
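
    A minimal sketch of the kind of arithmetic described above: net counts are converted to pCi/l using the detector efficiency and sample volume, with a 1-sigma counting uncertainty and a Currie-type lower limit of detection. The conversion factor and formulas are standard textbook forms and the numbers are illustrative; nothing here is taken from the BETAL source.

        import math

        def gross_alpha_beta_activity(gross_counts, background_counts, count_time_min,
                                      efficiency, volume_l):
            """Convert total counts to pCi/l, with a 1-sigma uncertainty and a Currie-type LLD.

            Assumed conversion: 1 pCi = 2.22 disintegrations per minute.
            """
            net_cpm = (gross_counts - background_counts) / count_time_min
            denom = efficiency * volume_l * 2.22
            activity = net_cpm / denom
            # 1-sigma counting uncertainty (Poisson statistics on gross and background counts)
            sigma = math.sqrt(gross_counts + background_counts) / count_time_min / denom
            # Currie lower limit of detection, in the same units
            lld = (2.71 + 4.65 * math.sqrt(background_counts)) / count_time_min / denom
            return activity, sigma, lld

        a, s, lld = gross_alpha_beta_activity(gross_counts=480, background_counts=120,
                                              count_time_min=100, efficiency=0.25, volume_l=0.5)
        print(f"activity = {a:.2f} +/- {s:.2f} pCi/l   (LLD = {lld:.2f} pCi/l)")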

  14. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and the proposed scheme therefore has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength of recovering fine details and sharp edges at low bit-rates.
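
    A sketch of the encoder idea as described: filter the image with a local random binary kernel instead of a low-pass filter, then keep one polyphase component. Kernel size, normalization and the down-sampling factor are assumptions of this sketch, not parameters from the paper.

        import numpy as np
        from scipy.signal import convolve2d

        rng = np.random.default_rng(3)

        def random_binary_measurements(image, factor=2, ksize=3):
            """Local random measurements: filter with a random binary kernel, then polyphase down-sample."""
            kernel = rng.integers(0, 2, (ksize, ksize)).astype(float)
            kernel /= max(kernel.sum(), 1.0)                     # keep measurements in the pixel range
            filtered = convolve2d(image, kernel, mode="same", boundary="symm")
            return filtered[::factor, ::factor]                  # one polyphase component

        image = rng.random((64, 64))
        measurements = random_binary_measurements(image)
        print(image.shape, "->", measurements.shape)             # (64, 64) -> (32, 32)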

  15. Study on a new meteorological sampling scheme developed for the OSCAAR code system

    International Nuclear Information System (INIS)

    Liu Xinhe; Tomita, Kenichi; Homma, Toshimitsu

    2002-03-01

    One important step in Level-3 Probabilistic Safety Assessment is meteorological sequence sampling. Previous studies were mainly related to code systems using the straight-line plume model, and more effort is needed for those using the trajectory puff model, such as the OSCAAR code system. This report describes the development of a new meteorological sampling scheme for the OSCAAR code system that explicitly considers population distribution. The principles set for the development of this new sampling scheme include completeness, appropriate stratification, optimum allocation and practicability. In this report, the procedures of the new sampling scheme and its application are discussed. The calculation results illustrate that, although it is quite difficult to optimize the stratification of meteorological sequences based on a few environmental parameters, the new scheme does gather the most adverse conditions in a single subset of meteorological sequences. The size of this subset may be as small as a few dozen, so that the tail of the complementary cumulative distribution function can remain relatively stable across different trials of the probabilistic consequence assessment code. (author)
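
    The stratification and optimum-allocation principles mentioned above are the classical ones from survey sampling; a minimal sketch of Neyman (optimum) allocation follows. The strata, their sizes and spreads are hypothetical and merely stand in for groups of meteorological sequences of increasing severity.

        import numpy as np

        def neyman_allocation(stratum_sizes, stratum_stddevs, total_samples):
            """Optimum (Neyman) allocation: sample size per stratum proportional to N_h * S_h."""
            weights = np.asarray(stratum_sizes, float) * np.asarray(stratum_stddevs, float)
            return np.maximum(np.round(total_samples * weights / weights.sum()).astype(int), 1)

        # Hypothetical example: four strata of meteorological sequences grouped by a severity index;
        # the small "adverse" stratum receives a disproportionately large share of the sample.
        sizes = [5000, 2500, 800, 60]          # sequences per stratum
        stddevs = [0.2, 0.5, 1.0, 4.0]         # spread of the consequence measure within each stratum
        print(neyman_allocation(sizes, stddevs, total_samples=144))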

  16. MUP, CEC-DES, STRADE. Codes for uncertainty propagation, experimental design and stratified random sampling techniques

    International Nuclear Information System (INIS)

    Amendola, A.; Astolfi, M.; Lisanti, B.

    1983-01-01

    The report describes how to use the codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design) for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems
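
    For reference, a Latin hypercube design of the kind STRADE builds on splits each variable's range into equal-probability slices, uses every slice exactly once, and pairs the slices at random across variables. The sketch below is a generic illustration of that construction, not the STRADE code.

        import numpy as np

        def latin_hypercube(n_samples, n_vars, seed=7):
            """Latin hypercube sample on the unit hypercube: one point per equal-probability slice
            of every variable, with the slices paired at random across variables."""
            rng = np.random.default_rng(seed)
            slices = np.argsort(rng.random((n_samples, n_vars)), axis=0)   # a random permutation per column
            return (slices + rng.random((n_samples, n_vars))) / n_samples

        design = latin_hypercube(10, 3)
        print(design.round(3))
        # Map columns to physical ranges (or to quantiles of any marginal distribution), e.g.:
        # x = lower + (upper - lower) * design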

  17. Simulative Investigation on Spectral Efficiency of Unipolar Codes based OCDMA System using Importance Sampling Technique

    Science.gov (United States)

    Farhat, A.; Menif, M.; Rezig, H.

    2013-09-01

    This paper analyses the spectral efficiency of an Optical Code Division Multiple Access (OCDMA) system using the Importance Sampling (IS) technique. We consider three configurations of the OCDMA system, namely Direct Sequence (DS), Spectral Amplitude Coding (SAC) and Fast Frequency Hopping (FFH), that exploit Fiber Bragg Grating (FBG) based encoders/decoders. We evaluate the spectral efficiency of the considered system by taking into consideration the effect of different families of unipolar codes for both coherent and incoherent sources. The results show that the spectral efficiency of the OCDMA system with a coherent source is higher than in the incoherent case. We also demonstrate that DS-OCDMA outperforms the other two in terms of spectral efficiency under all conditions.

  18. HLA-E regulatory and coding region variability and haplotypes in a Brazilian population sample.

    Science.gov (United States)

    Ramalho, Jaqueline; Veiga-Castelli, Luciana C; Donadi, Eduardo A; Mendes-Junior, Celso T; Castelli, Erick C

    2017-11-01

    The HLA-E gene is characterized by low but wide expression in different tissues. HLA-E is considered a conserved gene, being one of the least polymorphic class I HLA genes. The HLA-E molecule interacts with Natural Killer cell receptors and T lymphocyte receptors, and might activate or inhibit immune responses depending on the peptide associated with HLA-E and on which receptors HLA-E interacts with. Variable sites within the HLA-E regulatory and coding segments may influence the gene function by modifying its expression pattern or encoded molecule, thus influencing its interaction with receptors and the peptide. Here we propose an approach to evaluate the gene structure, haplotype pattern and the complete HLA-E variability, including regulatory (promoter and 3'UTR) and coding segments (with introns), by using massively parallel sequencing. We investigated the variability of 420 samples from a very admixed population, the Brazilian, by using this approach. Considering a segment of about 7 kb, 63 variable sites were detected, arranged into 75 extended haplotypes. We detected 37 different promoter sequences (but few frequent ones), 27 different coding sequences (15 representing new HLA-E alleles) and 12 haplotypes at the 3'UTR segment, two of them presenting a summed frequency of 90%. Despite the number of coding alleles, they encode mainly two different full-length molecules, known as E*01:01 and E*01:03, which correspond to about 90% of all. In addition, differently from what has been previously observed for other non-classical HLA genes, the relationship among the HLA-E promoter, coding and 3'UTR haplotypes is not straightforward, because the same promoter and 3'UTR haplotypes were many times associated with different HLA-E coding haplotypes. These data reinforce the presence of only two main full-length HLA-E molecules encoded by the many HLA-E alleles detected in our population sample. In addition, this data does indicate that the distal HLA-E promoter is by

  19. Calculation code of heterogeneity effects for analysis of small sample reactivity worth

    International Nuclear Information System (INIS)

    Okajima, Shigeaki; Mukaiyama, Takehiko; Maeda, Akio.

    1988-03-01

    The discrepancy between experimental and calculated central reactivity worths has been one of the most significant concerns in the analysis of fast reactor critical experiments. Two effects have been pointed out that should be taken into account in the calculation as possible causes of the discrepancy: one is the local heterogeneity effect, which is associated with the measurement geometry; the other is the heterogeneity effect on the distribution of the intracell adjoint flux. In order to evaluate these effects in the analysis of FCA actinide sample reactivity worths, a calculation code based on the collision probability method was developed. The code can handle the sample size effect, which is one of the local heterogeneity effects, and also the intracell adjoint heterogeneity effect. (author)

  20. Correlated sampling added to the specific purpose Monte Carlo code McPNL for neutron lifetime log responses

    International Nuclear Information System (INIS)

    Mickael, M.; Verghese, K.; Gardner, R.P.

    1989-01-01

    The specific purpose neutron lifetime oil well logging simulation code, McPNL, has been rewritten for greater user-friendliness and faster execution. Correlated sampling has been added to the code to enable studies of relative changes in the tool response caused by environmental changes. The absolute responses calculated by the code have been benchmarked against laboratory test pit data. The relative responses from correlated sampling are not directly benchmarked, but they are validated using experimental and theoretical results

  1. Towards an Integrated QR Code Biosensor: Light-Driven Sample Acquisition and Bacterial Cellulose Paper Substrate.

    Science.gov (United States)

    Yuan, Mingquan; Jiang, Qisheng; Liu, Keng-Ku; Singamaneni, Srikanth; Chakrabartty, Shantanu

    2018-06-01

    This paper addresses two key challenges toward an integrated forward error-correcting biosensor based on our previously reported self-assembled quick-response (QR) code. The first challenge involves the choice of the paper substrate for printing and self-assembling the QR code. We have compared four different substrates: regular printing paper, Whatman filter paper, nitrocellulose membrane and lab-synthesized bacterial cellulose. We report that, out of the four substrates, bacterial cellulose outperforms the others in terms of probe (gold nanorod) and ink retention capability. The second challenge involves remote activation of the analyte sampling and the QR code self-assembly process. In this paper, we use light as a trigger signal and a graphite layer as a light-absorbing material. The resulting change in temperature due to infrared absorption leads to a temperature gradient that then exerts a diffusive force driving the analyte toward the regions of self-assembly. The working principle has been verified in this paper using assembled biosensor prototypes where we demonstrate a higher sample flow rate due to light-induced thermal gradients.

  2. Image and Dose Simulation in Support of New Imaging Modalities

    International Nuclear Information System (INIS)

    Kuruvilla Verghese

    2002-01-01

    This report summarizes the highlights of the research performed under the 2-year NEER grant from the Department of Energy. The primary outcome of the work was a new Monte Carlo code, MCMIS-DS, for Monte Carlo mammography image simulation including differential sampling. The code was written to generate simulated images and dose distributions for two different new digital x-ray imaging modalities, namely synchrotron imaging (SI) and a slot-geometry digital mammography system called the Fisher Senoscan. A differential sampling scheme was added to the code to generate, in a single execution of the code, multiple images that include variations in the parameters of the measurement system and the object. The code is to serve multiple purposes: (1) to answer questions regarding the contribution of scattered photons to images, (2) to be used in design optimization studies, and (3) to do up to second-order perturbation studies to assess the effects of variations in design parameters and/or physical parameters of the object (the breast) without having to re-run the code for each set of varied parameters. The accuracy and fidelity of the code were validated by a large variety of benchmark studies using published data and also using experimental results from mammography phantoms on both imaging modalities.

  3. A code for quantitative analysis of light elements in thick samples by PIGE

    International Nuclear Information System (INIS)

    Mateus, R.; Jesus, A.P.; Ribeiro, J.P.

    2005-01-01

    This work presents a code developed for the quantitative analysis of light elements in thick samples by PIGE. The new method avoids the use of standards in the analysis, using a formalism similar to the one used for PIXE analysis, in which the excitation function of the nuclear reaction responsible for the gamma-ray emission is integrated along the depth of the sample. In order to check the validity of the code, we present results for the analysis of lithium, boron, fluorine and sodium in thick samples. For this purpose, the experimental excitation functions of the reactions ⁷Li(p,p'γ)⁷Li, ¹⁰B(p,αγ)⁷Be, ¹⁹F(p,p'γ)¹⁹F and ²³Na(p,p'γ)²³Na were used as input. For the stopping-power cross-section calculations, the semi-empirical equations of Ziegler et al. and Bragg's rule were used. Agreement between the experimental and the calculated gamma-ray yields was always better than 7.5%
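
    The formalism described above boils down to a thick-target yield proportional to the integral of sigma(E)/S(E) from the bottom of the excitation function up to the beam energy, with the compound stopping power obtained from Bragg's rule. The sketch below illustrates that integration with crude placeholder data (a toy cross-section shape and 1/E elemental stopping powers); it is not the published code or its measured excitation functions.

        import numpy as np

        # Placeholder excitation function: toy Gaussian shape (barn) over a proton-energy grid (MeV)
        E_grid = np.linspace(0.5, 2.5, 81)
        sigma = 1e-3 * np.exp(-((E_grid - 1.8) / 0.3) ** 2)

        def stopping_power_compound(E, elements):
            """Bragg's rule: compound mass stopping power as the weight-fraction-scaled sum of
            elemental stopping powers; the 1/E elemental forms used here are crude placeholders."""
            return sum(w * (a / E) for (a, w) in elements)      # MeV cm^2/g (toy values)

        def thick_target_yield(E0, elements, n_steps=400):
            """Integrate sigma(E)/S(E) from the bottom of the table up to the beam energy E0."""
            E = np.linspace(E_grid[0], E0, n_steps)
            integrand = np.interp(E, E_grid, sigma) / stopping_power_compound(E, elements)
            return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(E))   # trapezoid rule

        # Toy compound: two elements given as (placeholder stopping constant, weight fraction)
        elements = [(150.0, 0.6), (90.0, 0.4)]
        print(f"relative thick-target yield at 2.4 MeV: {thick_target_yield(2.4, elements):.3e}")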

  4. Importance sampling implemented in the code PRIZMA for deep penetration and detection problems in reactor physics

    International Nuclear Information System (INIS)

    Kandiev, Y.Z.; Zatsepin, O.V.

    2013-01-01

    At RFNC-VNIITF, the PRIZMA code, which has been developed for more than 30 years, is used to model radiation transport by the Monte Carlo method. The code implements individual and coupled tracking of neutrons, photons, electrons, positrons and ions in one-dimensional (1D), 2D or 3D geometry. Attendance estimators are used for tallying, i.e., estimators whose scores are only nonzero for particles which cross a region or surface of interest. Importance sampling is used to make deep penetration and detection calculations more effective. However, its application to reactor analysis has its own peculiarities and required further development. The paper reviews the methods used for deep penetration and detection calculations by PRIZMA. It describes how these calculations differ when applied to reactor analysis and how we compute approximate importance functions and parameters for biased distributions. Methods to control the statistical weight of particles are also discussed. A number of test and applied calculations done for the purpose of verification are provided. They are shown to agree either with asymptotic solutions, where these exist, or with the results of analog calculations or predictions by other codes. The applied calculations include the estimation of the ex-core detector response from neutron sources arranged in the core, and the estimation of the in-core detector response. (authors)
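
    A minimal illustration of importance sampling for a deep-penetration tally (not the PRIZMA machinery): path lengths are drawn from a stretched exponential, and each score is corrected by the ratio of the true to the biased density. The attenuation problem and its parameters are assumptions made only to keep the example self-contained.

        import numpy as np

        rng = np.random.default_rng(11)
        sigma, depth, n = 1.0, 20.0, 200_000     # attenuation coefficient (1/mfp), shield depth, histories

        # Analog estimate of the transmission probability exp(-sigma*depth) ~ 2e-9:
        # at this sample size essentially no analog history scores, so the estimate is useless
        analog = np.mean(rng.exponential(1.0 / sigma, n) > depth)

        # Importance sampling: draw path lengths from a stretched exponential q(x) = b*exp(-b*x)
        # with b << sigma, and weight every score by the density ratio p(x)/q(x)
        b = 1.0 / depth
        x = rng.exponential(1.0 / b, n)
        w = (sigma / b) * np.exp(-(sigma - b) * x)
        biased = np.mean((x > depth) * w)

        print(f"analog estimate : {analog:.3e}")
        print(f"biased estimate : {biased:.3e}   (exact {np.exp(-sigma * depth):.3e})")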

  5. A Sample Calculation of Tritium Production and Distribution at VHTR by using TRITGO Code

    International Nuclear Information System (INIS)

    Park, Ik Kyu; Kim, D. H.; Lee, W. J.

    2007-03-01

    The TRITGO code was developed for estimating the tritium production and distribution of high-temperature gas-cooled reactors (HTGR), especially the GTMHR350 by General Atomics. In this study, the tritium production and distribution of NHDD was analyzed by using the TRITGO code. The TRITGO code was improved by a simple method to calculate the tritium amount in the IS loop. The improved TRITGO input for the sample calculation was prepared based on GTMHR600, because the NHDD has been designed with reference to GTMHR600. The GTMHR350 input related to the tritium distribution was used directly. The calculated tritium activity in the hydrogen produced in the IS loop is 0.56 Bq/g-H2. This is a very satisfying result considering that the tritium activity limit of the Japanese regulatory guide is 5.6 Bq/g-H2. The basic system to analyze the tritium production and distribution by using TRITGO was successfully constructed. However, there are some uncertainties in the tritium distribution models and in the suggested method for the IS loop, and the current input was not for NHDD but for GTMHR600. A qualitative analysis of the distribution model and the IS-loop model and a quantitative analysis of the input should be done in the future.

  6. A Sample Calculation of Tritium Production and Distribution at VHTR by using TRITGO Code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik Kyu; Kim, D. H.; Lee, W. J

    2007-03-15

    The TRITGO code was developed for estimating the tritium production and distribution of high-temperature gas-cooled reactors (HTGR), especially the GTMHR350 by General Atomics. In this study, the tritium production and distribution of NHDD was analyzed by using the TRITGO code. The TRITGO code was improved by a simple method to calculate the tritium amount in the IS loop. The improved TRITGO input for the sample calculation was prepared based on GTMHR600, because the NHDD has been designed with reference to GTMHR600. The GTMHR350 input related to the tritium distribution was used directly. The calculated tritium activity in the hydrogen produced in the IS loop is 0.56 Bq/g-H2. This is a very satisfying result considering that the tritium activity limit of the Japanese regulatory guide is 5.6 Bq/g-H2. The basic system to analyze the tritium production and distribution by using TRITGO was successfully constructed. However, there are some uncertainties in the tritium distribution models and in the suggested method for the IS loop, and the current input was not for NHDD but for GTMHR600. A qualitative analysis of the distribution model and the IS-loop model and a quantitative analysis of the input should be done in the future.

  7. CERPI and CEREL, two computer codes for the automatic identification and determination of gamma emitters in thermal neutron activated samples

    International Nuclear Information System (INIS)

    Giannini, M.; Oliva, P.R.; Ramorino, C.

    1978-01-01

    A description is given of a computer code which automatically analyses gamma-ray spectra obtained with Ge(Li) detectors. The program contains such features as automatic peak location and fitting, determination of peak energies and intensities, nuclide identification and calculation of masses and errors. Finally, the results obtained with our computer code for a lunar sample are reported and briefly discussed

  8. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled a three-dimensional modeling of the actual source and geometry configuration including reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. Simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied in order to determine neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample

  9. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    International Nuclear Information System (INIS)

    Zhu, T.

    2015-01-01

    Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation and safety related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the code of choice for routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently, in 2011, was the capability of calculating continuous-energy k_eff sensitivities to nuclear data demonstrated in certain M-C codes by using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled in accordance with presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of the nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX's ACE format of nuclear data, which also gives NUSS compatibility with the MCNP and SERPENT M-C codes. In contrast, multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner, due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same

  10. Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, T.

    2015-07-01

    Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation and safety related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the code of choice for routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently, in 2011, was the capability of calculating continuous-energy k_eff sensitivities to nuclear data demonstrated in certain M-C codes by using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled in accordance with presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of the nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX's ACE format of nuclear data, which also gives NUSS compatibility with the MCNP and SERPENT M-C codes. In contrast, multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner, due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same

  11. Computer code ENDSAM for random sampling and validation of the resonance parameters covariance matrices of some major nuclear data libraries

    International Nuclear Information System (INIS)

    Plevnik, Lucijan; Žerovnik, Gašper

    2016-01-01

    Highlights: • Methods for random sampling of correlated parameters. • Link to open-source code for sampling of resonance parameters in ENDF-6 format. • Validation of the code on realistic and artificial data. • Validation of covariances in three major contemporary nuclear data libraries. - Abstract: Methods for random sampling of correlated parameters are presented. The methods are implemented for sampling of resonance parameters in ENDF-6 format and a link to the open-source code ENDSAM is given. The code has been validated on realistic data. Additionally, consistency of covariances of resonance parameters of three major contemporary nuclear data libraries (JEFF-3.2, ENDF/B-VII.1 and JENDL-4.0u2) has been checked.
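
    The core operation named above, random sampling of correlated parameters, is commonly done by drawing standard normal deviates and correlating them with the Cholesky factor of the covariance matrix. The sketch below shows that generic step; the "resonance parameters" and their covariance are hypothetical placeholders, and this is not the ENDSAM code or an ENDF-6 reader.

        import numpy as np

        rng = np.random.default_rng(5)

        def sample_correlated(mean, covariance, n_samples):
            """Draw correlated parameter sets as mean + z @ L.T, with L the Cholesky factor of the covariance."""
            L = np.linalg.cholesky(covariance)
            z = rng.standard_normal((n_samples, len(mean)))
            return np.asarray(mean) + z @ L.T

        # Hypothetical resonance parameters (energy, neutron width, gamma width) with a toy relative covariance
        mean = [6.67, 1.5e-3, 2.3e-2]
        rel_cov = np.array([[1e-6, 0.0,  0.0 ],
                            [0.0,  4e-2, 1e-2],
                            [0.0,  1e-2, 9e-3]])
        cov = rel_cov * np.outer(mean, mean)                     # relative -> absolute covariance
        samples = sample_correlated(mean, cov, 1000)
        print("sample mean :", samples.mean(axis=0).round(5))
        print("sample corr :", np.corrcoef(samples, rowvar=False).round(2))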

  12. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the high-efficiency video coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.

  13. Neural correlates of sample-coding and reward-coding in the delay activity of neurons in the entopallium and nidopallium caudolaterale of pigeons (Columba livia).

    Science.gov (United States)

    Johnston, Melissa; Anderson, Catrona; Colombo, Michael

    2017-01-15

    We recorded neuronal activity from the nidopallium caudolaterale, the avian equivalent of mammalian prefrontal cortex, and the entopallium, the avian equivalent of the mammalian visual cortex, in four birds trained on a differential outcomes delayed matching-to-sample procedure in which one sample stimulus was followed by reward and the other was not. Despite similar incidence of reward-specific and reward-unspecific delay cell types across the two areas, overall entopallium delay activity occurred following both rewarded and non-rewarded stimuli, whereas nidopallium caudolaterale delay activity tended to occur following the rewarded stimulus but not the non-rewarded stimulus. These findings are consistent with the view that delay activity in entopallium represents a code of the sample stimulus whereas delay activity in nidopallium caudolaterale represents a code of the possibility of an upcoming reward. However, based on the types of delay cells encountered, cells in NCL also code the sample stimulus and cells in ENTO are influenced by reward. We conclude that both areas support the retention of information, but that the activity in each area is differentially modulated by factors such as reward and attentional mechanisms. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. K0-PGNAA of pollutants in aqueous samples using MCNP code

    International Nuclear Information System (INIS)

    Hamid, A.; Shahbunder, H.

    2014-01-01

    Prompt γ-neutron activation analysis (PGNAA) using the k₀ method, employing the 1951.1 keV γ-line of the ³⁵Cl(n,γ)³⁶Cl thermal neutron reaction as monostandard comparator, is described. The method has been applied and evaluated using the anti-Compton prompt γ-ray neutron activation analysis facility with a ²⁵²Cf neutron source providing a neutron flux of 6.16·10⁶ n·cm⁻²·s⁻¹. A well-type HPGe detector as the main detector, surrounded by a NaI(Tl) guard detector, has been arranged to investigate the performance of the Compton suppression spectrometer using the simplified slow circuit. The properties of the neutron flux were determined by MCNP code calculations. In order to determine the efficiency curve of the HPGe detector, the prompt γ-rays from chlorine were used and an exponential curve was fitted. The AC-PGNAA method has been used for the determination of highly neutron-absorbing elements like Cd, Sm and Gd, as well as 20 light and heavy elements (Na, Mg, Al, Si, P, K, Ca, Ti, V, Mn, Sc, Fe, Co, Zn, La, Rb, Cs, As and Th), in standard reference materials (IAEA, Soil-7) and ten sediment samples collected from El-Manzala lake in the northern part of Egypt. The reference material IAEA, Soil-7 was analyzed for data validation, and good agreement between the experimental values and the certified values has been obtained

  15. k0-PGNAA of pollutants in aqueous samples using MCNP code

    Directory of Open Access Journals (Sweden)

    A. Hamid

    2014-03-01

    Prompt γ-neutron activation analysis (PGNAA) using the k₀ method, employing the 1951.1 keV γ-line of the ³⁵Cl(n,γ)³⁶Cl thermal neutron reaction as monostandard comparator, is described. The method has been applied and evaluated using the anti-Compton prompt γ-ray neutron activation analysis facility with a ²⁵²Cf neutron source providing a neutron flux of 6.16·10⁶ n·cm⁻²·s⁻¹. A well-type HPGe detector as the main detector, surrounded by a NaI(Tl) guard detector, has been arranged to investigate the performance of the Compton suppression spectrometer using the simplified slow circuit. The properties of the neutron flux were determined by MCNP code calculations. In order to determine the efficiency curve of the HPGe detector, the prompt γ-rays from chlorine were used and an exponential curve was fitted. The AC-PGNAA method has been used for the determination of highly neutron-absorbing elements like Cd, Sm and Gd, as well as 20 light and heavy elements (Na, Mg, Al, Si, P, K, Ca, Ti, V, Mn, Sc, Fe, Co, Zn, La, Rb, Cs, As and Th), in standard reference materials (IAEA, Soil-7) and ten sediment samples collected from El-Manzala lake in the northern part of Egypt. The reference material IAEA, Soil-7 was analyzed for data validation, and good agreement between the experimental values and the certified values has been obtained.

  16. FEMSYN - a code system to solve multigroup diffusion theory equations using a variety of solution techniques. Part 1 : Description of code system - input and sample problems

    International Nuclear Information System (INIS)

    Jagannathan, V.

    1985-01-01

    A modular computer code system called FEMSYN has been developed to solve the multigroup diffusion theory equations. The methods incorporated in FEMSYN are (i) the finite difference method (FDM), (ii) the finite element method (FEM) and (iii) the single channel flux synthesis method (SCFS). These methods are described in detail in parts II, III and IV of the present report. In this report, a comparison of the accuracy and speed of the different solution methods for some benchmark problems is reported. The input preparation and listings of sample input and output are included in the Appendices. The code FEMSYN has been used to solve a wide variety of reactor core problems. It can be used for both LWR and PHWR applications. (author)
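
    For orientation, the simplest member of the family of problems such codes solve is a one-group, one-dimensional finite-difference diffusion eigenvalue problem. The sketch below solves that case on a bare slab with power iteration; the geometry, cross sections and zero-flux boundary treatment are illustrative assumptions, not FEMSYN data or algorithms.

        import numpy as np

        def slab_diffusion_keff(width=100.0, nodes=200, D=1.2, sig_a=0.012, nu_sig_f=0.013):
            """One-group, 1-D finite-difference diffusion eigenvalue problem on a bare slab
            (zero-flux boundaries), solved by power iteration.  All data are illustrative."""
            h = width / (nodes + 1)
            # Tridiagonal loss operator: -D * d2/dx2 + sig_a
            main = np.full(nodes, 2.0 * D / h**2 + sig_a)
            off = np.full(nodes - 1, -D / h**2)
            A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

            phi, k = np.ones(nodes), 1.0
            for _ in range(200):
                source = nu_sig_f * phi
                phi_new = np.linalg.solve(A, source / k)
                k_new = k * np.sum(nu_sig_f * phi_new) / np.sum(source)
                if abs(k_new - k) < 1e-8:
                    k = k_new
                    break
                phi, k = phi_new / np.max(phi_new), k_new
            return k

        k = slab_diffusion_keff()
        # Analytic check for a bare slab: k = nu_sig_f / (sig_a + D*(pi/width)^2)
        print(f"k_eff (FDM) = {k:.5f}, analytic = {0.013 / (0.012 + 1.2 * (np.pi / 100.0) ** 2):.5f}")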

  17. Use of CITATION code for flux calculation in neutron activation analysis with voluminous sample using an Am-Be source

    International Nuclear Information System (INIS)

    Khelifi, R.; Idiri, Z.; Bode, P.

    2002-01-01

    The CITATION code, based on neutron diffusion theory, was used for flux calculations inside voluminous samples in prompt gamma activation analysis with an isotopic neutron source (Am-Be). The code uses specific parameters related to the source energy spectrum and the irradiation system materials (shielding, reflector). The flux distribution (thermal and fast) was calculated in three-dimensional geometry for the system: air, polyethylene and a cuboidal water sample (50×50×50 cm). The thermal flux was calculated at a series of points inside the sample. The results agreed reasonably well with observed values. The maximum thermal flux was observed at a distance of 3.2 cm, while CITATION gave 3.7 cm. Beyond a depth of 7.2 cm, the thermal-to-fast flux ratio increases by up to a factor of two, which allows us to optimise the position of the detection system for in-situ PGAA

  18. Appraisal of the PREP, KITT, and SAMPLE computer codes for the evaluation of the reliability characteristics of engineered systems

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, P; White, R F

    1976-01-01

    For the probabilistic approach to reactor safety assessment by the use of event tree and fault tree techniques it is essential to be able to estimate the probabilities of failure of the various engineered safety features provided to mitigate the effects of postulated accident sequences. The PREP, KITT and SAMPLE computer codes, which incorporate Kinetic Tree Theory, perform these calculations and have been used extensively to evaluate the reliability characteristics of engineered safety features of American nuclear reactors. Working versions of these computer codes are now available in SRD, and this report explains the merits, capabilities and ease of application of the PREP, KITT, and SAMPLE programs for the solution of system reliability problems.
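
    As a reminder of what such codes compute at their core, the sketch below evaluates a top-event probability from minimal cut sets of independent basic events by inclusion-exclusion. The two-cut-set system and its failure probabilities are purely hypothetical, and this is not the PREP/KITT/SAMPLE algorithm (which also handles time-dependent kinetic-tree quantities).

        from itertools import combinations

        def top_event_probability(cut_sets, basic_event_prob):
            """Top-event probability by inclusion-exclusion over minimal cut sets,
            assuming independent basic events."""
            def cut_prob(events):
                p = 1.0
                for e in events:
                    p *= basic_event_prob[e]
                return p

            total = 0.0
            for r in range(1, len(cut_sets) + 1):
                for combo in combinations(cut_sets, r):
                    union = set().union(*combo)
                    total += (-1) ** (r + 1) * cut_prob(union)
            return total

        # Hypothetical system: pump failure OR (valve A AND valve B failure) defeats the safety feature
        cut_sets = [{"PUMP"}, {"VALVE_A", "VALVE_B"}]
        probs = {"PUMP": 1e-3, "VALVE_A": 2e-2, "VALVE_B": 2e-2}
        print(f"top event probability ~ {top_event_probability(cut_sets, probs):.3e}")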

  19. Composition calculations by the KARATE code system for the spent-fuel samples from the Novovoronezh reactor

    International Nuclear Information System (INIS)

    Hordosy, G.

    2006-01-01

    KARATE is a code system developed at KFKI AERI. It is routinely used for core calculations. Its depletion module is now tested against the radiochemical measurements of spent-fuel samples from Novovoronezh Unit IV, performed at RIAR, Dimitrovgrad. Due to insufficient knowledge of the operational history of the unit, the irradiation history of the samples was taken from previously published Russian calculations. The calculation of the isotopic composition was performed with the MULTICEL module of the program system. The agreement between the calculated and measured concentrations of the most important actinides and fission products is investigated. (Authors)

  20. Energy Code Compliance in a Detailed Commercial Building Sample: The Effects of Missing Data

    Energy Technology Data Exchange (ETDEWEB)

    Biyani, Rahul K.; Richman, Eric E.

    2003-09-30

    Most commercial buildings in the U.S. are required by State or local jurisdiction to meet energy standards. The enforcement of these standards is not well known, and building practice without them on a national scale is also little understood. To provide an understanding of these issues, a database has been developed at PNNL that includes detailed energy-related building characteristics of 162 commercial buildings from across the country. For this analysis, the COMcheck compliance software (developed at PNNL) was used to assess compliance with energy codes among these buildings. Data from the database for each building provided the program input, with percentage energy compliance to the ASHRAE/IESNA Standard 90.1-1999 energy code as the output. During the data input process it was discovered that some essential data for showing compliance of the building envelope were missing, and defaults had to be developed to provide complete compliance information. This need for defaults for some data inputs raised the question of what the effect on documenting compliance could be due to missing data. To help answer this question, a data collection effort was completed to assess potential differences. Using the program Dodge View, as much of the missing envelope data as possible was collected from the building plans, and the database input was again run through COMcheck. The outputs of both compliance runs were compared to see if the missing data would have adversely affected the results. Both of these results provided the percentage compliance of each building in the envelope and lighting categories, showing by how large a percentage each building either met or fell short of the ASHRAE/IESNA Standard 90.1-1999 energy code. The results of the compliance runs showed that 57.7% of the buildings met or exceeded envelope requirements with defaults and that 68% met or exceeded envelope requirements with the actual data. Also, 53.6% of the buildings met or surpassed the lighting requirements

  1. CERPI and CEREL, two computer codes for the automatic identification and determination of gamma emitters in thermal-neutron-activated samples

    International Nuclear Information System (INIS)

    Giannini, M.; Oliva, P.R.; Ramorino, M.C.

    1979-01-01

    A computer code that automatically analyzes gamma-ray spectra obtained with Ge(Li) detectors is described. The program contains such features as automatic peak location and fitting, determination of peak energies and intensities, nuclide identification, and calculation of masses and errors. Finally, the results obtained with this computer code for a lunar sample are reported and briefly discussed

  2. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    Science.gov (United States)

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of the optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. Particularly, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star such that, if there were no binaries, galaxies with SFR first time, we show optimally sampled galaxy-wide IMFs (OSGIMF) that mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided allowing the calculation of the IGIMF and OSGIMF dependent on the galaxy-wide SFR and metallicity. A copy of the python code model is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126
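
    The deterministic discretization mentioned above can be illustrated with a toy version of "optimal-like" sampling for a single power-law IMF: working down from the upper mass limit, the mass range is cut into bins that each contain exactly one star, and every star is assigned the mean mass of its bin. This is only a simplified stand-in under those assumptions (one Salpeter-like slope, total-mass normalization), not the GalIMF implementation or the full IGIMF/OSGIMF formulation.

        import numpy as np

        def optimal_like_sampling(m_cluster, alpha=2.35, m_low=0.08, m_up=150.0):
            """Toy deterministic discretization of a single power-law IMF xi(m) = k*m**(-alpha):
            cut the mass range, from m_up downward, into bins that each contain exactly one star
            and assign every star the mean mass of its bin."""
            # Normalize xi so that the total stellar mass equals m_cluster
            k = m_cluster * (2 - alpha) / (m_up**(2 - alpha) - m_low**(2 - alpha))

            def mass_between(a, b):                    # integral of m*xi(m) from a to b
                return k * (b**(2 - alpha) - a**(2 - alpha)) / (2 - alpha)

            masses, upper = [], m_up
            while upper > m_low:
                # Lower edge of the next bin from the one-star condition integral(xi) = 1:
                # lower**(1-alpha) = upper**(1-alpha) - (1-alpha)/k
                c = upper**(1 - alpha) - (1 - alpha) / k
                if c >= m_low**(1 - alpha):            # the next edge would fall below m_low: stop
                    break
                lower = c**(1.0 / (1 - alpha))
                masses.append(mass_between(lower, upper))
                upper = lower
            return np.array(masses)

        stars = optimal_like_sampling(m_cluster=1000.0)
        print(f"{stars.size} stars, most massive = {stars[0]:.1f} Msun, total = {stars.sum():.0f} Msun")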

  3. Calculation of thermal neutron self-shielding correction factors for aqueous bulk sample prompt gamma neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Nasrabadi, M.N.; Jalali, M.; Mohammadi, A.

    2007-01-01

    In this work thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing materials is studied using bulk sample prompt gamma neutron activation analysis (BSPGNAA) with the MCNP code. The code was used to perform three-dimensional simulations of a neutron source, neutron detector and sample of various material compositions. The MCNP model was validated against experimental measurements of the neutron flux performed using a BF₃ detector. Simulations were performed to predict thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing solutes. In practice, the MCNP calculations are combined with experimental measurements of the relative thermal neutron flux over the sample's surface, with respect to a reference water sample, to derive the thermal neutron self-shielding within the sample. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the average thermal neutron flux within the sample volume is required

  4. Code Betal to calculation Alpha/Beta activities in environmental samples; Programa de ordenador Betal para el calculo de la actividad Beta/Alfa de muestras ambientales

    Energy Technology Data Exchange (ETDEWEB)

    Romero, L.; Travesi, A.

    1983-07-01

    A code, BETAL, written in FORTRAN IV, was developed to automate the calculation and presentation of the results of total alpha-beta activity measurements in environmental samples. This code performs the calculations needed to transform the activities measured in total counts into pCi/l, taking into account the efficiency of the detector used and the other necessary parameters. Furthermore, it evaluates the standard deviation of the result and calculates the lower limit of detection for each measurement. The code is written in an interactive way, as a screen-operator dialogue, asking for the data needed to perform the calculation of the activity in each case through a screen prompt. The code can be executed from any screen-and-keyboard terminal (whose computer accepts FORTRAN IV) with a printer connected to the said computer. (Author) 5 refs.

  5. Coding of DNA samples and data in the pharmaceutical industry: current practices and future directions--perspective of the I-PWG.

    Science.gov (United States)

    Franc, M A; Cohen, N; Warner, A W; Shaw, P M; Groenen, P; Snapir, A

    2011-04-01

    DNA samples collected in clinical trials and stored for future research are valuable to pharmaceutical drug development. Given the perceived higher risk associated with genetic research, industry has implemented complex coding methods for DNA. Following years of experience with these methods and with addressing questions from institutional review boards (IRBs), ethics committees (ECs) and health authorities, the industry has started reexamining the extent of the added value offered by these methods. With the goal of harmonization, the Industry Pharmacogenomics Working Group (I-PWG) conducted a survey to gain an understanding of company practices for DNA coding and to solicit opinions on their effectiveness at protecting privacy. The results of the survey and the limitations of the coding methods are described. The I-PWG recommends dialogue with key stakeholders regarding coding practices such that equal standards are applied to DNA and non-DNA samples. The I-PWG believes that industry standards for privacy protection should provide adequate safeguards for DNA and non-DNA samples/data and suggests a need for more universal standards for samples stored for future research.

  6. STEALTH: a Lagrange explicit finite difference code for solids, structural, and thermohydraulic analysis. Volume 2: sample and verification problems. Computer code manual

    International Nuclear Information System (INIS)

    Hofmann, R.

    1982-08-01

    STEALTH sample and verification problems are presented to help users become familiar with STEALTH capabilities, input, and output. Problems are grouped into articles which are completely self-contained. The pagination in each article is A.n, where A is a unique alphabetic-character article identifier and n is a sequential page number which starts from 1 on the first page of text for each article. Articles concerning new capabilities will be added as they become available. STEALTH sample and verification calculations are divided into the following general categories: transient mechanical calculations dealing with solids; transient mechanical calculations dealing with fluids; transient thermal calculations dealing with solids; transient thermal calculations dealing with fluids; static and quasi-static calculations; and complex boundary interaction calculations

  7. Evaluation of the total exposure from soil samples at the Adaya site and the resulting risk assessment for the worker using the Res Rad code program

    International Nuclear Information System (INIS)

    Mahadi, A. M.; Khadim, A. A. N.; Ibrahim, Z. H.; Ali, S. A.

    2012-12-01

    The present study aims to evaluate the total exposure of workers at the Adaya site and the associated risk assessment using the Res Rad code program. Soil samples were collected from five areas of the site and analysed with a high-purity germanium (HPGe) system made by CANBERRA. The soil samples were simulated with the Res Rad code by entering the radioactive isotope concentrations and the specifications of the contamination zone: its area, depth, and cover depth. The total exposure for the same samples was about 9 mSv/year, and the HEAST 2001 Morbidity estimate was about 2.045 cases per 100 workers per year. There is a small difference between the HEAST 2001 Morbidity and FGR13 Morbidity estimates, according to the dose conversion factor (DCF) used; the FGR13 Morbidity estimate was about 2.041 cases per 100 workers per year. (Author)
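
    To make the concentration-to-dose step concrete, here is a minimal sketch of the arithmetic a dose assessment of this kind rests on: nuclide concentrations weighted by dose conversion factors and an occupancy factor. The nuclides, concentrations, DCF values and occupancy below are illustrative placeholders, not Res Rad data or results from this study.

```python
# Hypothetical soil concentrations (Bq/kg) and dose conversion factors
# ((mSv/y) per (Bq/kg)); all values are placeholders, not Res Rad data or
# results from this study.
conc = {"Cs-137": 1200.0, "Ra-226": 35.0, "K-40": 420.0}
dcf = {"Cs-137": 2.0e-3, "Ra-226": 6.0e-3, "K-40": 4.0e-4}

occupancy = 0.3  # fraction of the year spent on the contaminated zone
annual_dose = occupancy * sum(conc[n] * dcf[n] for n in conc)
print(f"estimated annual dose: {annual_dose:.2f} mSv/y")
```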

  8. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  9. Establishing and evaluating bar-code technology in blood sampling system: a model based on the human-centered design method.

    Science.gov (United States)

    Chou, Shin-Shang; Yan, Hsiu-Fang; Huang, Hsiu-Ya; Tseng, Kuan-Jui; Kuo, Shu-Chen

    2012-01-01

    This study used a human-centered design method to develop bar-code technology for the blood sampling process. Using multilevel analysis to gather the information, the bar-code technology was constructed to verify the patient's identification, simplify the work process, and prevent medical errors. A Technology Acceptance Model questionnaire was developed to assess the effectiveness of the system, and data on patient identification and sample errors were collected daily. The average score of the 8-item users' perceived ease of use scale was 25.21 (3.72), of the 9-item users' perceived usefulness scale was 28.53 (5.00), and of the 14-item task-technology fit scale was 52.24 (7.09). The rates of patient identification errors and samples with cancelled orders dropped to zero; however, new errors arose after the new system was deployed, concerning the position of barcode stickers on the sample tubes. Overall, more than half of the nurses (62.5%) were willing to use the new system.

  10. Sampling procedures using optical-data and partial wave cross sections in a Monte Carlo code for simulating kilovolt electron and positron transport in solids

    International Nuclear Information System (INIS)

    Fernandez-Varea, J.M.; Salvat, F.; Liljequist, D.

    1994-09-01

    The details of a Monte Carlo code for computing the penetration and energy loss of electrons and positrons in solids are described. The code, intended for electrons and positrons with energies from ∼ 100 eV to ∼ 100 keV, is based on the simulation of individual elastic and inelastic collisions. Elastic collisions are simulated using differential cross sections computed by the relativistic partial wave method applied to a muffin-tin Dirac-Hartree-Fock-Slater potential. Inelastic collisions are simulated by means of a model based on optical and photoelectric data, which are extended to the non-zero momentum transfer region by means of somewhat different algorithms for valence electron excitations and inner-shell excitations. This report focuses on the description of detailed formulae and sampling methods. 10 refs, 3 figs, 8 tabs
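
    As a rough illustration of how a scattering angle can be drawn from tabulated differential cross sections, the sketch below builds a cumulative distribution over the polar angle and samples it by inverse transform. The analytic DCS shape is a hypothetical stand-in for the partial-wave tables described in the report; only the sampling step itself is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tabulated (hypothetical) elastic DCS vs. polar angle theta; a stand-in for
# partial-wave data, since any positive table works for the sampling step.
theta = np.linspace(0.0, np.pi, 500)
dcs = 1.0 / (1.0 - 0.95 * np.cos(theta)) ** 2    # screened-Rutherford-like shape

# Cumulative distribution of theta, weighting the DCS by sin(theta)
pdf = dcs * np.sin(theta)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]

def sample_theta(n):
    """Inverse-transform sampling of the polar scattering angle."""
    u = rng.uniform(size=n)
    return np.interp(u, cdf, theta)

angles = sample_theta(100_000)
print("mean scattering angle (rad):", angles.mean())
```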

  11. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and water-moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The code computes the flow distribution among parallel channels, coupled or not by conduction across the plates, for imposed pressure drops or flow rates that may or may not vary with time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which includes a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code and has as its complement FLID, a one-channel, two-dimensional code. (authors)

  12. Air Emissions Sampling from Vacuum Thermal Desorption for Mixed Wastes Designated with a Combustion Treatment Code for the Energy Solutions LLC Mixed Waste Facility

    International Nuclear Information System (INIS)

    Christensen, M.E.; Willoughby, O.H.

    2009-01-01

    EnergySolutions LLC is permitted by the State of Utah to treat organically-contaminated Mixed Waste by a vacuum thermal desorption (VTD) treatment process at its Clive, Utah treatment, storage, and disposal facility. The VTD process separates organics from organically-contaminated waste by heating the material in an inert atmosphere, and captures them as concentrated liquid by condensation. The majority of the radioactive materials present in the feed to the VTD are retained with the treated solids; the recovered aqueous and organic condensates are not radioactive. This is generally true when the radioactivity is present in solid form such as inorganic salts, metals or metallic oxides. The exception is when volatile radioactive materials are present such as radon gas, tritium, or carbon-14 organic chemicals. Volatile radioactive materials are a small fraction of the feed material. On August 28, 2006, EnergySolutions submitted a request to the USEPA for a variance to the Land Disposal Restrictions (LDR) standards for wastes designated with the combustion treatment code (CMBST). The final rule granting a site specific treatment variance was effective June 13, 2008. This variance is an alternative treatment standard to treatment by CMBST required for these wastes under USEPA's rules. The State of Utah provides oversight of the VTD processing operations. A demonstration test for treating CMBST-coded wastes was performed on April 29, 2008 through May 1, 2008. Three separate process cycles were conducted during this test. Both solid/liquid samples and emission samples were collected each day during the demonstration test. To adequately challenge the unit, feed material was spiked with trichloroethylene, o-cresol, dibenzofuran, and coal tar. Emission testing was conducted by EnergySolutions' emissions test contractor and sampling for radioactivity within the off-gas was completed by EnergySolutions' Health Physics department. This report discusses the emission testing

  13. MCNP code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendent of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each state of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids

  14. Cross-sectional association between ZIP code-level gentrification and homelessness among a large community-based sample of people who inject drugs in 19 US cities.

    Science.gov (United States)

    Linton, Sabriya L; Cooper, Hannah Lf; Kelley, Mary E; Karnes, Conny C; Ross, Zev; Wolfe, Mary E; Friedman, Samuel R; Jarlais, Don Des; Semaan, Salaam; Tempalski, Barbara; Sionean, Catlainn; DiNenno, Elizabeth; Wejnert, Cyprian; Paz-Bailey, Gabriela

    2017-06-20

    Housing instability has been associated with poor health outcomes among people who inject drugs (PWID). This study investigates the associations of local-level housing and economic conditions with homelessness among a large sample of PWID, which is an underexplored topic to date. PWID in this cross-sectional study were recruited from 19 large cities in the USA as part of National HIV Behavioral Surveillance. PWID provided self-reported information on demographics, behaviours and life events. Homelessness was defined as residing on the street, in a shelter, in a single room occupancy hotel, or in a car or temporarily residing with friends or relatives any time in the past year. Data on county-level rental housing unaffordability and demand for assisted housing units, and ZIP code-level gentrification (eg, index of percent increases in non-Hispanic white residents, household income, gross rent from 1990 to 2009) and economic deprivation were collected from the US Census Bureau and Department of Housing and Urban Development. Multilevel models evaluated the associations of local economic and housing characteristics with homelessness. Sixty percent (5394/8992) of the participants reported homelessness in the past year. The multivariable model demonstrated that PWID living in ZIP codes with higher levels of gentrification had higher odds of homelessness in the past year (gentrification: adjusted OR=1.11, 95% CI=1.04 to 1.17). Additional research is needed to determine the mechanisms through which gentrification increases homelessness among PWID to develop appropriate community-level interventions. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  15. HELIOS–RETRIEVAL: An Open-source, Nested Sampling Atmospheric Retrieval Code; Application to the HR 8799 Exoplanets and Inferred Constraints for Planet Formation

    Energy Technology Data Exchange (ETDEWEB)

    Lavie, Baptiste; Mendonça, João M.; Malik, Matej; Demory, Brice-Olivier; Grimm, Simon L. [University of Bern, Space Research and Planetary Sciences, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Mordasini, Christoph; Oreshenko, Maria; Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Bonnefoy, Mickaël [Université Grenoble Alpes, IPAG, F-38000, Grenoble (France); Ehrenreich, David, E-mail: baptiste.lavie@space.unibe.ch, E-mail: kevin.heng@csh.unibe.ch [Observatoire de l’Université de Genève, 51 chemin des Maillettes, 1290, Sauverny (Switzerland)

    2017-09-01

    We present an open-source retrieval code named HELIOS–RETRIEVAL, designed to obtain chemical abundances and temperature–pressure profiles by inverting the measured spectra of exoplanetary atmospheres. In our forward model, we use an exact solution of the radiative transfer equation, in the pure absorption limit, which allows us to analytically integrate over all of the outgoing rays. Two chemistry models are considered: unconstrained chemistry and equilibrium chemistry (enforced via analytical formulae). The nested sampling algorithm allows us to formally implement Occam’s Razor based on a comparison of the Bayesian evidence between models. We perform a retrieval analysis on the measured spectra of the four HR 8799 directly imaged exoplanets. Chemical equilibrium is disfavored for HR 8799b and c. We find supersolar C/H and O/H values for the outer HR 8799b and c exoplanets, while the inner HR 8799d and e exoplanets have a range of C/H and O/H values. The C/O values range from being superstellar for HR 8799b to being consistent with stellar for HR 8799c and being substellar for HR 8799d and e. If these retrieved properties are representative of the bulk compositions of the exoplanets, then they are inconsistent with formation via gravitational instability (without late-time accretion) and consistent with a core accretion scenario in which late-time accretion of ices occurred differently for the inner and outer exoplanets. For HR 8799e, we find that spectroscopy in the K band is crucial for constraining C/O and C/H. HELIOS–RETRIEVAL is publicly available as part of the Exoclimes Simulation Platform (http://www.exoclime.org).
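
    The nested sampling machinery mentioned above can be illustrated with a self-contained toy evidence calculation. The sketch below is a generic textbook-style nested sampling loop for a one-dimensional Gaussian likelihood under a uniform prior; it is not the HELIOS–RETRIEVAL implementation, and the problem setup is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: uniform prior on [-5, 5], standard-normal likelihood
def log_like(x):
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

n_live, n_iter = 100, 600
live = rng.uniform(-5.0, 5.0, n_live)
live_logl = log_like(live)

log_z = -np.inf  # running log-evidence
for i in range(n_iter):
    worst = np.argmin(live_logl)
    # prior volume shrinks geometrically: X_i ~ exp(-i / n_live)
    log_w = np.log(np.exp(-i / n_live) - np.exp(-(i + 1) / n_live))
    log_z = np.logaddexp(log_z, live_logl[worst] + log_w)
    # replace the worst live point with a prior draw above the likelihood threshold
    while True:
        x_new = rng.uniform(-5.0, 5.0)
        if log_like(x_new) > live_logl[worst]:
            live[worst], live_logl[worst] = x_new, log_like(x_new)
            break

# contribution of the remaining live points over the final prior volume
log_z = np.logaddexp(log_z, np.logaddexp.reduce(live_logl) - np.log(n_live) - n_iter / n_live)
print(f"estimated ln Z = {log_z:.3f} (analytic value is close to ln 0.1 = {np.log(0.1):.3f})")
```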

  16. Serial-data correlator/code translator

    Science.gov (United States)

    Morgan, L. E.

    1977-01-01

    System, consisting of sampling flip flop, memory (either RAM or ROM), and memory buffer, correlates sampled data with predetermined acceptance code patterns, translates acceptable code patterns to nonreturn-to-zero code, and identifies data dropouts.

  17. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
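
    Because the abstract is centered on unique decipherability, a compact way to see that property in action is the classical Sardinas-Patterson test. The sketch below implements that standard algorithm for finite codes as background; it is not the paper's coding-partition algorithm.

```python
def quotient(a, b):
    """All w (possibly empty) with x + w == y for some x in a and y in b."""
    return {y[len(x):] for x in a for y in b if y.startswith(x)}

def is_uniquely_decodable(code):
    """Sardinas-Patterson test for a finite code given as a set of strings."""
    code = set(code)
    s = quotient(code, code) - {""}     # dangling suffixes among the codewords
    seen = set()
    while s:
        if "" in s:                     # some message has two factorizations
            return False
        fs = frozenset(s)
        if fs in seen:                  # no new suffix sets can appear: UD
            return True
        seen.add(fs)
        s = quotient(code, s) | quotient(s, code)
    return True

print(is_uniquely_decodable({"0", "01", "11"}))   # True: uniquely decodable
print(is_uniquely_decodable({"a", "ab", "ba"}))   # False: "aba" = a|ba = ab|a
```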

  18. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in a temporal discrimination task, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies: multiple-coding and single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  19. Computer codes for safety analysis

    International Nuclear Information System (INIS)

    Holland, D.F.

    1986-11-01

    Computer codes for fusion safety analysis have been under development in the United States for about a decade. This paper will discuss five codes that are currently under development by the Fusion Safety Program. The purpose and capability of each code will be presented, a sample given, followed by a discussion of the present status and future development plans

  20. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  1. Automated determination of the stable carbon isotopic composition (δ13C) of total dissolved inorganic carbon (DIC) and total nonpurgeable dissolved organic carbon (DOC) in aqueous samples: RSIL lab codes 1851 and 1852

    Science.gov (United States)

    Révész, Kinga M.; Doctor, Daniel H.

    2014-01-01

    The purposes of the Reston Stable Isotope Laboratory (RSIL) lab codes 1851 and 1852 are to determine the total carbon mass and the ratio of the stable isotopes of carbon (δ13C) for total dissolved inorganic carbon (DIC, lab code 1851) and total nonpurgeable dissolved organic carbon (DOC, lab code 1852) in aqueous samples. The analysis procedure is automated according to a method that utilizes a total carbon analyzer as a peripheral sample preparation device for analysis of carbon dioxide (CO2) gas by a continuous-flow isotope ratio mass spectrometer (CF-IRMS). The carbon analyzer produces CO2 and determines the carbon mass in parts per million (ppm) of DIC and DOC in each sample separately, and the CF-IRMS determines the carbon isotope ratio of the produced CO2. This configuration provides a fully automated analysis of total carbon mass and δ13C with no operator intervention, additional sample preparation, or other manual analysis. To determine the DIC, the carbon analyzer transfers a specified sample volume to a heated (70 °C) reaction vessel with a preprogrammed volume of 10% phosphoric acid (H3PO4), which allows the carbonate and bicarbonate species in the sample to dissociate to CO2. The CO2 from the reacted sample is subsequently purged with a flow of helium gas that sweeps the CO2 through an infrared CO2 detector and quantifies the CO2. The CO2 is then carried through a high-temperature (650 °C) scrubber reactor, a series of water traps, and ultimately to the inlet of the mass spectrometer. For the analysis of total dissolved organic carbon, the carbon analyzer performs a second step on the sample in the heated reaction vessel during which a preprogrammed volume of sodium persulfate (Na2S2O8) is added, and the hydroxyl radicals oxidize the organics to CO2. Samples containing 2 ppm to 30,000 ppm of carbon are analyzed. The precision of the carbon isotope analysis is within 0.3 per mill for DIC, and within 0.5 per mill for DOC.

  2. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  3. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
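
    To ground the entropy and noiseless-coding ideas mentioned above, the short sketch below computes the entropy of a toy source, the Shannon code lengths ceil(-log2 p), the resulting average length, and the Kraft sum. The source probabilities are an arbitrary example, not taken from the book.

```python
import math

# An arbitrary example source (not from the book): dyadic probabilities
p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

entropy = -sum(q * math.log2(q) for q in p.values())
shannon_len = {s: math.ceil(-math.log2(q)) for s, q in p.items()}
avg_len = sum(p[s] * shannon_len[s] for s in p)
kraft = sum(2.0 ** -l for l in shannon_len.values())

print(f"H = {entropy:.3f} bits/symbol")                 # 1.750
print(f"Shannon code lengths: {shannon_len}")           # {'a': 1, 'b': 2, 'c': 3, 'd': 3}
print(f"average code length = {avg_len:.3f}")           # 1.750, meeting H <= L < H + 1
print(f"Kraft sum = {kraft} (<= 1, so a prefix code with these lengths exists)")
```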

  4. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech has remained the core service in almost all telecommunication systems. The original analog methods of telephony had the disadvantage that the speech signal was corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because the digital signal can be faithfully regenerated at each repeater purely on the basis of a binary decision. The end-to-end performance of a digital link therefore becomes essentially independent of the length and operating frequency bands of the link, so from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such requirements could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding usually refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term for these techniques, often used interchangeably with speech coding, is voice coding. This term is more generic in the sense that the
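
    One concrete example of simple waveform speech coding, chosen here purely for illustration (the record above does not single it out), is G.711-style mu-law companding, sketched below with NumPy.

```python
import numpy as np

MU = 255.0  # mu-law parameter used in G.711 (North American PCM)

def mulaw_encode(x, mu=MU):
    """Compress samples in [-1, 1] with the mu-law characteristic."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mulaw_decode(y, mu=MU):
    """Invert the mu-law characteristic."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

# roughly 8-bit (255-level) quantisation of the companded signal
x = np.sin(2 * np.pi * 440 * np.arange(0, 0.01, 1 / 8000))  # 10 ms of a 440 Hz tone at 8 kHz
q = np.round(mulaw_encode(x) * 127) / 127
x_hat = mulaw_decode(q)
print("max reconstruction error:", np.max(np.abs(x - x_hat)))
```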

  5. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  6. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
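
    As background for the sparse-coding step these records build on, the sketch below solves the basic lasso-type coding problem, minimizing 0.5*||x - D s||^2 + lambda*||s||_1, with a plain ISTA loop over a random dictionary. It illustrates only the unsupervised coding step, not the semi-supervised objective or the joint classifier training described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code_ista(x, D, lam=0.1, n_iter=200):
    """Solve min_s 0.5*||x - D s||^2 + lam*||s||_1 with ISTA."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2          # 1 / Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ s - x)
        z = s - step * grad
        s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return s

D = rng.normal(size=(20, 50))                       # overcomplete dictionary (20-d data, 50 atoms)
D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
x = D[:, [3, 17]] @ np.array([1.0, -0.5])           # sample built from two atoms
s = sparse_code_ista(x, D)
print("atoms actually used by the code:", np.flatnonzero(np.abs(s) > 1e-3))
```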

  7. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  8. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which is formed by mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  9. Vocable Code

    DEFF Research Database (Denmark)

    Soon, Winnie; Cox, Geoff

    2018-01-01

    a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other is a mix of a computer programming syntax and human language. In this sense queer code can ... be understood as both an object and subject of study that intervenes in the world’s ‘becoming' and how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  10. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases

  11. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  12. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project1. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association ... Coding Pirates2. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator for the research and development environment Digitalisering i Skolen (DiS), from Institut for Skole og Læring at Professionshøjskolen Metropol; and Stine Ejsing-Duun, Associate Professor of learning technology, interaction design ..., design thinking and design pedagogy, from Forskningslab: It og Læringsdesign (ILD-LAB) at the Department of Communication and Psychology, Aalborg University in Copenhagen. We followed and carried out the evaluation and documentation of the Coding Class project in the period from November 2016 to May 2017...

  13. Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  14. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code. ANIMAL's physical model also appears. Formulated are temporal and spatial finite-difference equations in a manner that facilitates implementation of the algorithm. Outlined are the functions of the algorithm's FORTRAN subroutines and variables

  15. Network Coding

    Indian Academy of Sciences (India)

    K V Rashmi, Nihar B Shah and P Vijay Kumar. General Article, Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp 604-621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621

  16. Expander Codes

    Indian Academy of Sciences (India)

    Expander Codes - The Sipser–Spielman Construction. Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India. General Article, Resonance – Journal of Science Education, Volume 10, Issue 1.

  17. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)

  18. CANAL code

    International Nuclear Information System (INIS)

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils

  19. Running codes through the web

    International Nuclear Information System (INIS)

    Clark, R.E.H.

    2001-01-01

    Dr. Clark presented a report and demonstration of running atomic physics codes through the WWW. The atomic physics data is generated from Los Alamos National Laboratory (LANL) codes that calculate electron impact excitation, ionization, photoionization, and autoionization, and the inverse processes through detailed balance. Samples of Web interfaces, input and output are given in the report

  20. Revised SRAC code system

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro; Ishiguro, Yukio; Kaneko, Kunio; Ido, Masaru.

    1986-09-01

    Since the publication of JAERI-1285 in 1983 for the preliminary version of the SRAC code system, a number of additions and modifications to the functions have been made to establish an overall neutronics code system. Major points are (1) addition of JENDL-2 version of data library, (2) a direct treatment of doubly heterogeneous effect on resonance absorption, (3) a generalized Dancoff factor, (4) a cell calculation based on the fixed boundary source problem, (5) the corresponding edit required for experimental analysis and reactor design, (6) a perturbation theory calculation for reactivity change, (7) an auxiliary code for core burnup and fuel management, etc. This report is a revision of the users manual which consists of the general description, input data requirements and their explanation, detailed information on usage, mathematics, contents of libraries and sample I/O. (author)

  1. Computer codes for ventilation in nuclear facilities

    International Nuclear Information System (INIS)

    Mulcey, P.

    1987-01-01

    In this paper the authors present some computer codes, developed in recent years, for ventilation and radioprotection. These codes are used for safety analysis in the design, operation and dismantling of nuclear facilities. The authors present in particular: the DACC1 code, used for aerosol deposition in the sampling circuits of radiation monitors; the PIAF code, used for modelling complex ventilation systems; and the CLIMAT 6 code, used for optimization of air conditioning systems

  2. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  3. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in a department of radiology. The program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary consisted of 11 files, one for the organ codes and the others for the pathology codes. The organ code was obtained by typing the organ name or the code number itself, chosen from the upper- and lower-level codes that were simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file was chosen automatically. The proper pathology code was then obtained in a similar fashion to the organ code selection. An example of an ACR code obtained this way is '131.3661'. The procedure was reproducible regardless of the number of data fields. Because the program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of the program into other data processing programs was possible. The program has the merits of simple operation, accurate and detailed coding, and easy adaptation to other programs. Therefore, it can be used for automation of routine work in the department of radiology
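
    A minimal sketch of the lookup flow described above (organ code first, then a pathology file selected by the organ code's first digit) might look like the following; the dictionary entries are placeholders, not actual ACR content.

```python
# Placeholder fragments of the organ and pathology dictionaries (not real ACR entries)
organ_codes = {"131": "organ entry 131"}
pathology_codes = {"1": {"3661": "pathology entry 3661"}}   # keyed by the organ code's first digit

def build_acr_code(organ: str, pathology: str) -> str:
    """Compose an ACR diagnosis code as '<organ>.<pathology>'."""
    if organ not in organ_codes:
        raise KeyError(f"unknown organ code {organ!r}")
    table = pathology_codes[organ[0]]        # pathology file chosen by the first digit
    if pathology not in table:
        raise KeyError(f"unknown pathology code {pathology!r}")
    return f"{organ}.{pathology}"

print(build_acr_code("131", "3661"))         # -> 131.3661
```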

  4. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  5. Statistical sampling strategies

    International Nuclear Information System (INIS)

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
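
    Of the three selection methods listed, Latin hypercube sampling is easy to show in a few lines. The sketch below stratifies each parameter's unit interval into equal-probability bins, takes one jittered point per bin, and permutes the columns independently; the parameter ranges at the end are made-up examples.

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """One stratified sample per equal-probability stratum in each dimension."""
    rng = np.random.default_rng(rng)
    # jittered stratum centres in (0, 1), then independently permuted per parameter
    u = (np.arange(n_samples)[:, None] + rng.uniform(size=(n_samples, n_params))) / n_samples
    for j in range(n_params):
        u[:, j] = rng.permutation(u[:, j])
    return u

# e.g. 10 runs of a 3-parameter assessment code, scaled to each parameter's range
unit = latin_hypercube(10, 3, rng=42)
lo, hi = np.array([0.1, 1e-6, 200.0]), np.array([0.9, 1e-3, 400.0])
samples = lo + unit * (hi - lo)
print(samples)
```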

  6. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.

  7. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. It includes two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding; Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes; distance properties of convolutional codes; and a downloadable solutions manual.

  8. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for coding theory than codes over classical finite fields.

  9. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  10. Sensitive determination of thiols in wine samples by a stable isotope-coded derivatization reagent d0/d4-acridone-10-ethyl-N-maleimide coupled with high-performance liquid chromatography-electrospray ionization-tandem mass spectrometry analysis.

    Science.gov (United States)

    Lv, Zhengxian; You, Jinmao; Lu, Shuaimin; Sun, Weidi; Ji, Zhongyin; Sun, Zhiwei; Song, Cuihua; Chen, Guang; Li, Guoliang; Hu, Na; Zhou, Wu; Suo, Yourui

    2017-03-31

    As the key aroma compounds, varietal thiols are the crucial odorants responsible for the flavor of wines. Quantitative analysis of thiols can provide crucial information for the aroma profiles of different wine styles. In this study, a rapid and sensitive method for the simultaneous determination of six thiols in wine using d0/d4-acridone-10-ethyl-N-maleimide (d0/d4-AENM) as a stable isotope-coded derivatization reagent (SICD) by high-performance liquid chromatography-electrospray ionization-tandem mass spectrometry (HPLC-ESI-MS/MS) has been developed. Quantification of thiols was performed by using d4-AENM-labeled thiols as the internal standards (IS), followed by stable isotope dilution HPLC-ESI-MS/MS analysis. The AENM derivatization combined with multiple reaction monitoring (MRM) not only allowed trace analysis of thiols due to the extremely high sensitivity, but also efficiently corrected for the matrix effects during HPLC-MS/MS and for fluctuations in MS/MS signal intensity due to the instrument. The obtained internal standard calibration curves for the six thiols were linear over the range of 25-10,000 pmol/L (R² ≥ 0.9961). Detection limits (LODs) for most of the analytes were below 6.3 pmol/L. The proposed method was successfully applied to the simultaneous determination of six kinds of thiols in wine samples with precisions ≤ 3.5% and recoveries ≥ 78.1%. In conclusion, the developed method is expected to be a promising tool for the detection of trace thiols in wine and in other complex matrices. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. Highlights: We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. We find and classify all 2D homological stabilizer codes. We find optimal codes among the homological stabilizer codes.

  12. UNSPEC: revisited (semaphore code)

    International Nuclear Information System (INIS)

    Neifert, R.D.

    1981-01-01

    The UNSPEC code is used to solve the problem of unfolding an observed x-ray spectrum given the response matrix of the measuring system and the measured signal values. UNSPEC uses an iterative technique to solve the unfold problem. Due to experimental errors in the measured signal values and/or computer round-off errors, discontinuities and oscillatory behavior may occur in the iterated spectrum. These can be suppressed by smoothing the results after each iteration. Input/output options and control cards are explained; sample input and output are provided
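
    The iterative unfolding described above can be illustrated generically. The sketch below uses a multiplicative (Gold-type) update with optional three-point smoothing after each iteration, one common way of damping the oscillations mentioned in the abstract; it is not UNSPEC's actual algorithm, and the response matrix in the usage lines is made up.

```python
import numpy as np

def iterative_unfold(R, m, n_iter=200, smooth=True):
    """Multiplicative (Gold-type) iterative unfold of measured signals m ~ R @ s."""
    s = np.full(R.shape[1], m.mean())              # flat starting spectrum
    for _ in range(n_iter):
        s = s * (R.T @ m) / (R.T @ (R @ s) + 1e-12)
        if smooth:                                  # 3-point smoothing to damp oscillations
            s = np.convolve(s, [0.25, 0.5, 0.25], mode="same")
    return s

# Made-up 7-channel response matrix and measurement, just to exercise the function
true_spectrum = np.array([0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0])
R = np.triu(np.ones((7, 7))) / 7.0
measured = R @ true_spectrum
print(iterative_unfold(R, measured).round(2))
```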

  13. Diagnostic Coding for Epilepsy.

    Science.gov (United States)

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  14. Coding of Neuroinfectious Diseases.

    Science.gov (United States)

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  15. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L X L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  16. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  17. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  18. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback into LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.

  19. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  20. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  1. Supervised Transfer Sparse Coding

    KAUST Repository

    Al-Shedivat, Maruan

    2014-07-27

    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such case is that in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such possibility and show how a small number of labeled data in the target domain can significantly leverage classification accuracy of the state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.

  2. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress that the workgroup on Low-Density Parity-Check (LDPC) for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  3. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks installed and is provided for 64 bit on mac, linux and windows.

  4. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Shannon limit of the channel. Among the earliest discovered codes that approach the Shannon limit were the low-density parity-check (LDPC) codes. The term low density arises from the property of the parity-check matrix defining the code. We will now define this matrix and the role that it plays in decoding. 2. Linear Codes.

  5. Manually operated coded switch

    International Nuclear Information System (INIS)

    Barnette, J.H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made

  6. Coding in Muscle Disease.

    Science.gov (United States)

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  7. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
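    For readers who want to try this out, a minimal sketch using the third-party Python package qrcode (an assumption; any QR library would serve) encodes a string and saves the symbol as an image:

        import qrcode  # third-party package: pip install "qrcode[pil]"

        # Encode a short text payload; longer payloads simply yield a denser symbol,
        # up to the format's limit of a few thousand characters.
        img = qrcode.make("https://worldwidescience.org")
        img.save("example_qr.png")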

  8. The Coding Process and Its Challenges

    Directory of Open Access Journals (Sweden)

    Judith A. Holton, Ph.D.

    2010-02-01

    Full Text Available Coding is the core process in classic grounded theory methodology. It is through coding that the conceptual abstraction of data and its reintegration as theory takes place. There are two types of coding in a classic grounded theory study: substantive coding, which includes both open and selective coding procedures, and theoretical coding. In substantive coding, the researcher works with the data directly, fracturing and analysing it, initially through open coding for the emergence of a core category and related concepts and then subsequently through theoretical sampling and selective coding of data to theoretically saturate the core and related concepts. Theoretical saturation is achieved through constant comparison of incidents (indicators in the data to elicit the properties and dimensions of each category (code. This constant comparing of incidents continues until the process yields the interchangeability of indicators, meaning that no new properties or dimensions are emerging from continued coding and comparison. At this point, the concepts have achieved theoretical saturation and the theorist shifts attention to exploring the emergent fit of potential theoretical codes that enable the conceptual integration of the core and related concepts to produce hypotheses that account for relationships between the concepts thereby explaining the latent pattern of social behaviour that forms the basis of the emergent theory. The coding of data in grounded theory occurs in conjunction with analysis through a process of conceptual memoing, capturing the theorist’s ideation of the emerging theory. Memoing occurs initially at the substantive coding level and proceeds to higher levels of conceptual abstraction as coding proceeds to theoretical saturation and the theorist begins to explore conceptual reintegration through theoretical coding.

  9. Source Code Stylometry Improvements in Python

    Science.gov (United States)

    2017-12-14

    [Front-matter residue: Fig. 1, a code sample from the grant (Caliskan-Islam et al. 2015); Fig. 2, the corresponding abstract syntax tree from the de-anonymizing programmers paper (Caliskan-Islam et al.).] Just as a person can be identified via their handwriting, or an author identified by their style of prose, programmers can be identified by their code. Provided a labelled training set of code samples (example in Fig. 1), the techniques used in stylometry can identify the author of a piece of code or even

  10. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...

  11. DNA Barcoding through Quaternary LDPC Codes.

    Science.gov (United States)

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10^-9 at the expense of a rate of read losses just in the order of 10^-6.
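    The decoding itself is beyond the scope of the abstract, but a simplified sketch of mismatch-tolerant demultiplexing (nearest-barcode assignment by Hamming distance, not the LDPC decoder used in the paper; the barcode set and the read tag are invented) conveys the idea:

        def hamming(a, b):
            """Number of mismatched positions between two equal-length sequences."""
            return sum(x != y for x, y in zip(a, b))

        def assign_sample(read_tag, barcodes, max_mismatches=2):
            """Return the closest barcode, or None if the read is too noisy or ambiguous."""
            dists = sorted((hamming(read_tag, bc), bc) for bc in barcodes)
            best_d, best_bc = dists[0]
            if best_d > max_mismatches:
                return None          # too many errors: count the read as lost
            if len(dists) > 1 and dists[1][0] == best_d:
                return None          # tie between barcodes: ambiguous assignment
            return best_bc

        barcodes = ["ACGTACGTACGTACGTACGTACGT", "TTGCAATTGCAATTGCAATTGCAA"]
        print(assign_sample("ACGTACGTACGTACGTACGTACGA", barcodes))  # one mismatch -> first barcode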

  12. DNA Barcoding through Quaternary LDPC Codes.

    Directory of Open Access Journals (Sweden)

    Elizabeth Tapia

    Full Text Available For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10^-9 at the expense of a rate of read losses just in the order of 10^-6.

  13. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  14. Coding for optical channels

    CERN Document Server

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  15. SEVERO code - user's manual

    International Nuclear Information System (INIS)

    Sacramento, A.M. do.

    1989-01-01

    This user's manual contains all the necessary information concerning the use of the SEVERO code. This computer code deals with the statistics of extremes: extreme winds, extreme precipitation and flooding hazard risk analysis. (A.C.A.S.)

  16. Synthesizing Certified Code

    OpenAIRE

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...

  17. FERRET data analysis code

    International Nuclear Information System (INIS)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples
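    The flavour of such a combination can be sketched with a generic generalized-least-squares update (not FERRET's actual algorithm; the two-parameter example and all numbers are invented), in which a calculated prior and a correlated measurement are merged with full covariance bookkeeping:

        import numpy as np

        # Prior estimate of two quantities with its covariance (invented values).
        x_prior = np.array([1.00, 2.00])
        C_prior = np.array([[0.04, 0.01],
                            [0.01, 0.09]])

        # A measurement of the same two quantities with its own covariance.
        y_meas = np.array([1.10, 1.90])
        C_meas = np.array([[0.02, 0.00],
                           [0.00, 0.05]])

        # Generalized least-squares update of the prior by the measurement.
        gain = C_prior @ np.linalg.inv(C_prior + C_meas)
        x_post = x_prior + gain @ (y_meas - x_prior)
        C_post = C_prior - gain @ C_prior

        print("combined estimate:   ", x_post)
        print("combined covariance:\n", C_post)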

  18. Stylize Aesthetic QR Code

    OpenAIRE

    Xu, Mingliang; Su, Hao; Li, Yafei; Li, Xi; Liao, Jing; Niu, Jianwei; Lv, Pei; Zhou, Bing

    2018-01-01

    With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the appearance of QR codes, existing works have developed a series of techniques to make the QR code more visual-pleasant. However, these works still leave much to be desired, such as visual diversity, aesthetic quality, flexibility, universal property, and robustness. To address these issues, in this paper, we pro...

  19. Enhancing QR Code Security

    OpenAIRE

    Zhang, Linfan; Zheng, Shuang

    2015-01-01

    Quick Response codes open the possibility to convey data in a unique way, yet insufficient prevention and protection might lead to QR codes being exploited on behalf of attackers. This thesis starts by presenting a general introduction of the background and stating two problems regarding QR code security, followed by comprehensive research on both the QR code itself and related issues. From the research, a solution taking advantage of cloud and cryptography together with an implementation come af...

  20. Some optimizations of the animal code

    International Nuclear Information System (INIS)

    Fletcher, W.T.

    1975-01-01

    Optimizing techniques were performed on a version of the ANIMAL code (MALAD1B) at the source-code (FORTRAN) level. Sample optimizing techniques and operations used in MALADOP--the optimized version of the code--are presented, along with a critique of some standard CDC 7600 optimizing techniques. The statistical analysis of total CPU time required for MALADOP and MALAD1B shows a run-time saving of 174 msec (almost 3 percent) in the code MALADOP during one time step

  1. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...

  2. Gauge color codes

    DEFF Research Database (Denmark)

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...

  3. Refactoring test code

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    textabstractTwo key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  4. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  5. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.; Bensmail, H.; Yao, N.; Gao, Xin

    2013-01-01

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  6. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.

    2013-09-26

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  7. Coded aperture tomography revisited

    International Nuclear Information System (INIS)

    Bizais, Y.; Rowe, R.W.; Zubal, I.G.; Bennett, G.W.; Brill, A.B.

    1983-01-01

    Coded aperture (CA) tomography never achieved widespread use in Nuclear Medicine, except for the degenerate case of Seven Pinhole Tomography (7PHT). However, it enjoys several attractive features (high sensitivity and tomographic ability with a static detector). On the other hand, resolution is usually poor, especially along the depth axis, and the reconstructed volume is rather limited. Arguments are presented justifying the position that CA tomography can be useful for imaging time-varying 3D structures if its major drawbacks (poor longitudinal resolution and difficulty in quantification) are overcome. Poor results obtained with 7PHT can be explained both by a very limited angular sampling range and by a crude modelling of the image formation process. Therefore improvements can be expected from the use of a dual-detector system, along with a better understanding of its sampling properties and the use of more powerful reconstruction algorithms. Non-overlapping multipinhole plates, because they do not involve a decoding procedure, should be considered first for practical applications. Use of real CAs should be considered for cases in which non-overlapping multipinhole plates do not lead to satisfactory solutions. We have been, and currently are, carrying out theoretical and experimental work in order to define the factors which limit CA imaging and to propose satisfactory solutions for Dynamic Emission Tomography

  8. The network code

    International Nuclear Information System (INIS)

    1997-01-01

    The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)

  9. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  10. NAGRADATA. Code key. Geology

    International Nuclear Information System (INIS)

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data is retrieved, the translation into plain language of stored coded information is done automatically by computer. Three keys each list the complete set of currently defined codes for the NAGRADATA system, namely codes with appropriate definitions, arranged: 1. according to subject matter (thematically), 2. the codes listed alphabetically and 3. the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval; economies of storage memory requirements; and the standardisation of terminology. The nature of this thesaurian-type 'key to codes' makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes to NAGRADATA must be considered to represent the current state of progress of a living system, and future editions will be issued in a loose-leaf ring-book system which can be updated by an organised (updating) service. (author)
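    The principle of storing compact codes and translating them back to plain language on retrieval can be illustrated with a toy lookup table (the codes and terms below are invented and are not actual NAGRADATA keys):

        # Invented thematic key: code -> plain-language definition.
        GEOLOGY_KEY = {
            "LST": "limestone",
            "SST": "sandstone",
            "GRN": "granite",
        }
        DECODE = GEOLOGY_KEY                              # retrieval: code -> definition
        ENCODE = {v: k for k, v in GEOLOGY_KEY.items()}   # input: definition -> code

        stored_record = ["SST", "LST"]              # what the databank keeps (compact codes)
        print([DECODE[c] for c in stored_record])   # what the user sees on retrieval
        print(ENCODE["granite"])                    # what is stored when data is entered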

  11. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  12. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated with the example of the WIMSD code, which belongs to the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, on the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  13. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
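    The same write-inputs / run / read-outputs pattern can be sketched generically in Python (hypothetical file names and executable; this is not the GoldSim DLL itself, only an illustration of the sequence of steps it automates):

        import subprocess
        from pathlib import Path

        def run_external_code(inputs, workdir="run", exe="external_app"):
            """Write inputs, run a (hypothetical) external code, and return its outputs."""
            work = Path(workdir)
            work.mkdir(exist_ok=True)

            # 1. Create the input file from the list of input values.
            (work / "inputs.txt").write_text("\n".join(str(v) for v in inputs))

            # 2. Run the external application in that working directory.
            subprocess.run([exe, "inputs.txt"], cwd=work, check=True)

            # 3. Read back the outputs the application is assumed to have written.
            return [float(line) for line in (work / "outputs.txt").read_text().split()]

        # Example call with made-up input values (requires the hypothetical executable):
        # results = run_external_code([1.0, 2.5, 0.3])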

  14. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  15. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  16. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination with code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation from the computer's materiality. Cramer is thus the voice of a new 'code avant-garde'. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: "art-oriented programming needs to acknowledge the conditions of its own making – its poesis." By analysing the Live Coding performances of Slub (where they program computer music live), the presentation...

  17. Majorana fermion codes

    International Nuclear Information System (INIS)

    Bravyi, Sergey; Terhal, Barbara M; Leemhuis, Bernhard

    2010-01-01

    We initiate the study of Majorana fermion codes (MFCs). These codes can be viewed as extensions of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of MFCs is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed that combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code.

  18. Theory of epigenetic coding.

    Science.gov (United States)

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  19. DISP1 code

    International Nuclear Information System (INIS)

    Vokac, P.

    1999-12-01

    DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)

  20. Phonological coding during reading.

    Science.gov (United States)

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  1. The aeroelastic code FLEXLAST

    Energy Technology Data Exchange (ETDEWEB)

    Visser, B. [Stork Product Eng., Amsterdam (Netherlands)]

    1996-09-01

    To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)

  2. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  3. QR codes for dummies

    CERN Document Server

    Waters, Joe

    2012-01-01

    Find out how to effectively create, use, and track QR codes QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown

  4. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged

  5. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution

    Czech Academy of Sciences Publication Activity Database

    Mukhopadhyay, N. D.; Sampson, A. J.; Deniz, D.; Carlsson, G. A.; Williamson, J.; Malušek, Alexandr

    2012-01-01

    Vol. 70, No. 1 (2012), pp. 315-323, ISSN 0969-8043. Institutional research plan: CEZ:AV0Z10480505. Keywords: Monte Carlo * correlated sampling * efficiency * uncertainty * bootstrap. Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders. Impact factor: 1.179, year: 2012. http://www.sciencedirect.com/science/article/pii/S0969804311004775

  6. Efficient Coding of Information: Huffman Coding -RE ...

    Indian Academy of Sciences (India)

    to a stream of equally-likely symbols so as to recover the original stream in the event of errors. ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x, written as ...
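    Since the record above is only a fragment, a compact sketch of Huffman coding itself may be helpful (a standard textbook construction, illustrative implementation only): frequent symbols receive short codewords and rare symbols receive long ones.

        import heapq
        from collections import Counter

        def huffman_code(text):
            """Build a prefix code for the symbols of `text` (shorter codes for frequent symbols)."""
            heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(text).items())]
            heapq.heapify(heap)
            while len(heap) > 1:
                f1, _, c1 = heapq.heappop(heap)
                f2, i2, c2 = heapq.heappop(heap)
                merged = {s: "0" + c for s, c in c1.items()}
                merged.update({s: "1" + c for s, c in c2.items()})
                heapq.heappush(heap, [f1 + f2, i2, merged])
            return heap[0][2]

        message = "this is an example of a huffman tree"
        code = huffman_code(message)
        encoded = "".join(code[s] for s in message)
        print(code)
        print("encoded length in bits:", len(encoded))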

  7. NR-code: Nonlinear reconstruction code

    Science.gov (United States)

    Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming

    2018-04-01

    NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

  8. "Hour of Code": Can It Change Students' Attitudes toward Programming?

    Science.gov (United States)

    Du, Jie; Wimmer, Hayden; Rada, Roy

    2016-01-01

    The Hour of Code is a one-hour introduction to computer science organized by Code.org, a non-profit dedicated to expanding participation in computer science. This study investigated the impact of the Hour of Code on students' attitudes towards computer programming and their knowledge of programming. A sample of undergraduate students from two…

  9. A description of the two-dimensional combustion code FLARE

    International Nuclear Information System (INIS)

    Martin, D.

    1986-07-01

    This report gives details of the computer code FLARE. The model used for the turbulent combustion of premixed gases is described. Details of the numerical scheme used to solve the resulting equations are discussed. The input and output for the code are also described. Details of the coding are given in the Appendices together with sample input and output. (author)

  10. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
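    The bootstrap step can be sketched generically as follows (invented per-history scores and a variance-ratio proxy for the efficiency gain, so this is an illustration of the shortest-interval bootstrap rather than the paper's actual algorithm):

        import numpy as np

        def shortest_ci(samples, level=0.95):
            """Shortest interval containing `level` of the bootstrap distribution."""
            s = np.sort(samples)
            k = int(np.ceil(level * len(s)))
            widths = s[k - 1:] - s[:len(s) - k + 1]
            i = int(np.argmin(widths))
            return s[i], s[i + k - 1]

        rng = np.random.default_rng(1)
        # Invented per-history scores for a correlated-sampling run and a conventional run.
        corr = rng.gamma(shape=2.0, scale=1.0, size=5000)
        conv = rng.gamma(shape=2.0, scale=1.3, size=5000)

        B = 2000
        gains = np.empty(B)
        for b in range(B):
            c = rng.choice(corr, size=corr.size, replace=True)
            u = rng.choice(conv, size=conv.size, replace=True)
            gains[b] = u.var() / c.var()   # variance ratio as a stand-in for the efficiency gain

        lo, hi = shortest_ci(gains)
        print(f"bootstrap 95% CI for the gain: [{lo:.3f}, {hi:.3f}]")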

  11. Synthesizing Certified Code

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  12. Code of Ethics

    Science.gov (United States)

    Division for Early Childhood, Council for Exceptional Children, 2009

    2009-01-01

    The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…

  13. Interleaved Product LDPC Codes

    OpenAIRE

    Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco

    2011-01-01

    Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low weight codewords, that gives a further advantage in the error floor region.

  14. Insurance billing and coding.

    Science.gov (United States)

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  15. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data (a byte ...). ... then you have a data storage system with error correction ... practical codes, storing such a table is infeasible, as it is generally too large.

  16. Scrum Code Camps

    DEFF Research Database (Denmark)

    Pries-Heje, Lene; Pries-Heje, Jan; Dalgaard, Bente

    2013-01-01

    is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...

  17. RFQ simulation code

    International Nuclear Information System (INIS)

    Lysenko, W.P.

    1984-04-01

    We have developed the RFQLIB simulation system to provide a means to systematically generate the new versions of radio-frequency quadrupole (RFQ) linac simulation codes that are required by the constantly changing needs of a research environment. This integrated system simplifies keeping track of the various versions of the simulation code and makes it practical to maintain complete and up-to-date documentation. In this scheme, there is a certain standard version of the simulation code that forms a library upon which new versions are built. To generate a new version of the simulation code, the routines to be modified or added are appended to a standard command file, which contains the commands to compile the new routines and link them to the routines in the library. The library itself is rarely changed. Whenever the library is modified, however, this modification is seen by all versions of the simulation code, which actually exist as different versions of the command file. All code is written according to the rules of structured programming. Modularity is enforced by not using COMMON statements, simplifying the relation of the data flow to a hierarchy diagram. Simulation results are similar to those of the PARMTEQ code, as expected, because of the similar physical model. Different capabilities, such as those for generating beams matched in detail to the structure, are available in the new code for help in testing new ideas in designing RFQ linacs

  18. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  19. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2013-03-26

    ... Energy Conservation Code. International Existing Building Code. International Fire Code. International... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code. International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...

  20. Code Development and Analysis Program: developmental checkout of the BEACON/MOD2A code

    International Nuclear Information System (INIS)

    Ramsthaler, J.A.; Lime, J.F.; Sahota, M.S.

    1978-12-01

    A best-estimate transient containment code, BEACON, is being developed by EG and G Idaho, Inc. for the Nuclear Regulatory Commission's reactor safety research program. This is an advanced, two-dimensional fluid flow code designed to predict temperatures and pressures in a dry PWR containment during a hypothetical loss-of-coolant accident. The most recent version of the code, MOD2A, is presently in the final stages of production prior to being released to the National Energy Software Center. As part of the final code checkout, seven sample problems were selected to be run with BEACON/MOD2A

  1. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to the incorrect estimation of the consequences of accidents and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. "agreement is within 10%". A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
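    One simple quantitative figure of merit of the kind alluded to (an illustrative root-mean-square relative deviation, not the specific method the author describes) can be computed for each code and used to rank them:

        import numpy as np

        def rms_relative_error(predicted, measured):
            """Root-mean-square relative deviation between code predictions and experiment."""
            predicted = np.asarray(predicted, dtype=float)
            measured = np.asarray(measured, dtype=float)
            return float(np.sqrt(np.mean(((predicted - measured) / measured) ** 2)))

        measured = [300.0, 320.0, 355.0, 410.0]             # invented experimental values
        codes = {"code A": [305.0, 318.0, 350.0, 430.0],    # invented code predictions
                 "code B": [290.0, 335.0, 370.0, 400.0]}

        for name in sorted(codes, key=lambda n: rms_relative_error(codes[n], measured)):
            print(name, round(100 * rms_relative_error(codes[name], measured), 2), "%")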

  2. Fracture flow code

    International Nuclear Information System (INIS)

    Dershowitz, W; Herbert, A.; Long, J.

    1989-03-01

    The hydrology of the SCV site will be modelled utilizing discrete fracture flow models. These models are complex and cannot be fully verified by comparison to analytical solutions. The best approach for verification of these codes is therefore cross-verification between different codes. This is complicated by the variation in assumptions and solution techniques utilized in different codes. Cross-verification procedures are defined which allow comparison of the codes developed by Harwell Laboratory, Lawrence Berkeley Laboratory, and Golder Associates Inc. Six cross-verification datasets are defined for deterministic and stochastic verification of geometric and flow features of the codes. Additional datasets for verification of transport features will be documented in a future report. (13 figs., 7 tabs., 10 refs.) (authors)

  3. General Monte Carlo code MONK

    International Nuclear Information System (INIS)

    Moore, J.G.

    1974-01-01

    The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form, i.e., the cross-section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form, but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described: geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)
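    The practical difference between the point and group representations can be sketched in a few lines (invented numbers, not MONK data): a point-form library tabulates the cross-section at specific energies and interpolates between them, while a group library keeps one representative value per energy interval.

        import numpy as np

        # Point form: cross-section tabulated at specific energies (invented values, barns vs eV).
        energy_grid = np.array([1.0, 10.0, 100.0, 1000.0])
        sigma_point = np.array([50.0, 14.0, 4.5, 1.2])

        def sigma_at(energy):
            """Interpolate the point-wise cross-section at a given energy (log-log interpolation)."""
            return float(np.exp(np.interp(np.log(energy), np.log(energy_grid), np.log(sigma_point))))

        # Group form: one value per energy interval (here simply the value at the group's
        # geometric mid-energy, standing in for a proper flux-weighted average).
        group_bounds = [(1.0, 10.0), (10.0, 100.0), (100.0, 1000.0)]
        sigma_group = [round(sigma_at((lo * hi) ** 0.5), 2) for lo, hi in group_bounds]

        print("point value at 30 eV:", round(sigma_at(30.0), 2))
        print("group-wise values:   ", sigma_group)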

  4. Microgravity computing codes. User's guide

    Science.gov (United States)

    1982-01-01

    Codes used in microgravity experiments to compute fluid parameters and to obtain data graphically are introduced. The computer programs are stored on two diskettes, compatible with the floppy disk drives of the Apple 2. Two versions of both disks are available (DOS-2 and DOS-3). The codes are written in BASIC and are structured as interactive programs. Interaction takes place through the keyboard of any Apple 2-48K standard system with single floppy disk drive. The programs are protected against wrong commands given by the operator. The programs are described step by step in the same order as the instructions displayed on the monitor. Most of these instructions are shown, with samples of computation and of graphics.

  5. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energies and weights from the CDFs. A source generation code is developed to transform three-dimensional (3D) power distributions in xyz geometry into source distributions in r θ z geometry for the J Discrete Ordinate Transport (JSNT) code. Validation against the PSC models of the Qinshan No.1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors is performed. Numerical results show that the theoretical model and the codes are both correct.
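    The sampling side can be illustrated with a generic inverse-CDF lookup (a schematic of discrete CDF sampling with invented weights, not the actual JMCT interface):

        import numpy as np

        rng = np.random.default_rng(0)

        # Invented discrete source distribution, e.g. relative emission strengths of four zones.
        weights = np.array([0.1, 0.4, 0.3, 0.2])
        cdf = np.cumsum(weights) / weights.sum()   # cumulative distribution function

        def sample_zone(n):
            """Sample n zone indices by inverting the CDF with a binary search."""
            u = rng.random(n)                      # uniform random numbers in [0, 1)
            return np.searchsorted(cdf, u, side="right")

        zones = sample_zone(100000)
        print("empirical frequencies:", np.bincount(zones, minlength=4) / zones.size)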

  6. Applications guide to the RSIC-distributed version of the MCNP code (coupled Monte Carlo neutron-photon Code)

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1985-09-01

    An overview of the RSIC-distributed version of the MCNP code (a coupled Monte Carlo neutron-photon code) is presented. All general features of the code, from machine hardware requirements to theoretical details, are discussed. The current nuclide cross-section and other libraries available in the standard code package are specified, and a realistic example of the flexible geometry input is given. Standard and nonstandard source, estimator, and variance-reduction procedures are outlined. Examples of correct usage and possible misuse of certain code features are presented graphically and in standard output listings. Finally, itemized summaries of sample problems, various MCNP code documentation, and future work are given

  7. Ethical Code Effectiveness in Football Clubs: A Longitudinal Analysis

    OpenAIRE

    Constandt, Bram; De Waegeneer, Els; Willem, Annick

    2017-01-01

    As football (soccer) clubs are facing different ethical challenges, many clubs are turning to ethical codes to counteract unethical behaviour. However, both in- and outside the sport field, uncertainty remains about the effectiveness of these ethical codes. For the first time, a longitudinal study design was adopted to evaluate code effectiveness. Specifically, a sample of non-professional football clubs formed the subject of our inquiry. Ethical code effectiveness was...

  8. Huffman coding in advanced audio coding standard

    Science.gov (United States)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.

  9. Development of HTGR plant dynamics simulation code

    International Nuclear Information System (INIS)

    Ohashi, Kazutaka; Tazawa, Yujiro; Mitake, Susumu; Suzuki, Katsuo.

    1987-01-01

    Plant dynamics simulation analysis plays an important role in the design work of a nuclear power plant, especially in plant safety analysis, control system analysis, and transient condition analysis. The authors have developed the plant dynamics simulation code named VESPER, which is applicable to the design work of the High Temperature Engineering Test Reactor, and have been improving the code in line with the design changes made in subsequent design work. This paper describes the outline of the VESPER code and shows sample calculation results selected from the recent design work. (author)

  10. Abstracts of digital computer code packages. Assembled by the Radiation Shielding Information Center. [Radiation transport codes

    Energy Technology Data Exchange (ETDEWEB)

    McGill, B.; Maskewitz, B.F.; Anthony, C.M.; Comolander, H.E.; Hendrickson, H.R.

    1976-01-01

    The term ''code package'' is used to describe a miscellaneous grouping of materials which, when interpreted in connection with a digital computer, enables the scientist-user to solve technical problems in the area for which the material was designed. In general, a ''code package'' consists of written material (reports, instructions, flow charts, listings of data, and other useful material) and IBM card decks (or, more often, a reel of magnetic tape) on which the source decks, sample problem input (including libraries of data) and the BCD/EBCDIC output listing from the sample problem are written. In addition to the main code, any available auxiliary routines are also included. The abstract format was chosen to give a potential code user several criteria for deciding whether or not he wishes to request the code package. (RWR)

  11. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  12. Report number codes

    International Nuclear Information System (INIS)

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name

  13. Cryptography cracking codes

    CERN Document Server

    2014-01-01

    While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption, essentially the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real-world uses, how codes can be broken, and the use of technology in this oft-overlooked field.

  14. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each...... instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...

  15. Transport theory and codes

    International Nuclear Information System (INIS)

    Clancy, B.E.

    1986-01-01

    This chapter begins with a neutron transport equation which includes the one dimensional plane geometry problems, the one dimensional spherical geometry problems, and numerical solutions. The section on the ANISN code and its look-alikes covers problems which can be solved; eigenvalue problems; outer iteration loop; inner iteration loop; and finite difference solution procedures. The input and output data for ANISN is also discussed. Two dimensional problems such as the DOT code are given. Finally, an overview of the Monte-Carlo methods and codes are elaborated on

  16. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
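
    The abstract does not spell out the stabilized linear inversion performed by INVERT; the sketch below shows one common form of stabilization, Tikhonov-regularized least squares, on a synthetic linear forward model (the matrix and data are placeholders, not Nevada Test Site gravity data).

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic linear forward problem d = G m + noise, standing in for the
      # linearised relation between anomaly topography and Bouguer gravity.
      n_data, n_model = 40, 20
      G = rng.normal(size=(n_data, n_model))
      m_true = np.sin(np.linspace(0, 3 * np.pi, n_model))
      d = G @ m_true + 0.05 * rng.normal(size=n_data)

      def tikhonov_invert(G, d, alpha):
          # Minimise ||G m - d||^2 + alpha^2 ||m||^2 (a standard stabilisation).
          A = G.T @ G + alpha**2 * np.eye(G.shape[1])
          return np.linalg.solve(A, G.T @ d)

      for alpha in (0.01, 0.1, 1.0):
          m_est = tikhonov_invert(G, d, alpha)
          print(f"alpha = {alpha}: model misfit = {np.linalg.norm(m_est - m_true):.3f}")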

  17. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity...... in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase...... the number of dimensions seen by the network using a linear mapping. Receivers can tradeoff computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof....

  18. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2018-01-01

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative.

  19. SASSYS LMFBR systems code

    International Nuclear Information System (INIS)

    Dunn, F.E.; Prohammer, F.G.; Weber, D.P.

    1983-01-01

    The SASSYS LMFBR systems analysis code is being developed mainly to analyze the behavior of the shut-down heat-removal system and the consequences of failures in the system, although it is also capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to sodium boiling in the core and possible melting of clad and fuel. The code includes a detailed SAS4A multi-channel core treatment plus a general thermal-hydraulic treatment of the primary and intermediate heat-transport loops and the steam generators. The code can handle any LMFBR design, loop or pool, with an arbitrary arrangement of components. The code is fast running: usually faster than real time

  20. OCA Code Enforcement

    Data.gov (United States)

    Montgomery County of Maryland — The Office of the County Attorney (OCA) processes Code Violation Citations issued by County agencies. The citations can be viewed by issued department, issued date...

  1. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code, which is capable of determining structural loads on a flexible, teetering, horizontal-axis wind turbine, is described, and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included, as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth-averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  2. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent (and therefore parallel) development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
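
    A minimal sketch of the levelization idea described above: given an assumed package-to-dependencies map (the package names are invented, not from the EAP code base), check that the 'uses' relationships form a directed acyclic graph and assign each package a level.

      from graphlib import TopologicalSorter, CycleError   # Python 3.9+

      # Hypothetical map: package -> packages it uses (all must sit at lower levels).
      deps = {
          "utils":  set(),
          "mesh":   {"utils"},
          "eos":    {"utils"},
          "hydro":  {"mesh", "eos"},
          "driver": {"hydro", "mesh"},
      }

      def levelize(deps):
          # Level of a package = 1 + highest level among the packages it uses.
          try:
              order = list(TopologicalSorter(deps).static_order())
          except CycleError as exc:
              raise SystemExit(f"not levelizable, dependency cycle: {exc.args[1]}")
          levels = {}
          for pkg in order:
              levels[pkg] = 1 + max((levels[d] for d in deps[pkg]), default=0)
          return levels

      print(levelize(deps))   # e.g. {'utils': 1, 'mesh': 2, 'eos': 2, 'hydro': 3, 'driver': 4}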

  3. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs

  4. VT ZIP Code Areas

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...

  5. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  6. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    2001-01-01

    The description of reactor lattice codes is carried out using the example of the WIMSD-5B code. The WIMS code, in its various versions, is the most widely recognised lattice code. It is used in all parts of the world for calculations of research and power reactors. The version WIMSD-5B is distributed free of charge by the NEA Data Bank. The description of its main features given in the present lecture follows the aspects defined previously for lattice calculations in the lecture on Reactor Lattice Transport Calculations. The spatial models are described, and the approach to the energy treatment is given. Finally, the specific algorithm applied in fuel depletion calculations is outlined. (author)

  7. Critical Care Coding for Neurologists.

    Science.gov (United States)

    Nuwer, Marc R; Vespa, Paul M

    2015-10-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  8. Lattice Index Coding

    OpenAIRE

    Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele

    2014-01-01

    The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...
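
    A toy noiseless, binary instance of the index coding setting described above (the Gaussian broadcast channel and lattice construction of the paper are not modelled): with three messages and receivers that each lack exactly one of them, a single XOR transmission serves all demands.

      # One-bit messages held by the sender (values are arbitrary).
      x = {1: 1, 2: 0, 3: 1}

      # Each receiver wants all messages but already knows a subset as side information.
      side_info = {"Rx A": {2, 3}, "Rx B": {1, 3}, "Rx C": {1, 2}}

      # The sender broadcasts a single coded bit: the XOR of all three messages.
      coded = x[1] ^ x[2] ^ x[3]

      for rx, known in side_info.items():
          missing = (set(x) - known).pop()
          recovered = coded
          for k in known:           # XOR out the messages already known at this receiver
              recovered ^= x[k]
          assert recovered == x[missing]
          print(f"{rx} recovers message {missing} = {recovered}")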

  9. Towards advanced code simulators

    International Nuclear Information System (INIS)

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  10. Cracking the Gender Codes

    DEFF Research Database (Denmark)

    Rennison, Betina Wolfgang

    2016-01-01

    extensive work to raise the proportion of women. This has helped slightly, but women remain underrepresented at the corporate top. Why is this so? What can be done to solve it? This article presents five different types of answers relating to five discursive codes: nature, talent, business, exclusion...... in leadership management, we must become more aware and take advantage of this complexity. We must crack the codes in order to crack the curve....

  11. The computer code SEURBNUK-2

    International Nuclear Information System (INIS)

    Yerkess, A.

    1984-01-01

    SEURBNUK-2 has been designed to model the hydrodynamic development in time of a hypothetical core disruptive accident in a fast breeder reactor. SEURBNUK-2 is a two-dimensional, axisymmetric, eulerian, finite difference containment code. The numerical procedure adopted in SEURBNUK to solve the hydrodynamic equations is based on the semi-implicit ICE method. SEURBNUK has a full thin shell treatment for tanks of arbitrary shape and includes the effects of the compressibility of the fluid. Fluid flow through porous media and porous structures can also be accommodated. An important feature of SEURBNUK is that the thin shell equations are solved quite separately from those of the fluid, and the time step for the fluid flow calculation can be an integer multiple of that for calculating the shell motion. The interaction of the shell with the fluid is then considered as a modification to the coefficients in the implicit pressure equations, the modifications naturally depending on the behaviour of the thin shell section within the fluid cell. The code is limited to dealing with a single fluid, the coolant, whereas the bubble and the cover gas are treated as cavities of uniform pressure calculated via appropriate pressure-volume-energy relationships. This manual describes the input data specifications needed for the execution of SEURBNUK-2 calculations, and nine sample problems of varying degrees of complexity highlight the code's capabilities. After explaining the output facilities, information is included to help those unfamiliar with SEURBNUK-2 avoid the common pitfalls experienced by novices.

  12. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  13. KENO-V code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16-group Hansen-Roach cross sections and P1 scattering, was one of the first multigroup Monte Carlo codes, and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218-group set is distributed with the code) and has a general PN scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system k-eff, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes.

  14. Code, standard and specifications

    International Nuclear Information System (INIS)

    Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail

    2008-01-01

    Radiography, like other techniques, needs standards. These standards are widely used and the methods of applying them are well established, so radiographic testing is only practised on the basis of the regulations mentioned and documented. These regulations or guidelines are documented in codes, standards and specifications. In Malaysia, level one and basic radiographers can carry out radiographic work based on instructions given by a level two or level three radiographer. These instructions are produced based on the guidelines mentioned in the documents, and the level two radiographer must follow the specifications given in the standard when writing the instructions. From this scenario it is clear that radiography is a type of work in which everything must follow the rules. For the code, radiography follows the code of the American Society of Mechanical Engineers (ASME), and the only code of this kind in Malaysia at this time is the rule published by the Atomic Energy Licensing Board (AELB), known as the Practical Code for Radiation Protection in Industrial Radiography. With the existence of this code, all radiography must automatically follow the rule or standard.

  15. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...

  16. Temporal Coding of Volumetric Imagery

    Science.gov (United States)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration
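
    A minimal numerical sketch of the coded-exposure idea described above, using a generic shifting binary mask rather than the actual CACTI hardware or reconstruction algorithm; sizes and data are synthetic.

      import numpy as np

      rng = np.random.default_rng(1)

      T, H, W = 8, 32, 32                       # frames, height, width (toy sizes)
      video = rng.random((T, H, W))             # synthetic video volume (x, y, t)

      # One binary mask, shifted vertically each frame to mimic a translating coded aperture.
      base_mask = (rng.random((H, W)) > 0.5).astype(float)
      masks = np.stack([np.roll(base_mask, shift=t, axis=0) for t in range(T)])

      # Coded snapshot: the temporal dimension is embedded into one (x, y) measurement.
      snapshot = (masks * video).sum(axis=0)

      print("video volume", video.shape, "-> coded snapshot", snapshot.shape)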

  17. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    A new code structure for spectral amplitude coding optical code division multiple access (OCDMA) systems based on double weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is a variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. A much better performance can be provided by using the EDW code compared to existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes. Both theoretical analysis and simulation show that EDW gives much better performance than the Hadamard and MFH codes.

  18. Nuclear code abstracts (1975 edition)

    International Nuclear Information System (INIS)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-02-01

    Nuclear Code Abstracts is compiled in the Nuclear Code Committee to exchange information of the nuclear code developments among members of the committee. Enlarging the collection, the present one includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those in the library. (auth.)

  19. Some new ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2017-07-01

    Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
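
    As a small worked example of the $[n,k,d]_q$ notation above (the generator matrix is an arbitrary illustrative choice, not one of the 22 new codes of the paper), the sketch enumerates all codewords of a ternary linear code and reports its minimum distance.

      from itertools import product

      # Generator matrix of a small ternary [4, 2] code over GF(3) (example only).
      G = [[1, 0, 1, 1],
           [0, 1, 1, 2]]
      q, n, k = 3, 4, 2

      def codeword(message):
          # Encode a length-k message as message * G, with arithmetic modulo q.
          return tuple(sum(m * g for m, g in zip(message, col)) % q for col in zip(*G))

      codewords = {codeword(m) for m in product(range(q), repeat=k)}
      # For a linear code, the minimum distance equals the minimum nonzero weight.
      d = min(sum(c != 0 for c in cw) for cw in codewords if any(cw))
      print(f"[{n},{k},{d}]_{q} code with {len(codewords)} codewords")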

  20. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Jow, H.N.; Sprung, J.L.; Ritchie, L.T.; Rollstin, J.A.; Chanin, D.I.

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management. 59 refs., 14 figs., 15 tabs

  1. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Rollstin, J.A.; Chanin, D.I.; Jow, H.N.

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projections, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management

  2. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Jow, H.N.; Sprung, J.L.; Ritchie, L.T. (Sandia National Labs., Albuquerque, NM (USA)); Rollstin, J.A. (GRAM, Inc., Albuquerque, NM (USA)); Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management. 59 refs., 14 figs., 15 tabs.

  3. ACE - Manufacturer Identification Code (MID)

    Data.gov (United States)

    Department of Homeland Security — The ACE Manufacturer Identification Code (MID) application is used to track and control identifications codes for manufacturers. A manufacturer is identified on an...

  4. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
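
    As a concrete illustration of the linear codes mentioned above, the sketch below encodes four data bits with the classic binary (7,4) Hamming code in systematic form and corrects a single flipped bit via its syndrome; the message bits are arbitrary.

      import numpy as np

      # Generator and parity-check matrices of the (7,4) Hamming code, systematic form.
      G = np.array([[1, 0, 0, 0, 1, 1, 0],
                    [0, 1, 0, 0, 1, 0, 1],
                    [0, 0, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])
      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])

      msg = np.array([1, 0, 1, 1])          # arbitrary 4-bit message
      codeword = msg @ G % 2                # 7-bit codeword

      received = codeword.copy()
      received[2] ^= 1                      # flip one bit to simulate a channel error

      syndrome = H @ received % 2           # a nonzero syndrome pinpoints the error
      error_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
      corrected = received.copy()
      corrected[error_pos] ^= 1

      print("corrected ok:", np.array_equal(corrected, codeword))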

  5. Optical coding theory with Prime

    CERN Document Server

    Kwong, Wing C

    2013-01-01

    Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory-until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct

  6. The Aster code

    International Nuclear Information System (INIS)

    Delbecq, J.M.

    1999-01-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  7. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
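
    A toy illustration of the syndrome idea behind the scheme described above, using a small Hamming parity-check matrix and at most one bit of disagreement between source and side information; the block-candidate model, doping bits and sum-product decoding of the paper are not reproduced here.

      import numpy as np

      # Parity-check matrix of the (7,4) Hamming code; its syndrome locates one error.
      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])

      x = np.array([1, 0, 1, 1, 0, 0, 1])   # source sequence at the encoder
      y = x.copy()
      y[5] ^= 1                             # correlated side information at the decoder

      # The encoder sends only the 3 syndrome bits instead of the 7 source bits.
      syndrome = H @ x % 2

      # Decoder: the syndrome of (x XOR y) reveals the single position where they differ.
      diff_syndrome = (H @ y + syndrome) % 2
      x_hat = y.copy()
      if diff_syndrome.any():
          pos = next(i for i in range(7) if np.array_equal(H[:, i], diff_syndrome))
          x_hat[pos] ^= 1

      print("recovered correctly:", np.array_equal(x_hat, x), f"({len(syndrome)} bits sent)")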

  8. Speech coding code- excited linear prediction

    CERN Document Server

    Bäckström, Tom

    2017-01-01

    This book provides a scientific understanding of the most central techniques used in speech coding, both for advanced students and for professionals with a background in speech, audio, and/or digital signal processing. It provides a clear connection between the whys, hows, and whats, thus enabling a clear view of the necessity, purpose, and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: what do we want to achieve, and especially why is this goal important? Resource (information): what information is available and how can it be useful? Resource (platform): what kind of platforms are we working with and what are their capabilities and restrictions? This includes the computational, memory and acoustic properties and the transmission capacity of the devices used. The book goes on to address Solutions: which solutions have been proposed and how can they be used to reach the stated goals, and ...

  9. Preliminary Coupling of MATRA Code for Multi-physics Analysis

    International Nuclear Information System (INIS)

    Kim, Seongjin; Choi, Jinyoung; Yang, Yongsik; Kwon, Hyouk; Hwang, Daehyun

    2014-01-01

    The boundary conditions, such as the inlet temperature, mass flux, averaged heat flux, power distributions of the rods, and core geometry, are given as constant values or functions of time. These conditions are separately calculated and provided to the MATRA code by other codes, such as a neutronics or a system code. In addition, this work focuses on and implements the coupling of several codes from different physics fields. In this study, multi-physics coupling methods were developed for a subchannel code (MATRA) with neutronics codes (MASTER, DeCART) and a fuel performance code (FRAPCON-3). Preliminary evaluation results for representative sample cases are presented. The MASTER and DeCART codes provide the power distribution of the rods in the core to the MATRA code. In the case of the FRAPCON-3 code, the variation of the rod diameter induced by thermal expansion is calculated and provided. The MATRA code transfers back the thermal-hydraulic conditions that each code needs. Moreover, the coupling method with each code is described.
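
    A schematic of the kind of code-to-code data exchange described above, reduced to two toy solvers that pass boundary data back and forth once per step; the functions and coefficients are invented placeholders, not the MATRA, MASTER/DeCART or FRAPCON-3 interfaces.

      def neutronics_step(coolant_temp):
          # Toy power model: rod power drops slightly as the coolant temperature rises.
          return 100.0 - 0.05 * (coolant_temp - 300.0)

      def subchannel_step(rod_power, inlet_temp=300.0):
          # Toy thermal-hydraulic model: coolant heats up in proportion to rod power.
          return inlet_temp + 0.5 * rod_power

      # Explicit (operator-split) coupling: each solver advances on the other's last result.
      coolant_temp = 300.0
      for step in range(5):
          rod_power = neutronics_step(coolant_temp)      # "neutronics -> subchannel" data
          coolant_temp = subchannel_step(rod_power)      # "subchannel -> neutronics" data
          print(f"step {step}: power = {rod_power:.2f}, coolant T = {coolant_temp:.2f}")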

  10. Operational reactor physics analysis codes (ORPAC)

    International Nuclear Information System (INIS)

    Kumar, Jainendra; Singh, K.P.; Singh, Kanchhi

    2007-07-01

    For efficient, smooth and safe operation of a nuclear research reactor, many reactor physics evaluations are regularly required. As part of reactor core management, the important activities are maintaining the core reactivity status, core power distribution and xenon estimations, safety evaluation of in-pile irradiation samples and experimental assemblies, and assessment of nuclear safety in fuel handling/storage. In-pile irradiation of samples requires a prior estimation of the reactivity load due to the sample, the heating rate, and the activity developed in it during irradiation. For the safety of personnel handling irradiated samples, the dose rate at the surface of the shielded flask housing the irradiated sample should be less than 200 mR/hr. Therefore, proper shielding and radioactive cooling of the irradiated sample are required to meet this requirement. Knowledge of the xenon load variation with time (the startup curve) helps in estimating the xenon override time. Monitoring of power in individual fuel channels during reactor operation is essential to detect any abnormal power distribution and avoid unsafe situations. The complexity of estimating the above-mentioned reactor parameters, and the frequency with which they are required, compel one to use computer codes to avoid possible human errors. For efficient and quick evaluation of parameters related to reactor operations, such as xenon load, critical moderator height, and nuclear heating and reactivity load of isotope samples/experimental assemblies, a computer code ORPAC (Operational Reactor Physics Analysis Codes) has been developed. This code is being used for regular assessment of reactor physics parameters in Dhruva and Cirus. The code ORPAC, written in the Visual Basic 6.0 environment, incorporates several important operational reactor physics aspects on a single platform with graphical user interfaces (GUI) to make it more user-friendly and presentable. (author)

  11. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes the essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is also described. 7 pictures

  12. Graph sampling

    OpenAIRE

    Zhang, L.-C.; Patone, M.

    2017-01-01

    We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
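
    A minimal sketch of Horvitz-Thompson estimation as mentioned above, applied to an invented node-level quantity under a simple Poisson sampling design rather than an actual T-stage snowball sample; each sampled unit is weighted by the inverse of its known inclusion probability.

      import random

      random.seed(3)

      # Hypothetical node-level values whose population total we want to estimate.
      values = {node: v for node, v in enumerate([3, 1, 4, 1, 5, 9, 2, 6])}

      # Known, unequal inclusion probabilities assumed to come from the sampling design.
      incl_prob = {node: 0.2 + 0.05 * node for node in values}

      def horvitz_thompson(sample):
          # Unbiased estimator of the total: sum of y_i / pi_i over sampled units.
          return sum(values[i] / incl_prob[i] for i in sample)

      # One Poisson sample: include each node independently with its own probability.
      sample = [i for i in values if random.random() < incl_prob[i]]
      print("sample:", sample)
      print("HT estimate:", round(horvitz_thompson(sample), 2),
            "| true total:", sum(values.values()))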

  13. Spatially coded backscatter radiography

    International Nuclear Information System (INIS)

    Thangavelu, S.; Hussein, E.M.A.

    2007-01-01

    Conventional radiography requires access to two opposite sides of an object, which makes it unsuitable for the inspection of extended and/or thick structures (airframes, bridges, floors etc.). Backscatter imaging can overcome this problem, but the indications obtained are difficult to interpret. This paper applies the coded aperture technique to gamma-ray backscatter-radiography in order to enhance the detectability of flaws. This spatial coding method involves the positioning of a mask with closed and open holes to selectively permit or block the passage of radiation. The obtained coded-aperture indications are then mathematically decoded to detect the presence of anomalies. Indications obtained from Monte Carlo calculations were utilized in this work to simulate radiation scattering measurements. These simulated measurements were used to investigate the applicability of this technique to the detection of flaws by backscatter radiography

  14. Aztheca Code; Codigo Aztheca

    Energy Technology Data Exchange (ETDEWEB)

    Quezada G, S.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico); Centeno P, J.; Sanchez M, H., E-mail: sequga@gmail.com [UNAM, Facultad de Ingenieria, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)

    2017-09-15

    This paper presents the Aztheca code, which is formed by the mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer, when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  15. The Coding Question.

    Science.gov (United States)

    Gallistel, C R

    2017-07-01

    Recent electrophysiological results imply that the duration of the stimulus onset asynchrony in eyeblink conditioning is encoded by a mechanism intrinsic to the cerebellar Purkinje cell. This raises the general question - how is quantitative information (durations, distances, rates, probabilities, amounts, etc.) transmitted by spike trains and encoded into engrams? The usual assumption is that information is transmitted by firing rates. However, rate codes are energetically inefficient and computationally awkward. A combinatorial code is more plausible. If the engram consists of altered synaptic conductances (the usual assumption), then we must ask how numbers may be written to synapses. It is much easier to formulate a coding hypothesis if the engram is realized by a cell-intrinsic molecular mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Code query by example

    Science.gov (United States)

    Vaucouleur, Sebastien

    2011-02-01

    We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.

  17. The EGS5 Code System

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, Hideo; Namito, Yoshihito; /KEK, Tsukuba; Bielajew, Alex F.; Wilderman, Scott J.; U., Michigan; Nelson, Walter R.; /SLAC

    2005-12-20

    , a deliberate attempt was made to present example problems in order to help the user 'get started', and we follow that spirit in this report. A series of elementary tutorial user codes are presented in Chapter 3, with more sophisticated sample user codes described in Chapter 4. Novice EGS users will find it helpful to read through the initial sections of the EGS5 User Manual (provided in Appendix B of this report), proceeding then to work through the tutorials in Chapter 3. The User Manuals and other materials found in the appendices contain detailed flow charts, variable lists, and subprogram descriptions of EGS5 and PEGS. Included are step-by-step instructions for developing basic EGS5 user codes and for accessing all of the physics options available in EGS5 and PEGS. Once acquainted with the basic structure of EGS5, users should find the appendices the most frequently consulted sections of this report.

  18. The correspondence between projective codes and 2-weight codes

    NARCIS (Netherlands)

    Brouwer, A.E.; Eupen, van M.J.M.; Tilborg, van H.C.A.; Willems, F.M.J.

    1994-01-01

    The hyperplanes intersecting a 2-weight code in the same number of points obviously form the point set of a projective code. On the other hand, if we have a projective code C, then we can make a 2-weight code by taking the multiset of points ∈ PC with multiplicity γ(w), where w is the weight of

  19. Visualizing code and coverage changes for code review

    NARCIS (Netherlands)

    Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.

    2016-01-01

    One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.

  20. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    African Journals Online (AJOL)

    Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...

  1. Code of Medical Ethics

    Directory of Open Access Journals (Sweden)

    . SZD-SZZ

    2017-03-01

    The Code was approved on December 12, 1992, at the 3rd regular meeting of the General Assembly of the Medical Chamber of Slovenia and revised on April 24, 1997, at the 27th regular meeting of the General Assembly of the Medical Chamber of Slovenia. The Code was updated and harmonized with the Medical Association of Slovenia and approved on October 6, 2016, at the regular meeting of the General Assembly of the Medical Chamber of Slovenia.

  2. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.
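
    The sketch below illustrates the composite objective described above (a reconstruction term plus a supervised regularization term) on a plain, non-convolutional toy sparse coding problem; the hinge-style supervised term, weights and data are assumptions for illustration, not the formulation of the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      n_features, n_atoms, n_samples = 20, 10, 50
      X = rng.normal(size=(n_features, n_samples))      # signals to reconstruct
      labels = rng.integers(0, 2, size=n_samples)       # binary labels for supervision
      D = rng.normal(size=(n_features, n_atoms))        # dictionary (to be learned)
      Z = 0.1 * rng.normal(size=(n_atoms, n_samples))   # sparse codes
      w = 0.1 * rng.normal(size=n_atoms)                # linear classifier on the codes

      lam_sparse, lam_sup = 0.1, 1.0                    # trade-off weights (assumed)

      def objective(D, Z, w):
          recon = 0.5 * np.sum((X - D @ Z) ** 2)        # reconstructive (unsupervised) term
          sparsity = lam_sparse * np.sum(np.abs(Z))     # sparsity penalty on the codes
          # Hinge-style supervised term: codes should be separable by the classifier w.
          supervised = lam_sup * np.sum(np.maximum(0.0, 1.0 - (2 * labels - 1) * (w @ Z)))
          return recon + sparsity + supervised

      # In practice D, Z and w would be updated alternately to minimise this objective.
      print("composite objective at random initialisation:", round(objective(D, Z, w), 1))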

  3. CONCEPT computer code

    International Nuclear Information System (INIS)

    Delene, J.

    1984-01-01

    CONCEPT is a computer code that will provide conceptual capital investment cost estimates for nuclear and coal-fired power plants. The code can develop an estimate for construction at any point in time. Any unit size within the range of about 400 to 1300 MW electric may be selected. Any of 23 reference site locations across the United States and Canada may be selected. PWR, BWR, and coal-fired plants burning high-sulfur and low-sulfur coal can be estimated. Multiple-unit plants can be estimated. Costs due to escalation/inflation and interest during construction are calculated

  4. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. The book outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  5. Evaluation Codes from an Affine Veriety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study...... includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes.

  6. Dual Coding in Children.

    Science.gov (United States)

    Burton, John K.; Wildman, Terry M.

    The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…

  7. Physical layer network coding

    DEFF Research Database (Denmark)

    Fukui, Hironori; Popovski, Petar; Yomo, Hiroyuki

    2014-01-01

    Physical layer network coding (PLNC) has been proposed to improve throughput of the two-way relay channel, where two nodes communicate with each other, being assisted by a relay node. Most of the works related to PLNC are focused on a simple three-node model and they do not take into account...
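
    A toy bit-level illustration of the two-way relay exchange that PLNC targets, abstracting away the physical-layer superposition and channel noise the actual scheme operates on: the relay forwards one XOR-combined packet and both end nodes recover each other's data.

      # Node A and node B each want to deliver one byte to the other via relay R.
      a_data = 0b10110010
      b_data = 0b01101100

      # Multiple-access phase: the relay ends up with the XOR of the two packets
      # (in true PLNC this is decoded directly from the superimposed signals).
      relay_packet = a_data ^ b_data

      # Broadcast phase: both end nodes receive the same network-coded packet.
      recovered_at_a = relay_packet ^ a_data    # A removes its own data to obtain B's
      recovered_at_b = relay_packet ^ b_data    # B removes its own data to obtain A's

      assert recovered_at_a == b_data and recovered_at_b == a_data
      print("bidirectional exchange completed in 2 phases instead of 4")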

  8. Radioactive action code

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    A new coding system, 'Hazrad', for buildings and transportation containers for alerting emergency services personnel to the presence of radioactive materials has been developed in the United Kingdom. The hazards of materials in the buildings or transport container, together with the recommended emergency action, are represented by a number of codes which are marked on the building or container and interpreted from a chart carried as a pocket-size guide. Buildings would be marked with the familiar yellow 'radioactive' trefoil, the written information 'Radioactive materials' and a list of isotopes. Under this the 'Hazrad' code would be written - three symbols to denote the relative radioactive risk (low, medium or high), the biological risk (also low, medium or high) and the third showing the type of radiation emitted, alpha, beta or gamma. The response cards indicate appropriate measures to take, eg for a high biological risk, Bio3, the wearing of a gas-tight protection suit is advised. The code and its uses are explained. (U.K.)

  9. Building Codes and Regulations.

    Science.gov (United States)

    Fisher, John L.

    The hazard of fire is of great concern to libraries due to combustible books and new plastics used in construction and interiors. Building codes and standards can offer architects and planners guidelines to follow but these standards should be closely monitored, updated, and researched for fire prevention. (DS)

  10. Physics of codes

    International Nuclear Information System (INIS)

    Cooper, R.K.; Jones, M.E.

    1989-01-01

    The title given this paper is a bit presumptuous, since one can hardly expect to cover the physics incorporated into all the codes already written and currently being written. The authors focus on those codes which have been found to be particularly useful in the analysis and design of linacs. At that the authors will be a bit parochial and discuss primarily those codes used for the design of radio-frequency (rf) linacs, although the discussions of TRANSPORT and MARYLIE have little to do with the time structures of the beams being analyzed. The plan of this paper is first to describe rather simply the concepts of emittance and brightness, then to describe rather briefly each of the codes TRANSPORT, PARMTEQ, TBCI, MARYLIE, and ISIS, indicating what physics is and is not included in each of them. It is expected that the vast majority of what is covered will apply equally well to protons and electrons (and other particles). This material is intended to be tutorial in nature and can in no way be expected to be exhaustive. 31 references, 4 figures

  11. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  12. Ready, steady… Code!

    CERN Multimedia

    Anaïs Schaeffer

    2013-01-01

    This summer, CERN took part in the Google Summer of Code programme for the third year in succession. Open to students from all over the world, this programme leads to very successful collaborations for open source software projects.   Image: GSoC 2013. Google Summer of Code (GSoC) is a global programme that offers student developers grants to write code for open-source software projects. Since its creation in 2005, the programme has brought together some 6,000 students from over 100 countries worldwide. The students selected by Google are paired with a mentor from one of the participating projects, which can be led by institutes, organisations, companies, etc. This year, CERN PH Department’s SFT (Software Development for Experiments) Group took part in the GSoC programme for the third time, submitting 15 open-source projects. “Once published on the Google Summer for Code website (in April), the projects are open to applications,” says Jakob Blomer, one of the o...

  13. CERN Code of Conduct

    CERN Document Server

    Department, HR

    2010-01-01

    The Code is intended as a guide in helping us, as CERN contributors, to understand how to conduct ourselves, treat others and expect to be treated. It is based around the five core values of the Organization. We should all become familiar with it and try to incorporate it into our daily life at CERN.

  14. Nuclear safety code study

    Energy Technology Data Exchange (ETDEWEB)

    Hu, H.H.; Ford, D.; Le, H.; Park, S.; Cooke, K.L.; Bleakney, T.; Spanier, J.; Wilburn, N.P.; O' Reilly, B.; Carmichael, B.

    1981-01-01

    The objective is to analyze an overpower accident in an LMFBR. A simplified model of the primary coolant loop was developed in order to understand the instabilities encountered with the MELT III and SAS codes. The computer programs were translated for switching to the IBM 4331. Numerical methods were investigated for solving the neutron kinetics equations; the Adams and Gear methods were compared. (DLC)

  15. Revised C++ coding conventions

    CERN Document Server

    Callot, O

    2001-01-01

    This document replaces the note LHCb 98-049 by Pavel Binko. After a few years of practice, some simplification and clarification of the rules was needed. As many more people have now some experience in writing C++ code, their opinion was also taken into account to get a commonly agreed set of conventions

  16. Corporate governance through codes

    NARCIS (Netherlands)

    Haxhi, I.; Aguilera, R.V.; Vodosek, M.; den Hartog, D.; McNett, J.M.

    2014-01-01

    The UK's 1992 Cadbury Report defines corporate governance (CG) as the system by which businesses are directed and controlled. CG codes are a set of best practices designed to address deficiencies in the formal contracts and institutions by suggesting prescriptions on the preferred role and

  17. Error Correcting Codes -34 ...

    Indian Academy of Sciences (India)

    information and coding theory. A large scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by train-.

  18. Broadcast Coded Slotted ALOHA

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Brännström, Frederik; Graell i Amat, Alexandre

    2016-01-01

    We propose an uncoordinated medium access control (MAC) protocol, called all-to-all broadcast coded slotted ALOHA (B-CSA) for reliable all-to-all broadcast with strict latency constraints. In B-CSA, each user acts as both transmitter and receiver in a half-duplex mode. The half-duplex mode gives ...

  19. Software Defined Coded Networking

    DEFF Research Database (Denmark)

    Di Paola, Carla; Roetter, Daniel Enrique Lucani; Palazzo, Sergio

    2017-01-01

    the quality of each link and even across neighbouring links and using simulations to show that an additional reduction of packet transmission in the order of 40% is possible. Second, to advocate for the use of network coding (NC) jointly with software defined networking (SDN) providing an implementation...

  20. New code of conduct

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    During his talk to the staff at the beginning of the year, the Director-General mentioned that a new code of conduct was being drawn up. What exactly is it and what is its purpose? Anne-Sylvie Catherin, Head of the Human Resources (HR) Department, talked to us about the whys and wherefores of the project.   Drawing by Georges Boixader from the cartoon strip “The World of Particles” by Brian Southworth. A code of conduct is a general framework laying down the behaviour expected of all members of an organisation's personnel. “CERN is one of the very few international organisations that don’t yet have one", explains Anne-Sylvie Catherin. “We have been thinking about introducing a code of conduct for a long time but lacked the necessary resources until now”. The call for a code of conduct has come from different sources within the Laboratory. “The Equal Opportunities Advisory Panel (read also the "Equal opportuni...

  1. (Almost) practical tree codes

    KAUST Repository

    Khina, Anatoly

    2016-08-15

    We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.

  2. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    having a probability Pi of being equal to a 1. Let us assume ... equal to a 0/1 has no bearing on the probability of the. It is often ... bits (call this set S) whose individual bits add up to zero ... In the context of binary error-correcting codes, specifi-.

  3. The Redox Code.

    Science.gov (United States)

    Jones, Dean P; Sies, Helmut

    2015-09-20

    The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O₂ and H₂O₂ contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine.

  4. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    Science.gov (United States)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that can deviate the direction of an X-ray beam, which can considerably increase the implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded aperture approximation decreases by at most 12.5% compared with the phase coded apertures, and that the quality of the reconstructions using the boolean approximations is at most 2.5 dB of PSNR below that of the phase coded aperture reconstructions.
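
    As a rough numerical illustration of the block-unblock idea, the sketch below modulates a random object with a binary (block-unblock) aperture, forms intensity-only diffraction patterns with an FFT, and computes the PSNR figure of merit used in the paper. The object, aperture and image size are arbitrary placeholders, not the SAXS/WAXS simulation of the paper.

```python
import numpy as np

# Minimal numerical sketch, not the paper's pipeline: a block-unblock (binary)
# coded aperture modulates the object before a far-field (Fourier) diffraction
# measurement, and PSNR quantifies how close a reconstruction is to a reference.

rng = np.random.default_rng(0)
obj = rng.random((64, 64))                      # stand-in for the crystal's exit field
aperture = rng.integers(0, 2, size=obj.shape)   # block (0) / unblock (1) entries

coded = aperture * obj                          # binary modulation (no phase shifts)
diffraction = np.abs(np.fft.fft2(coded)) ** 2   # intensity-only coded diffraction pattern

def psnr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Peak Signal to Noise Ratio in dB for images scaled to [0, 1]."""
    mse = np.mean((reference - estimate) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)

# e.g. compare a (hypothetical) reconstruction against the true object:
print(psnr(obj, coded))
```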

  5. Stochastic geometry in PRIZMA code

    International Nuclear Information System (INIS)

    Malyshkin, G. N.; Kashaeva, E. A.; Mukhamadiev, R. F.

    2007-01-01

    The paper describes a method used to simulate radiation transport through random media - randomly placed grains in a matrix material. The method models the medium consequently from one grain crossed by particle trajectory to another. Like in the Limited Chord Length Sampling (LCLS) method, particles in grains are tracked in the actual grain geometry, but unlike LCLS, the medium is modeled using only Matrix Chord Length Sampling (MCLS) from the exponential distribution and it is not necessary to know the grain chord length distribution. This helped us extend the method to media with randomly oriented arbitrarily shaped convex grains. Other extensions include multicomponent media - grains of several sorts, and polydisperse media - grains of different sizes. Sort and size distributions of crossed grains were obtained and an algorithm was developed for sampling grain orientations and positions. Special consideration was given to medium modeling at the boundary of the stochastic region. The method was implemented in the universal 3D Monte Carlo code PRIZMA. The paper provides calculated results for a model problem where we determine volume fractions of modeled components crossed by particle trajectories. It also demonstrates the use of biased sampling techniques implemented in PRIZMA for solving a problem of deep penetration in model random media. Described are calculations for the spectral response of a capacitor dose detector whose anode was modeled with account for its stochastic structure. (authors)
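
    A toy version of the Matrix Chord Length Sampling step can be sketched as follows; the mean matrix chord, the grain radii and the sort fractions are assumed values, and the spherical-grain chord formula stands in for the arbitrarily shaped convex grains handled by PRIZMA.

```python
import numpy as np

# Toy illustration of Matrix Chord Length Sampling (MCLS): path lengths in the
# matrix are drawn from an exponential distribution, after which the particle
# enters a randomly chosen grain whose chord is sampled from its own geometry.
# All numerical parameters below are assumed for illustration.

rng = np.random.default_rng(1)
mean_matrix_chord = 2.0                 # assumed mean chord in the matrix material
grain_radii = {"A": 0.3, "B": 0.6}      # assumed sphere radii for two grain sorts
sort_fractions = {"A": 0.7, "B": 0.3}   # assumed number fractions of each sort

path = {"matrix": 0.0, "A": 0.0, "B": 0.0}
for _ in range(100_000):
    path["matrix"] += rng.exponential(mean_matrix_chord)
    sort = rng.choice(list(grain_radii), p=list(sort_fractions.values()))
    radius = grain_radii[sort]
    # isotropic chord through a sphere of radius R: t = 2 * R * sqrt(xi)
    path[sort] += 2.0 * radius * np.sqrt(rng.random())

total = sum(path.values())
print({k: round(v / total, 4) for k, v in path.items()})  # track-length fractions
```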

  6. Z₂-double cyclic codes

    OpenAIRE

    Borges, J.

    2014-01-01

    A binary linear code C is a Z2-double cyclic code if the set of coordinates can be partitioned into two subsets such that any cyclic shift of the coordinates of both subsets leaves invariant the code. These codes can be identified as submodules of the Z2[x]-module Z2[x]/(x^r − 1) × Z2[x]/(x^s − 1). We determine the structure of Z2-double cyclic codes giving the generator polynomials of these codes. The related polynomial representation of Z2-double cyclic codes and its duals, and the relation...
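
    A minimal sanity check of the defining property, using the trivial direct sum of two binary cyclic codes (not one of the codes classified in the paper): shifting the first r coordinates and the last s coordinates cyclically, and simultaneously, maps every codeword back into the code.

```python
from itertools import product

# Trivially Z2-double cyclic example: the direct sum of two binary cyclic codes.
# C1 is the length-3 repetition code, C2 the length-4 even-weight code (r=3, s=4).

C1 = {(0, 0, 0), (1, 1, 1)}                                     # cyclic, length 3
C2 = {w for w in product((0, 1), repeat=4) if sum(w) % 2 == 0}  # cyclic, length 4
code = {a + b for a in C1 for b in C2}                          # direct sum, length 7

def double_shift(word, r=3):
    """Simultaneous cyclic shift of the first r and the remaining coordinates."""
    left, right = word[:r], word[r:]
    return (left[-1],) + left[:-1] + (right[-1],) + right[:-1]

assert all(double_shift(w) in code for w in code)
print(f"{len(code)} codewords, invariant under the double cyclic shift")
```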

  7. Coding for urologic office procedures.

    Science.gov (United States)

    Dowling, Robert A; Painter, Mark

    2013-11-01

    This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Balanced sampling

    NARCIS (Netherlands)

    Brus, D.J.

    2015-01-01

    In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling

  9. Essential idempotents and simplex codes

    Directory of Open Access Journals (Sweden)

    Gladys Chalom

    2017-01-01

    Full Text Available We define essential idempotents in group algebras and use them to prove that every minimal abelian non-cyclic code is a repetition code. Also we use them to prove that every minimal abelian code is equivalent to a minimal cyclic code of the same length. Finally, we show that a binary cyclic code is simplex if and only if it is of length of the form $n=2^k-1$ and is generated by an essential idempotent.

  10. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
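
    A minimal sketch of the idea for a Bernoulli bandit: instead of sampling from an exact posterior, keep a small ensemble of perturbed models, pick one uniformly at random each round, and act greedily with respect to it. The Gaussian perturbation scheme below is a simplification assumed for illustration, not the paper's exact construction.

```python
import numpy as np

# Sketch of ensemble sampling for a Bernoulli bandit (assumed arm probabilities).

rng = np.random.default_rng(2)
true_means = np.array([0.2, 0.5, 0.7])       # assumed arm reward probabilities
n_arms, M, horizon = len(true_means), 10, 2000

counts = np.ones((M, n_arms))                # per-model pseudo-counts
sums = rng.normal(0.5, 0.5, (M, n_arms))     # per-model perturbed reward sums

for _ in range(horizon):
    m = rng.integers(M)                      # sample one ensemble member uniformly
    arm = int(np.argmax(sums[m] / counts[m]))  # act greedily w.r.t. that model
    reward = float(rng.random() < true_means[arm])
    # every model sees the observation, each with its own random perturbation
    sums[:, arm] += reward + rng.normal(0.0, 0.5, M)
    counts[:, arm] += 1.0

print("estimated best arm:", int(np.argmax((sums / counts).mean(axis=0))))
```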

  11. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its...... strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...... correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking...
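
    The feedback-driven rate adaptation can be sketched as follows. For brevity a random systematic linear code replaces the BCH codes of the paper, and the simulation checks the decoded word against the source directly where a real system would use a short checksum to confirm success; block length and correlation are assumed values.

```python
import numpy as np
from itertools import combinations

# Sketch of rate-adaptive syndrome (Slepian-Wolf) coding with feedback: the
# encoder releases one syndrome bit of X at a time until the decoder, which
# knows the correlated side information Y, recovers X.

rng = np.random.default_rng(3)
n = 16
X = rng.integers(0, 2, n)
Y = X ^ (rng.random(n) < 0.08).astype(int)   # highly correlated side information
H = rng.integers(0, 2, (n, n))               # random parity-check rows
syndrome = H @ X % 2

def decode(rows, s, y):
    """Lowest-weight error pattern e consistent with the received syndrome bits."""
    for w in range(6):
        for idx in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(idx)] = 1
            if np.array_equal(rows @ (y ^ e) % 2, s):
                return y ^ e
    return None

for k in range(1, n + 1):                    # rate adaptation loop
    X_hat = decode(H[:k], syndrome[:k], Y)
    if X_hat is not None and np.array_equal(X_hat, X):
        print(f"decoded after {k} of {n} syndrome bits")
        break
else:
    print("error pattern outside the search radius; a stronger code would be needed")
```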

  12. Entanglement-assisted quantum MDS codes constructed from negacyclic codes

    Science.gov (United States)

    Chen, Jianzhang; Huang, Yuanyuan; Feng, Chunhui; Chen, Riqing

    2017-12-01

    Recently, entanglement-assisted quantum codes have been constructed from cyclic codes by some scholars. However, how to determine the number of shared pairs required to construct entanglement-assisted quantum codes is not an easy task. In this paper, we propose a decomposition of the defining set of negacyclic codes. Based on this method, four families of entanglement-assisted quantum codes constructed in this paper satisfy the entanglement-assisted quantum Singleton bound, where the minimum distance satisfies q+1 ≤ d ≤ (n+2)/2. Furthermore, we construct two families of entanglement-assisted quantum codes with maximal entanglement.

  13. Status of the CONTAIN computer code for LWR containment analysis

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Murata, K.K.; Rexroth, P.E.; Clauser, M.J.; Senglaub, M.E.; Sciacca, F.W.; Trebilcock, W.

    1983-01-01

    The current status of the CONTAIN code for LWR safety analysis is reviewed. Three example calculations are discussed as illustrations of the code's capabilities: (1) a demonstration of the spray model in a realistic PWR problem, and a comparison with CONTEMPT results; (2) a comparison of CONTAIN results for a major aerosol experiment against experimental results and predictions of the HAARM aerosol code; and (3) an LWR sample problem, involving a TMLB' sequence for the Zion reactor containment

  14. The SEDA computer code and its utilization for Angra 1

    International Nuclear Information System (INIS)

    Fernandes Filho, T.L.

    1988-11-01

    The implementation of the SEDA 2.0 computer code, developed at the Ezeiza Atomic Centre, Argentina, for the Angra 1 reactor is described. The SEDA code gives an estimate of the radiological consequences of nuclear accidents with release of radioactive materials to the environment. This code is now available for an IBM PC-XT. The computer environment, the files used, the data, the programming structure and the models used are presented. The input data and results for two sample cases are described. (author) [pt

  15. Status of the CONTAIN computer code for LWR containment analysis

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Murata, K.K.; Rexroth, P.E.; Clauser, M.J.; Senglaub, M.E.; Sciacca, F.W.; Trebilcock, W.

    1982-01-01

    The current status of the CONTAIN code for LWR safety analysis is reviewed. Three example calculations are discussed as illustrations of the code's capabilities: (1) a demonstration of the spray model in a realistic PWR problem, and a comparison with CONTEMPT results; (2) a comparison of CONTAIN results for a major aerosol experiment against experimental results and predictions of the HAARM aerosol code; and (3) an LWR sample problem, involving a TMLB' sequence for the Zion reactor containment

  16. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
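
    The core trick, convolution turning into elementwise multiplication in the Fourier domain, can be sketched as follows; the dictionary filters and coefficient maps are random placeholders rather than a learned model.

```python
import numpy as np

# Sketch of the key identity behind the frequency-domain ADMM solver: summing
# M circular convolutions d_m * x_m equals one batched elementwise product in
# the Fourier domain, which is what enables the O(MN log N) cost.

rng = np.random.default_rng(4)
N, M = 256, 8                      # signal length and number of dictionary filters
D = rng.standard_normal((M, 16))   # M short filters (random placeholders)
X = rng.standard_normal((M, N))    # coefficient maps (random placeholders)

def circ_conv(d, x):
    """Naive circular convolution of a short filter d with a length-N signal x."""
    y = np.zeros(len(x))
    for i, di in enumerate(d):
        y += di * np.roll(x, i)
    return y

direct = sum(circ_conv(D[m], X[m]) for m in range(M))   # time-domain sum

Df = np.fft.fft(D, n=N, axis=1)                          # zero-padded filter spectra
Xf = np.fft.fft(X, axis=1)
fast = np.real(np.fft.ifft((Df * Xf).sum(axis=0)))       # one batched frequency-domain pass

print(np.allclose(direct, fast))                         # True: the two computations agree
```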

  17. Coded Network Function Virtualization

    DEFF Research Database (Denmark)

    Al-Shuwaili, A.; Simone, O.; Kliewer, J.

    2016-01-01

    Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. In contrast, this letter proposes to leverage channel coding in order to enhance the robustness of NFV to hardware failure. The proposed approach targets the network function of uplink channel decoding, and builds on the algebraic structure of the encoded data frames in order to perform in-network coding on the signals to be processed at different servers...

  18. The NIMROD Code

    Science.gov (United States)

    Schnack, D. D.; Glasser, A. H.

    1996-11-01

    NIMROD is a new code system that is being developed for the analysis of modern fusion experiments. It is being designed from the beginning to make the maximum use of massively parallel computer architectures and computer graphics. The NIMROD physics kernel solves the three-dimensional, time-dependent two-fluid equations with neo-classical effects in toroidal geometry of arbitrary poloidal cross section. The NIMROD system also includes a pre-processor, a grid generator, and a post processor. User interaction with NIMROD is facilitated by a modern graphical user interface (GUI). The NIMROD project is using Quality Function Deployment (QFD) team management techniques to minimize re-engineering and reduce code development time. This paper gives an overview of the NIMROD project. Operation of the GUI is demonstrated, and the first results from the physics kernel are given.

  19. Computer code FIT

    International Nuclear Information System (INIS)

    Rohmann, D.; Koehler, T.

    1987-02-01

    This is a description of the computer code FIT, written in FORTRAN-77 for a PDP 11/34. FIT is an interactive program to deduce the position, width and intensity of lines in X-ray spectra (max. length of 4K channels). The lines (max. 30 lines per fit) may have Gauss or Voigt profiles, as well as exponential tails. Spectrum and fit can be displayed on a Tektronix terminal. (orig.) [de
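
    For illustration only (not the FIT code itself), the same kind of line analysis can be sketched by fitting a single Gaussian line plus a flat background to a synthetic channel spectrum; all line parameters below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Deduce position, width and intensity of one spectral line from a synthetic
# 4096-channel spectrum by least-squares fitting a Gaussian plus flat background.

def gauss(x, amplitude, centre, sigma, background):
    return amplitude * np.exp(-0.5 * ((x - centre) / sigma) ** 2) + background

rng = np.random.default_rng(7)
channels = np.arange(4096, dtype=float)
spectrum = gauss(channels, 500.0, 1024.0, 6.0, 20.0) + rng.normal(0.0, 5.0, channels.size)

p0 = [spectrum.max() - spectrum.min(),      # rough peak height
      channels[np.argmax(spectrum)],        # rough peak position
      5.0,                                  # rough width guess
      np.median(spectrum)]                  # rough background level
popt, _ = curve_fit(gauss, channels, spectrum, p0=p0)
amplitude, centre, sigma, background = popt
print(f"position {centre:.1f} ch, sigma {abs(sigma):.2f} ch, height {amplitude:.0f}")
```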

  20. Discrete Sparse Coding.

    Science.gov (United States)

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  1. Code of Practice

    International Nuclear Information System (INIS)

    Doyle, Colin; Hone, Christopher; Nowlan, N.V.

    1984-05-01

    This Code of Practice introduces accepted safety procedures associated with the use of alpha, beta, gamma and X-radiation in secondary schools (pupils aged 12 to 18) in Ireland, and summarises good practice and procedures as they apply to radiation protection. Typical dose rates at various distances from sealed sources are quoted, and simplified equations are used to demonstrate dose and shielding calculations. The regulatory aspects of radiation protection are outlined, and references to statutory documents are given
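
    A worked example in the spirit of the simplified dose and shielding equations; all numerical values below are assumed for illustration and are not taken from the Code itself.

```python
# Inverse-square fall-off with distance, followed by halving per half-value
# layer (HVL) of shielding. Every number here is an assumed example value.

dose_rate_1m = 5.0          # assumed dose rate at 1 m, in microsievert per hour
distance_m = 2.0            # assumed source-to-person distance
hvl_mm = 12.0               # assumed half-value layer of the shield material
shield_mm = 24.0            # assumed shield thickness (two HVLs)

by_distance = dose_rate_1m * (1.0 / distance_m) ** 2
by_shielding = by_distance * 0.5 ** (shield_mm / hvl_mm)

print(f"{by_distance:.2f} uSv/h at {distance_m} m, "
      f"{by_shielding:.2f} uSv/h behind {shield_mm} mm of shielding")
```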

  2. Tokamak simulation code manual

    International Nuclear Information System (INIS)

    Chung, Moon Kyoo; Oh, Byung Hoon; Hong, Bong Keun; Lee, Kwang Won

    1995-01-01

    The method of using TSC (Tokamak Simulation Code), developed by the Princeton Plasma Physics Laboratory, is illustrated. In the KT-2 tokamak, time-dependent simulation of the axisymmetric toroidal plasma and vertical stability have to be taken into account in the design phase using TSC. In this report the physical modelling of TSC is described and examples of its application at JAERI and SERI are illustrated, which will be useful when TSC is installed on the KAERI computer system. (Author) 15 refs., 6 figs., 3 tabs

  3. Status of MARS Code

    Energy Technology Data Exchange (ETDEWEB)

    N.V. Mokhov

    2003-04-09

    Status and recent developments of the MARS 14 Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electronvolt up to 100 TeV are described. These include physics models both in the strong and electromagnetic interaction sectors, variance reduction techniques, residual dose, geometry, tracking, histogramming, the MAD-MARS Beam Line Builder and a graphical user interface.

  4. Codes of Good Governance

    DEFF Research Database (Denmark)

    Beck Jørgensen, Torben; Sørensen, Ditte-Lene

    2013-01-01

    Good governance is a broad concept used by many international organizations to spell out how states or countries should be governed. Definitions vary, but there is a clear core of common public values, such as transparency, accountability, effectiveness, and the rule of law. It is quite likely......, transparency, neutrality, impartiality, effectiveness, accountability, and legality. The normative context of public administration, as expressed in codes, seems to ignore the New Public Management and Reinventing Government reform movements....

  5. RF cavity evaluation with the code SUPERFISH

    International Nuclear Information System (INIS)

    Hori, T.; Nakanishi, T.; Ueda, N.

    1982-01-01

    The computer code SUPERFISH calculates axisymmetric rf fields and is most applicable to re-entrant cavities of an Alvarez linac. Some sample results are shown for the first Alvarez cavities in the NUMATRON project. On the other hand, the code can also be effectively applied to TE modes excited in an RFQ linac when the cavity is approximated as being positioned at an infinite distance from the symmetry axis. The evaluation was made for several RFQ cavities, models I and II and a test linac named LITL, and useful results for the resonator design were obtained. (author)

  6. Testing efficiency transfer codes for equivalence

    International Nuclear Information System (INIS)

    Vidmar, T.; Celik, N.; Cornejo Diaz, N.; Dlabac, A.; Ewa, I.O.B.; Carrazana Gonzalez, J.A.; Hult, M.; Jovanovic, S.; Lepy, M.-C.; Mihaljevic, N.; Sima, O.; Tzika, F.; Jurado Vargas, M.; Vasilopoulou, T.; Vidmar, G.

    2010-01-01

    Four general Monte Carlo codes (GEANT3, PENELOPE, MCNP and EGS4) and five dedicated packages for efficiency determination in gamma-ray spectrometry (ANGLE, DETEFF, GESPECOR, ETNA and EFFTRAN) were checked for equivalence by applying them to the calculation of efficiency transfer (ET) factors for a set of well-defined sample parameters, detector parameters and energies typically encountered in environmental radioactivity measurements. The differences between the results of the different codes never exceeded a few percent and were lower than 2% in the majority of cases.

  7. Orthopedics coding and funding.

    Science.gov (United States)

    Baron, S; Duclos, C; Thoreux, P

    2014-02-01

    The French tarification à l'activité (T2A) prospective payment system is a financial system in which a health-care institution's resources are based on performed activity. Activity is described via the PMSI medical information system (programme de médicalisation du système d'information). The PMSI classifies hospital cases by clinical and economic categories known as diagnosis-related groups (DRG), each with an associated price tag. Coding a hospital case involves giving as realistic a description as possible so as to categorize it in the right DRG and thus ensure appropriate payment. For this, it is essential to understand what determines the pricing of inpatient stay: namely, the code for the surgical procedure, the patient's principal diagnosis (reason for admission), codes for comorbidities (everything that adds to management burden), and the management of the length of inpatient stay. The PMSI is used to analyze the institution's activity and dynamism: change on previous year, relation to target, and comparison with competing institutions based on indicators such as the mean length of stay performance indicator (MLS PI). The T2A system improves overall care efficiency. Quality of care, however, is not presently taken account of in the payment made to the institution, as there are no indicators for this; work needs to be done on this topic. Copyright © 2014. Published by Elsevier Masson SAS.

  8. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsic-based vectorisation with compiler-generated auto-vectorisation to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learnt. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  9. MELCOR computer code manuals

    Energy Technology Data Exchange (ETDEWEB)

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L. [Sandia National Labs., Albuquerque, NM (United States); Hodge, S.A.; Hyman, C.R.; Sanders, R.L. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  10. MELCOR computer code manuals

    International Nuclear Information System (INIS)

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package

  11. Laser sampling

    International Nuclear Information System (INIS)

    Gorbatenko, A A; Revina, E I

    2015-01-01

    The review is devoted to the major advances in laser sampling. The advantages and drawbacks of the technique are considered. Specific features of combinations of laser sampling with various instrumental analytical methods, primarily inductively coupled plasma mass spectrometry, are discussed. Examples of practical implementation of hybrid methods involving laser sampling as well as corresponding analytical characteristics are presented. The bibliography includes 78 references

  12. Quality Improvement of MARS Code and Establishment of Code Coupling

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Jeong, Jae Jun; Kim, Kyung Doo

    2010-04-01

    The improvement of MARS code quality and its coupling with the regulatory auditing code have been accomplished for the establishment of a self-reliable-technology-based regulatory auditing system. The unified auditing system code was also realized by implementing the CANDU-specific models and correlations. As a part of the quality assurance activities, various QA reports were published through the code assessments. The code manuals were updated, and a new manual describing the new models and correlations was published. The code coupling methods were verified through the exercise of plant application. The education-training seminar and technology transfer were performed for the code users. The developed MARS-KS is utilized as a reliable auditing tool for resolving safety issues and for other regulatory calculations. The code can be utilized as a base technology for GEN IV reactor applications

  13. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environment and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that cTN code can provide a better packet loss protection performance with lower computation complexity than tTN code.

  14. Random linear codes in steganography

    Directory of Open Access Journals (Sweden)

    Kamil Kaczyński

    2016-12-01

    Full Text Available Syndrome coding using linear codes is a technique that allows improvement in the parameters of steganographic algorithms. The use of random linear codes gives great flexibility in choosing the parameters of the linear code. In parallel, it offers easy generation of the parity-check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear code [8, 2] was used as a base for the algorithm modification. The implementation of the proposed algorithm, along with a practical evaluation of the algorithm's parameters based on the test images, was carried out. Keywords: steganography, random linear codes, RLC, LSB
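
    A sketch of syndrome (matrix) embedding with a random linear [8, 2] code follows; the systematic parity-check matrix, the cover pixel values and the message bits are generated at random for illustration, and the minimum-weight LSB flip is found by brute force, which is feasible at this block length.

```python
import numpy as np
from itertools import combinations

# Syndrome (matrix) embedding sketch: hide n-k = 6 message bits in the LSBs of
# n = 8 cover pixels by flipping as few LSBs as possible so that H * lsb = message.

rng = np.random.default_rng(5)
n, k = 8, 2
# systematic parity-check matrix H = [I | A] so that every syndrome is reachable
H = np.hstack([np.eye(n - k, dtype=int), rng.integers(0, 2, (n - k, k))])
cover = rng.integers(0, 256, n)               # made-up cover pixel values
message = rng.integers(0, 2, n - k)           # made-up message bits

lsb = cover % 2
target = (message - H @ lsb) % 2              # syndrome the LSB flips must produce

def flips(indices):
    e = np.zeros(n, dtype=int)
    e[list(indices)] = 1
    return e

best = None
for w in range(n + 1):                        # search flip patterns by weight
    for idx in combinations(range(n), w):
        if np.array_equal(H @ flips(idx) % 2, target):
            best = flips(idx)
            break
    if best is not None:
        break

stego = cover ^ best                          # flip the selected LSBs
assert np.array_equal(H @ (stego % 2) % 2, message)
print(f"embedded {n - k} bits while changing {int(best.sum())} of {n} pixels")
```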

  15. Containment Code Validation Matrix

    International Nuclear Information System (INIS)

    Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah

    2014-01-01

    The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal hydraulic code validation (NEA/CSNI/R(1993)14) and In-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of a PWR, BWR, CANDU and VVER reactors. It also provides an overview of the ex-vessel corium retention (core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis is provided for each test. Along with a test description

  16. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  17. TASS code topical report. V.1 TASS code technical manual

    International Nuclear Information System (INIS)

    Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.

    1997-02-01

    The TASS 1.0 code has been developed at KAERI for the initial and reload non-LOCA safety analysis of the PWRs operating as well as under construction in Korea. The TASS code will replace the various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through keyboard operation. A semi-modular configuration used in developing the TASS code enables the user to easily implement new models. The TASS code has been programmed in FORTRAN77, which makes it easy to install and port to different computer environments. The TASS code can be utilized for steady-state simulation as well as non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). Malfunctions of the control systems and components, operator actions, and the transients caused by these malfunctions can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models, including the reactor thermal-hydraulic, reactor core and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and the TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for licensing non-LOCA safety analysis for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs

  18. Construction of new quantum MDS codes derived from constacyclic codes

    Science.gov (United States)

    Taneja, Divya; Gupta, Manish; Narula, Rajesh; Bhullar, Jaskaran

    Obtaining quantum maximum distance separable (MDS) codes from dual-containing classical constacyclic codes using the Hermitian construction has paved a path to undertake the challenges related to such constructions. Using the same technique, some new parameters of quantum MDS codes have been constructed here. One set of parameters obtained in this paper achieves a much larger distance than earlier work. The remaining constructed parameters of quantum MDS codes have large minimum distance and had not been explored previously.

  19. The impact of international codes of conduct on employment ...

    African Journals Online (AJOL)

    The study examined how international codes of conduct address employment conditions and gender issues in the Chinese flower industry. A sample of 20 companies was purposively selected and 200 workers from these companies were interviewed. The adoption of international codes did not improve workers conditions ...

  20. Multiple sample, radioactive particle counting apparatus

    International Nuclear Information System (INIS)

    Reddy, R.R.V.; Kelso, D.M.

    1978-01-01

    An apparatus is described for determining the respective radioactive particle sample count being emitted from radioactive-particle-containing samples. It includes means for modulating the information on the radioactive particles being emitted from the samples, coded detecting means for sequentially detecting different respective coded combinations of the radioactive particles emitted from more than one but less than all of the samples, and means for processing the modulated information to derive the sample count for each sample. It includes a single light-emitting crystal next to a number of samples and an encoder belt sequentially movable between the crystal and the samples. The encoder belt has a coded array of apertures to provide corresponding modulated light pulses from the crystal, and a photomultiplier tube converts the modulated light pulses to decodable electrical signals for deriving the respective sample count
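
    The decoding step can be illustrated with a toy linear model: each belt position exposes the detector to a coded subset of the samples (more than one but fewer than all), and the individual sample rates are recovered from the combined measurements. The code matrix and count rates below are invented placeholders, not the patented aperture pattern.

```python
import numpy as np

# Toy model of coded (multiplexed) counting: observed count rates are linear
# combinations of the per-sample rates, decoded by solving the linear system.

rng = np.random.default_rng(6)
true_rates = np.array([120.0, 40.0, 300.0, 75.0])   # assumed counts/s from 4 samples
A = np.array([[1, 1, 0, 0],                         # assumed aperture pattern per belt position
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 1, 0]])

measured = A @ true_rates + rng.normal(0, 2.0, A.shape[0])   # counting noise
decoded, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(np.round(decoded, 1))                                  # close to true_rates
```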

  1. Combinatorial neural codes from a mathematical coding theory perspective.

    Science.gov (United States)

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.

  2. Convolutional coding techniques for data protection

    Science.gov (United States)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  3. Improvement of group collapsing in TRANSX code

    International Nuclear Information System (INIS)

    Jeong, Hyun Tae; Kim, Young Cheol; Kim, Young In; Kim, Young Kyun

    1996-07-01

    The cross section generating and processing computer code TRANSX version 2.15 in the K-CORE system, being developed by the KAERI LMR core design technology development team, produces various cross section input files appropriate for the flux calculation options from the cross section library MATXS. In this report, the group collapsing function of TRANSX has been improved to utilize the zone-averaged flux file RZFLUX, written in double precision, as the flux weighting function. As a result, an iterative calculation system using double-precision RZFLUX, consisting of the cross section data library file MATXS, the effective cross section producing and processing code TRANSX, and the transport theory calculation code TWODANT, has been set up and verified through a sample model calculation. 4 refs. (Author)
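
    The flux-weighted collapse that RZFLUX enables can be written compactly as sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g) over the fine groups g in each broad group G. The sketch below uses invented fine-group data to illustrate the operation, not TRANSX itself.

```python
import numpy as np

# Minimal sketch of flux-weighted group collapsing: condense fine-group cross
# sections into broad groups using zone-averaged flux weights. All data invented.

fine_sigma = np.array([1.20, 1.05, 0.90, 0.70, 0.55, 0.40])   # barns, 6 fine groups
zone_flux = np.array([0.05, 0.10, 0.25, 0.30, 0.20, 0.10])    # zone-averaged flux weights
broad_map = [(0, 3), (3, 6)]                                  # fine-group ranges of 2 broad groups

collapsed = [np.sum(fine_sigma[a:b] * zone_flux[a:b]) / np.sum(zone_flux[a:b])
             for a, b in broad_map]
print([round(s, 4) for s in collapsed])
```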

  4. Documentation and verification of the SHAFT code

    International Nuclear Information System (INIS)

    St John, C.M.

    1991-12-01

    The SHAFT code incorporates equations to compute stresses in a shaft liner when the rock through which a shaft passes is subject to known three-dimensional states of stress or strain. The deformation modes considered are hoop deformation, axial deformation, and shear on a plane normal to the shaft axis. Interaction between the liner and the soil and rock is considered, and it is assumed that the liner is in place before loading is applied. This code is intended to be used interactively but creates a permanent record complete with necessary quality assurance information. The code has been carefully verified for the case of generalized plane strain, in which an arbitrary axial strain can be defined. It may also be used for plane stress analysis. Output is given in the form of stresses at selected sample points in the liner and the rock and a simple graphical representation of the distribution of stress through the liner. 12 figs., 13 tabs

  5. SPQR: a Monte Carlo reactor kinetics code

    International Nuclear Information System (INIS)

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations

  6. High Energy Transport Code HETC

    International Nuclear Information System (INIS)

    Gabriel, T.A.

    1985-09-01

    The physics contained in the High Energy Transport Code (HETC), in particular the collision models, are discussed. An application using HETC as part of the CALOR code system is also given. 19 refs., 5 figs., 3 tabs

  7. Code stroke in Asturias.

    Science.gov (United States)

    Benavente, L; Villanueva, M J; Vega, P; Casado, I; Vidal, J A; Castaño, B; Amorín, M; de la Vega, V; Santos, H; Trigo, A; Gómez, M B; Larrosa, D; Temprano, T; González, M; Murias, E; Calleja, S

    2016-04-01

    Intravenous thrombolysis with alteplase is an effective treatment for ischaemic stroke when applied during the first 4.5 hours, but less than 15% of patients have access to this technique. Mechanical thrombectomy is more frequently able to recanalise proximal occlusions in large vessels, but the infrastructure it requires makes it even less available. We describe the implementation of code stroke in Asturias, as well as the process of adapting various existing resources for urgent stroke care in the region. By considering these resources, and the demographic and geographic circumstances of our region, we examine ways of reorganising the code stroke protocol that would optimise treatment times and provide the most appropriate treatment for each patient. We distributed the 8 health districts in Asturias so as to permit referral of candidates for reperfusion therapies to either of the 2 hospitals with 24-hour stroke units and on-call neurologists and providing IV fibrinolysis. Hospitals were assigned according to proximity and stroke severity; the most severe cases were immediately referred to the hospital with on-call interventional neurology care. Patient triage was provided by pre-hospital emergency services according to the NIHSS score. Modifications to code stroke in Asturias have allowed us to apply reperfusion therapies with good results, while emphasising equitable care and managing the severity-time ratio to offer the best and safest treatment for each patient as soon as possible. Copyright © 2015 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  8. Decoding Xing-Ling codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2002-01-01

    This paper describes an efficient decoding method for a recent construction of good linear codes as well as an extension to the construction. Furthermore, asymptotic properties and list decoding of the codes are discussed.

  9. WWER reactor physics code applications

    International Nuclear Information System (INIS)

    Gado, J.; Kereszturi, A.; Gacs, A.; Telbisz, M.

    1994-01-01

    The coupled steady-state reactor physics and thermohydraulic code system KARATE has been developed and applied for WWER-1000 and WWER-440 operational calculations. The 3 D coupled kinetic code KIKO3D has been developed and validated for WWER-440 accident analysis applications. The coupled kinetic code SMARTA developed by VTT Helsinki has been applied for WWER-440 accident analysis. The paper gives a summary of the experience in code development and application. (authors). 10 refs., 2 tabs., 5 figs

  10. The path of code linting

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Join the path of code linting and discover how it can help you reach higher levels of programming enlightenment. Today we will cover how to embrace code linters to offload the cognitive strain of preserving style standards in your code base as well as avoiding error-prone constructs. Additionally, I will show you the journey ahead for integrating several code linters into the programming tools you already use with very little effort.

  11. The CORSYS neutronics code system

    International Nuclear Information System (INIS)

    Caner, M.; Krumbein, A.D.; Saphier, D.; Shapira, M.

    1994-01-01

    The purpose of this work is to assemble a code package for LWR core physics including coupled neutronics, burnup and thermal hydraulics. The CORSYS system is built around the cell code WIMS (for group microscopic cross section calculations) and the 3-dimensional diffusion code CITATION (for burnup and fuel management). We are implementing such a system on an IBM RS-6000 workstation. The code was tested with a simplified model of the Zion Unit 2 PWR. (authors). 6 refs., 8 figs., 1 tabs

  12. Bar codes for nuclear safeguards

    International Nuclear Information System (INIS)

    Keswani, A.N.; Bieber, A.M. Jr.

    1983-01-01

    Bar codes similar to those used in supermarkets can be used to reduce the effort and cost of collecting nuclear materials accountability data. A wide range of equipment is now commercially available for printing and reading bar-coded information. Several examples of each of the major types of commercially available equipment are given, and considerations are discussed both for planning systems using bar codes and for choosing suitable bar code equipment

  13. Bar codes for nuclear safeguards

    International Nuclear Information System (INIS)

    Keswani, A.N.; Bieber, A.M.

    1983-01-01

    Bar codes similar to those used in supermarkets can be used to reduce the effort and cost of collecting nuclear materials accountability data. A wide range of equipment is now commercially available for printing and reading bar-coded information. Several examples of each of the major types of commercially-available equipment are given, and considerations are discussed both for planning systems using bar codes and for choosing suitable bar code equipment

  14. Quick response codes in Orthodontics

    Directory of Open Access Journals (Sweden)

    Moidin Shakil

    2015-01-01

    Full Text Available Quick response (QR) codes are two-dimensional barcodes which encode a large amount of information. QR codes in Orthodontics are an innovative approach in which patient details, radiographic interpretation, and treatment plan can be encoded. Implementing QR codes in Orthodontics will save time, reduce paperwork, and minimize manual effort in the storage and retrieval of patient information during subsequent stages of treatment.

  15. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate...... (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  16. Cinder begin creative coding

    CERN Document Server

    Rijnieks, Krisjanis

    2013-01-01

    Presented in an easy to follow, tutorial-style format, this book will lead you step-by-step through the multi-faceted uses of Cinder. ""Cinder: Begin Creative Coding"" is for people who already have experience in programming. It can serve as a transition from a previous background in Processing, Java in general, JavaScript, openFrameworks, C++ in general or ActionScript to the framework covered in this book, namely Cinder. If you like quick and easy to follow tutorials that will let you see progress in less than an hour - this book is for you. If you are searching for a book that will explain al

  17. The FLIC conversion codes

    International Nuclear Information System (INIS)

    Basher, J.C.

    1965-05-01

    This report describes the FORTRAN programmes, FLIC 1 and FLIC 2. These programmes convert programmes coded in one dialect of FORTRAN to another dialect of the same language. FLIC 1 is a general pattern recognition and replacement programme whereas FLIC 2 contains extensions directed towards the conversion of FORTRAN II and S2 programmes to EGTRAN 1 - the dialect now in use on the Winfrith KDF9. FII or S2 statements are replaced where possible by their E1 equivalents; other statements which may need changing are flagged. (author)
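
    The original FLIC programmes were written in FORTRAN; the Python sketch below only illustrates the general pattern-recognition-and-replacement idea described above. The dialect rules are invented placeholders, not the actual FORTRAN II/S2-to-EGTRAN 1 mappings.

```python
import re

# Toy dialect-conversion pass: rewrite statements with a known mapping and
# flag those that need manual conversion (placeholder rules, not FLIC's).
RULES = [
    (re.compile(r"\bACCEPT\b"), "READ"),
    (re.compile(r"\bTYPE\b"), "PRINT"),
]
NEEDS_REVIEW = re.compile(r"\bASSIGN\b")

def convert(statement: str) -> str:
    for pattern, replacement in RULES:
        if pattern.search(statement):
            return pattern.sub(replacement, statement, count=1)
    if NEEDS_REVIEW.search(statement):
        return statement + "   ! FLAGGED: convert by hand"
    return statement

for stmt in ["      ACCEPT 10, X", "      ASSIGN 20 TO K", "      Y = X + 1"]:
    print(convert(stmt))
```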

  18. SPRAY code user's report

    International Nuclear Information System (INIS)

    Shire, P.R.

    1977-03-01

    The SPRAY computer code has been developed to model the effects of postulated sodium spray release from LMFBR piping within containment chambers. The calculation method utilizes gas convection, heat transfer and droplet combustion theory to calculate the pressure and temperature effects within the enclosure. The applicable range is 0-21 mol percent oxygen and .02-.30 inch droplets with or without humidity. Droplet motion and large sodium surface area combine to produce rapid heat release and pressure rise within the enclosed volume

  19. The FLIC conversion codes

    Energy Technology Data Exchange (ETDEWEB)

    Basher, J C [General Reactor Physics Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1965-05-15

    This report describes the FORTRAN programmes, FLIC 1 and FLIC 2. These programmes convert programmes coded in one dialect of FORTRAN to another dialect of the same language. FLIC 1 is a general pattern recognition and replacement programme whereas FLIC 2 contains extensions directed towards the conversion of FORTRAN II and S2 programmes to EGTRAN 1 - the dialect now in use on the Winfrith KDF9. FII or S2 statements are replaced where possible by their E1 equivalents; other statements which may need changing are flagged. (author)

  20. Code Generation with Templates

    CERN Document Server

    Arnoldus, Jeroen; Serebrenik, A

    2012-01-01

    Templates are used to generate all kinds of text, including computer code. The last decade, the use of templates gained a lot of popularity due to the increase of dynamic web applications. Templates are a tool for programmers, and implementations of template engines are most times based on practical experience rather than based on a theoretical background. This book reveals the mathematical background of templates and shows interesting findings for improving the practical use of templates. First, a framework to determine the necessary computational power for the template metalanguage is presen

  1. Soil sampling

    International Nuclear Information System (INIS)

    Fortunati, G.U.; Banfi, C.; Pasturenzi, M.

    1994-01-01

    This study attempts to survey the problems associated with techniques and strategies of soil sampling. Keeping in mind the well defined objectives of a sampling campaign, the aim was to highlight the most important aspect of representativeness of samples as a function of the available resources. Particular emphasis was given to the techniques and particularly to a description of the many types of samplers which are in use. The procedures and techniques employed during the investigations following the Seveso accident are described. (orig.)

  2. Order functions and evaluation codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pellikaan, Ruud; van Lint, Jack

    1997-01-01

    Based on the notion of an order function we construct and determine the parameters of a class of error-correcting evaluation codes. This class includes the one-point algebraic geometry codes as well as the generalized Reed-Muller codes and the parameters are determined without using the heavy...... machinery of algebraic geometry....

  3. Direct-semidirect (DSD) codes

    International Nuclear Information System (INIS)

    Cvelbar, F.

    1999-01-01

    Recent codes for direct-semidirect (DSD) model calculations in the form of answers to a detailed questionnaire are reviewed. These codes include those embodying the classical DSD approach covering only the transitions to the bound states (RAF, HIKARI, and those of the Bologna group), as well as the code CUPIDO++ that also treats transitions to unbound states. (author)

  4. Dual Coding, Reasoning and Fallacies.

    Science.gov (United States)

    Hample, Dale

    1982-01-01

    Develops the theory that a fallacy is not a comparison of a rhetorical text to a set of definitions but a comparison of one person's cognition with another's. Reviews Paivio's dual coding theory, relates nonverbal coding to reasoning processes, and generates a limited fallacy theory based on dual coding theory. (PD)

  5. Strongly-MDS convolutional codes

    NARCIS (Netherlands)

    Gluesing-Luerssen, H; Rosenthal, J; Smarandache, R

    Maximum-distance separable (MDS) convolutional codes have the property that their free distance is maximal among all codes of the same rate and the same degree. In this paper, a class of MDS convolutional codes is introduced whose column distances reach the generalized Singleton bound at the
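
    For reference, the generalized Singleton bound mentioned above is usually stated as follows for an (n, k, δ) convolutional code; the statement is quoted from the general MDS convolutional code literature rather than from this particular paper.

```latex
% Generalized Singleton bound on the free distance of an (n, k, \delta)
% convolutional code; MDS convolutional codes meet it with equality.
d_{\mathrm{free}} \le (n-k)\left(\left\lfloor \tfrac{\delta}{k} \right\rfloor + 1\right) + \delta + 1
```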

  6. Lattice polytopes in coding theory

    Directory of Open Access Journals (Sweden)

    Ivan Soprunov

    2015-05-01

    Full Text Available In this paper we discuss combinatorial questions about lattice polytopes motivated by recent results on minimum distance estimation for toric codes. We also include a new inductive bound for the minimum distance of generalized toric codes. As an application, we give new formulas for the minimum distance of generalized toric codes for special lattice point configurations.

  7. Geochemical computer codes. A review

    International Nuclear Information System (INIS)

    Andersson, K.

    1987-01-01

    In this report a review of available codes is performed and some code intercomparisons are also discussed. The number of codes treating natural waters (groundwater, lake water, sea water) is large. Most geochemical computer codes treat equilibrium conditions, although some codes with kinetic capability are available. A geochemical equilibrium model consists of a computer code, solving a set of equations by some numerical method and a data base, consisting of thermodynamic data required for the calculations. There are some codes which treat coupled geochemical and transport modeling. Some of these codes solve the equilibrium and transport equations simultaneously while others solve the equations separately from each other. The coupled codes require a large computer capacity and have thus as yet limited use. Three code intercomparisons have been found in literature. It may be concluded that there are many codes available for geochemical calculations but most of them require a user that is quite familiar with the code. The user also has to know the geochemical system in order to judge the reliability of the results. A high quality data base is necessary to obtain a reliable result. The best results may be expected for the major species of natural waters. For more complicated problems, including trace elements, precipitation/dissolution, adsorption, etc., the results seem to be less reliable. (With 44 refs.) (author)

  8. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Chanin, D.I.; Sprung, J.L.; Ritchie, L.T.; Jow, Hong-Nian

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previous CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. This document, Volume 1, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems

  9. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA)); Sprung, J.L.; Ritchie, L.T.; Jow, Hong-Nian (Sandia National Labs., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previous CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. This document, Volume 1, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems.

  10. Language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik

    1998-01-01

    This article has two aims: [1] to present a revised version of the sampling method that was originally proposed in 1993 by Rijkhoff, Bakker, Hengeveld and Kahrel, and [2] to discuss a number of other approaches to language sampling in the light of our own method. We will also demonstrate how our...... sampling method is used with different genetic classifications (Voegelin & Voegelin 1977, Ruhlen 1987, Grimes ed. 1997) and argue that —on the whole— our sampling technique compares favourably with other methods, especially in the case of exploratory research....

  11. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Lei Ye

    2009-01-01

    Full Text Available This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
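
    The heart of such a subband-adaptive scheme is a per-subband choice of modulation and code rate driven by channel quality. The Python sketch below illustrates that selection step; the SNR thresholds and the modulation-to-rate mapping are invented placeholders, not values from the paper.

```python
# Pick a modulation and turbo-code rate per subband from its estimated SNR.
MODES = [  # (minimum SNR in dB, modulation, bits/symbol, turbo code rate)
    (22.0, "64QAM", 6, 1 / 2),
    (16.0, "16QAM", 4, 1 / 2),
    (11.0, "8AMPM", 3, 1 / 2),
    (6.0,  "QPSK",  2, 1 / 3),
    (1.0,  "BPSK",  1, 1 / 3),
]

def select_mode(snr_db: float):
    for threshold, modulation, bits, rate in MODES:
        if snr_db >= threshold:
            return modulation, bits, rate
    return None  # subband too poor: leave it unused

subband_snrs = [24.3, 9.8, 3.1, -2.0]
for index, snr in enumerate(subband_snrs):
    print(f"subband {index}: {select_mode(snr)}")
```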

  12. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we firstly study construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes are determined to be much larger than the result given according to Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each different code length. Thus, families of new QECCs are newly obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterpart and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  13. Quantum Codes From Cyclic Codes Over The Ring R 2

    International Nuclear Information System (INIS)

    Altinel, Alev; Güzeltepe, Murat

    2016-01-01

    Let R_2 denote the ring F_2 + μF_2 + υF_2 + μυF_2 + wF_2 + μwF_2 + υwF_2 + μυwF_2. In this study, we construct quantum codes from cyclic codes over the ring R_2, for arbitrary length n, with the restrictions μ² = 0, υ² = 0, w² = 0, μυ = υμ, μw = wμ, υw = wυ and μ(υw) = (μυ)w. Also, we give a necessary and sufficient condition for a cyclic code over R_2 to contain its dual. As a final point, we obtain the parameters of quantum error-correcting codes from cyclic codes over R_2 and we give an example of quantum error-correcting codes from cyclic codes over R_2. (paper)

  14. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Burr Alister

    2009-01-01

    Full Text Available Abstract This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.

  15. Converter of a continuous code into the Grey code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; Trubnikov, V.R.

    1979-01-01

    Described is a converter of a continuous code into the Grey code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of spectrometer differential nonlinearity to +0.7% over 98% of the measured band. The converter exploits the regular alternation of ones and zeroes in each digit of the Grey code as the pulse count of the continuous code changes continuously. The converter is built from 155-series logic elements; the continuous-code pulse rate at the converter input is 25 MHz
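
    For readers unfamiliar with the Grey (Gray) code, its defining property, exploited by the converter above, is that consecutive values differ in exactly one bit. A minimal Python sketch of the conversion in both directions:

```python
def binary_to_gray(n: int) -> int:
    """Reflected binary (Gray) code of a non-negative integer."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Inverse mapping: fold the shifted bits back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Consecutive codewords differ in a single bit, which is what suppresses the
# digital contribution to differential nonlinearity.
for value in range(8):
    print(f"{value:03b} -> {binary_to_gray(value):03b}")
assert all(gray_to_binary(binary_to_gray(v)) == v for v in range(4096))
```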

  16. Independent random sampling methods

    CERN Document Server

    Martino, Luca; Míguez, Joaquín

    2018-01-01

    This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...
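
    As a flavour of the general-purpose techniques such a monograph covers, the sketch below implements plain rejection sampling from a bounded, unnormalised target density on [0, 1]; the target function is an arbitrary example chosen here, not one taken from the book.

```python
import math
import random

def target(x: float) -> float:
    # Unnormalised density on [0, 1]; any bounded non-negative function works.
    return math.sin(math.pi * x) ** 2

def rejection_sample(n: int, bound: float = 1.0) -> list:
    samples = []
    while len(samples) < n:
        x = random.uniform(0.0, 1.0)        # proposal draw
        u = random.uniform(0.0, bound)      # vertical coordinate
        if u <= target(x):                  # accept with probability target(x)/bound
            samples.append(x)
    return samples

draws = rejection_sample(10_000)
print(sum(draws) / len(draws))  # close to 0.5 by symmetry of the target
```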

  17. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  18. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample or a sample where preparation may not be any more complex than dissolution of the sample in a given solvent. The last process alone can remove insoluble materials, which is especially helpful with the samples in complex matrices if other interactions do not affect extraction. Here, it is very likely a large number of components will not dissolve and are, therefore, eliminated by a simple filtration process. In most cases, the process of sample preparation is not as simple as dissolution of the component of interest. At times, enrichment is necessary, that is, the component of interest is present in a very large volume or mass of material. It needs to be concentrated in some manner so a small volume of the concentrated or enriched sample can be injected into HPLC. 88 refs

  19. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  20. Assessment of the computer code COBRA/CFTL

    International Nuclear Information System (INIS)

    Baxi, C.B.; Burhop, C.J.

    1981-07-01

    The COBRA/CFTL code has been developed by Oak Ridge National Laboratory (ORNL) for thermal-hydraulic analysis of simulated gas-cooled fast breeder reactor (GCFR) core assemblies to be tested in the core flow test loop (CFTL). The COBRA/CFTL code was obtained by modifying the General Atomic code COBRA*GCFR. This report discusses these modifications, compares the two code results for three cases which represent conditions from fully rough turbulent flow to laminar flow. Case 1 represented fully rough turbulent flow in the bundle. Cases 2 and 3 represented laminar and transition flow regimes. The required input for the COBRA/CFTL code, a sample problem input/output and the code listing are included in the Appendices

  1. Uncertainty and sensitivity analysis using probabilistic system assessment code. 1

    International Nuclear Information System (INIS)

    Honma, Toshimitsu; Sasahara, Takashi.

    1993-10-01

    This report presents the results obtained when applying the probabilistic system assessment code under development to the PSACOIN Level 0 intercomparison exercise organized by the Probabilistic System Assessment Code User Group in the Nuclear Energy Agency (NEA) of OECD. This exercise is one of a series designed to compare and verify probabilistic codes in the performance assessment of geological radioactive waste disposal facilities. The computations were performed using the Monte Carlo sampling code PREP and post-processor code USAMO. The submodels in the waste disposal system were described and coded with the specification of the exercise. Besides the results required for the exercise, further additional uncertainty and sensitivity analyses were performed and the details of these are also included. (author)
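
    The workflow described above (sample uncertain inputs, propagate them through the assessment model, post-process the outputs) can be illustrated with the generic Monte Carlo sketch below. It is not the PREP/USAMO code pair; the toy dose model and the input distributions are invented for illustration.

```python
import random
import statistics

def toy_dose_model(release_rate: float, dilution: float, dose_factor: float) -> float:
    # Hypothetical placeholder for the chain of waste-disposal submodels.
    return release_rate / dilution * dose_factor

random.seed(1)
doses = []
for _ in range(5000):
    release_rate = random.lognormvariate(mu=0.0, sigma=0.5)
    dilution = random.uniform(1e3, 1e5)
    dose_factor = random.lognormvariate(mu=-2.0, sigma=0.3)
    doses.append(toy_dose_model(release_rate, dilution, dose_factor))

print("mean dose      :", statistics.mean(doses))
print("95th percentile:", statistics.quantiles(doses, n=20)[-1])
```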

  2. The Art of Readable Code

    CERN Document Server

    Boswell, Dustin

    2011-01-01

    As programmers, we've all seen source code that's so ugly and buggy it makes our brain ache. Over the past five years, authors Dustin Boswell and Trevor Foucher have analyzed hundreds of examples of "bad code" (much of it their own) to determine why they're bad and how they could be improved. Their conclusion? You need to write code that minimizes the time it would take someone else to understand it-even if that someone else is you. This book focuses on basic principles and practical techniques you can apply every time you write code. Using easy-to-digest code examples from different languag

  3. Sub-Transport Layer Coding

    DEFF Research Database (Denmark)

    Hansen, Jonas; Krigslund, Jeppe; Roetter, Daniel Enrique Lucani

    2014-01-01

    Packet losses in wireless networks dramatically curb the performance of TCP. This paper introduces a simple coding shim that aids IP-layer traffic in lossy environments while being transparent to transport layer protocols. The proposed coding approach enables erasure correction while being...... oblivious to the congestion control algorithms of the utilised transport layer protocol. Although our coding shim is indifferent towards the transport layer protocol, we focus on the performance of TCP when run on top of our proposed coding mechanism due to its widespread use. The coding shim provides gains...
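
    A much-simplified sketch of the idea behind such an IP-layer coding shim: send a small generation of packets plus one XOR parity packet, so a single loss within the generation can be repaired without a transport-layer retransmission. The actual shim uses a stronger code; this is purely illustrative.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def parity_packet(packets):
    """XOR of all packets in a generation (packets must have equal length)."""
    return reduce(xor_bytes, packets)

def repair(received, parity):
    """Recover at most one missing packet, marked as None, from the parity."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) > 1:
        raise ValueError("a single parity packet can repair only one loss")
    if missing:
        present = [p for p in received if p is not None]
        received[missing[0]] = reduce(xor_bytes, present + [parity])
    return received

generation = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = parity_packet(generation)
print(repair([b"pkt0", None, b"pkt2", b"pkt3"], parity))  # recovers b"pkt1"
```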

  4. Environmental sampling

    International Nuclear Information System (INIS)

    Puckett, J.M.

    1998-01-01

    Environmental Sampling (ES) is a technology option that can have application in transparency in nuclear nonproliferation. The basic process is to take a sample from the environment, e.g., soil, water, vegetation, or dust and debris from a surface, and through very careful sample preparation and analysis, determine the types, elemental concentration, and isotopic composition of actinides in the sample. The sample is prepared and the analysis performed in a clean chemistry laboratory (CCL). This ES capability is part of the IAEA Strengthened Safeguards System. Such a Laboratory is planned to be built by JAERI at Tokai and will give Japan an intrinsic ES capability. This paper presents options for the use of ES as a transparency measure for nuclear nonproliferation

  5. Implementation of Energy Code Controls Requirements in New Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hart, Philip R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hatten, Mike [Solarc Energy Group, LLC, Seattle, WA (United States); Jones, Dennis [Group 14 Engineering, Inc., Denver, CO (United States); Cooper, Matthew [Group 14 Engineering, Inc., Denver, CO (United States)

    2017-03-24

    Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement and verification is beyond the expertise of most building code officials, yet the assumption in studies that measure the savings from energy codes is that they are implemented and working correctly. The objective of the current research is to evaluate the degree to which high impact controls requirements included in commercial energy codes are properly designed, commissioned and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy code-required control measures. The second involved field audits of a sample of commercial buildings to determine whether the code-required control measures are being designed, commissioned and correctly implemented and functioning in new buildings. The third step includes compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.

  6. 75 FR 19944 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2010-04-16

    ... documents from ICC's Chicago District Office: International Code Council, 4051 W Flossmoor Road, Country... Energy Conservation Code. International Existing Building Code. International Fire Code. International...

  7. Polynomial weights and code constructions

    DEFF Research Database (Denmark)

    Massey, J; Costello, D; Justesen, Jørn

    1973-01-01

    For any nonzero element c of a general finite field GF(q), it is shown that the polynomials (x - c)^i, i = 0, 1, 2, ..., have the "weight-retaining" property that any linear combination of these polynomials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum degree polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes, ... of long constraint length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.

  8. Introduction of SCIENCE code package

    International Nuclear Information System (INIS)

    Lu Haoliang; Li Jinggang; Zhu Ya'nan; Bai Ning

    2012-01-01

    The SCIENCE code package is a set of neutronics tools based on 2D assembly calculations and 3D core calculations. It is made up of APOLLO2F, SMART and SQUALE and used to perform the nuclear design and loading pattern analysis for the reactors in operation or under construction of China Guangdong Nuclear Power Group. The purpose of this paper is to briefly present the physical and numerical models used in each computation code of the SCIENCE code package, including the description of the general structure of the code package, the coupling relationship of the APOLLO2-F transport lattice code and the SMART core nodal code, and the SQUALE code used for processing the core maps. (authors)

  9. Elements of algebraic coding systems

    CERN Document Server

    Cardoso da Rocha, Jr, Valdemar

    2014-01-01

    Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams' identities based on the probability of undetected error, and two important tools for algebraic decoding, namely the finite field Fourier transform and the Euclidean algorithm f...

  10. Input/output manual of light water reactor fuel analysis code FEMAXI-7 and its related codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa [Japan Atomic Energy Agency, Nuclear Safety Research Center, Tokai, Ibaraki (Japan); Saitou, Hiroaki [ITOCHU Techno-Solutions Corporation, Tokyo (Japan)

    2013-10-15

    A light water reactor fuel analysis code FEMAXI-7 has been developed, as an extended version from the former version FEMAXI-6, for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which are fully disclosed in the code model description published in the form of another JAEA-Data/Code report. The present manual, which is the very counterpart of this description document, gives detailed explanations of files and operation method of FEMAXI-7 code and its related codes, methods of input/output, sample Input/Output, methods of source code modification, subroutine structure, and internal variables in a specific manner in order to facilitate users to perform fuel analysis by FEMAXI-7. (author)

  11. Input/output manual of light water reactor fuel analysis code FEMAXI-7 and its related codes

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa; Saitou, Hiroaki

    2013-10-01

    A light water reactor fuel analysis code FEMAXI-7 has been developed, as an extended version from the former version FEMAXI-6, for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which are fully disclosed in the code model description published in the form of another JAEA-Data/Code report. The present manual, which is the very counterpart of this description document, gives detailed explanations of files and operation method of FEMAXI-7 code and its related codes, methods of input/output, sample Input/Output, methods of source code modification, subroutine structure, and internal variables in a specific manner in order to facilitate users to perform fuel analysis by FEMAXI-7. (author)

  12. Spherical sampling

    CERN Document Server

    Freeden, Willi; Schreiner, Michael

    2018-01-01

    This book presents, in a consistent and unified overview, results and developments in the field of today's spherical sampling, particularly arising in mathematical geosciences. Although the book often refers to original contributions, the authors made them accessible to (graduate) students and scientists not only from mathematics but also from geosciences and geoengineering. Building a library of topics in spherical sampling theory, it shows how advances in this theory lead to new discoveries in mathematical, geodetic, geophysical as well as other scientific branches like neuro-medicine. A must-read for everybody working in the area of spherical sampling.

  13. Physical Layer Network Coding

    DEFF Research Database (Denmark)

    Fukui, Hironori; Yomo, Hironori; Popovski, Petar

    2013-01-01

    Physical layer network coding (PLNC) has the potential to improve throughput of multi-hop networks. However, most of the works are focused on the simple, three-node model with two-way relaying, not taking into account the fact that there can be other neighboring nodes that can cause/receive interference. The way to deal with this problem in distributed wireless networks is usage of MAC-layer mechanisms that make a spatial reservation of the shared wireless medium, similar to the well-known RTS/CTS in IEEE 802.11 wireless networks. In this paper, we investigate two-way relaying in presence of interfering nodes and usage of spatial reservation mechanisms. Specifically, we introduce a reserved area in order to protect the nodes involved in two-way relaying from the interference caused by neighboring nodes. We analytically derive the end-to-end rate achieved by PLNC considering the impact...
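
    The relaying idea itself can be sketched at packet level: the relay forwards a single network-coded packet and each end node recovers the other's packet using its own transmission as side information. The Python toy below shows only this XOR view; the physical-layer superposition and the reserved-area interference analysis of the paper are outside its scope.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

packet_a = b"hello-from-A"
packet_b = b"hello-from-B"                    # equal length assumed for the toy

relay_broadcast = xor(packet_a, packet_b)     # one relay transmission instead of two

recovered_at_a = xor(relay_broadcast, packet_a)   # A already knows its own packet
recovered_at_b = xor(relay_broadcast, packet_b)
assert recovered_at_a == packet_b and recovered_at_b == packet_a
```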

  14. Concatenated quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.; Laflamme, R.

    1996-07-01

    One main problem for the future of practical quantum computing is to stabilize the computation against unwanted interactions with the environment and imperfections in the applied operations. Existing proposals for quantum memories and quantum channels require gates with asymptotically zero error to store or transmit an input quantum state for arbitrarily long times or distances with fixed error. This report gives a method which has the property that to store or transmit a qubit with maximum error ε requires gates with errors at most cε and storage or channel elements with error at most ε, independent of how long we wish to store the state or how far we wish to transmit it. The method relies on using concatenated quantum codes and hierarchically implemented recovery operations. The overhead of the method is polynomial in the time of storage or the distance of the transmission. Rigorous and heuristic lower bounds for the constant c are given.

  15. Code des baux 2018

    CERN Document Server

    Vial-Pedroletti, Béatrice; Kendérian, Fabien; Chavance, Emmanuelle; Coutan-Lapalus, Christelle

    2017-01-01

    The Code des baux 2018 offers extremely practical, reliable content, up to date as of 1 August 2017. This 16th edition notably incorporates: the decree of 27 July 2017 on the evolution of certain rents in the context of a new letting or a lease renewal, adopted pursuant to Article 18 of Law No. 89-462 of 6 July 1989; the law of 27 January 2017 on equality and citizenship; the law of 9 December 2016 on transparency, the fight against corruption and the modernisation of economic life; and the law of 18 November 2016 on the modernisation of justice in the 21st century

  16. GOC: General Orbit Code

    International Nuclear Information System (INIS)

    Maddox, L.B.; McNeilly, G.S.

    1979-08-01

    GOC (General Orbit Code) is a versatile program which will perform a variety of calculations relevant to isochronous cyclotron design studies. In addition to the usual calculations of interest (e.g., equilibrium and accelerated orbits, focusing frequencies, field isochronization, etc.), GOC has a number of options to calculate injections with a charge change. GOC provides both printed and plotted output, and will follow groups of particles to allow determination of finite-beam properties. An interactive PDP-10 program called GIP, which prepares input data for GOC, is available. GIP is a very easy and convenient way to prepare complicated input data for GOC. Enclosed with this report are several microfiche containing source listings of GOC and other related routines and the printed output from a multiple-option GOC run

  17. New tools to analyze overlapping coding regions.

    Science.gov (United States)

    Bayegan, Amir H; Garcia-Martin, Juan Antonio; Clote, Peter

    2016-12-13

    Retroviruses transcribe messenger RNA for the overlapping Gag and Gag-Pol polyproteins, by using a programmed -1 ribosomal frameshift which requires a slippery sequence and an immediate downstream stem-loop secondary structure, together called frameshift stimulating signal (FSS). It follows that the molecular evolution of this genomic region of HIV-1 is highly constrained, since the retroviral genome must contain a slippery sequence (sequence constraint), code appropriate peptides in reading frames 0 and 1 (coding requirements), and form a thermodynamically stable stem-loop secondary structure (structure requirement). We describe a unique computational tool, RNAsampleCDS, designed to compute the number of RNA sequences that code two (or more) peptides p,q in overlapping reading frames, that are identical (or have BLOSUM/PAM similarity that exceeds a user-specified value) to the input peptides p,q. RNAsampleCDS then samples a user-specified number of messenger RNAs that code such peptides; alternatively, RNAsampleCDS can exactly compute the position-specific scoring matrix and codon usage bias for all such RNA sequences. Our software allows the user to stipulate overlapping coding requirements for all 6 possible reading frames simultaneously, even allowing IUPAC constraints on RNA sequences and fixing GC-content. We generalize the notion of codon preference index (CPI) to overlapping reading frames, and use RNAsampleCDS to generate control sequences required in the computation of CPI. Moreover, by applying RNAsampleCDS, we are able to quantify the extent to which the overlapping coding requirement in HIV-1 [resp. HCV] contribute to the formation of the stem-loop [resp. double stem-loop] secondary structure known as the frameshift stimulating signal. Using our software, we confirm that certain experimentally determined deleterious HCV mutations occur in positions for which our software RNAsampleCDS and RNAiFold both indicate a single possible nucleotide. We

  18. Blind Signal Classification via Sparse Coding

    Science.gov (United States)

    2016-04-10

    Gwon, Youngjune; Dastangoo, Siamak (MIT Lincoln Laboratory). ... achieve blind signal classification with no prior knowledge about signals (e.g., MCS, pulse shaping) in an arbitrary RF channel. Since modulated RF ... classification method. Our results indicate that we can separate different classes of digitally modulated signals from blind sampling with 70.3% recall and 24.6...

  19. ClinicalCodes: an online clinical codes repository to improve the validity and reproducibility of research using electronic medical records.

    Science.gov (United States)

    Springate, David A; Kontopantelis, Evangelos; Ashcroft, Darren M; Olier, Ivan; Parisi, Rosa; Chamapiwa, Edmore; Reeves, David

    2014-01-01

    Lists of clinical codes are the foundation for research undertaken using electronic medical records (EMRs). If clinical code lists are not available, reviewers are unable to determine the validity of research, full study replication is impossible, researchers are unable to make effective comparisons between studies, and the construction of new code lists is subject to much duplication of effort. Despite this, the publication of clinical codes is rarely if ever a requirement for obtaining grants, validating protocols, or publishing research. In a representative sample of 450 EMR primary research articles indexed on PubMed, we found that only 19 (5.1%) were accompanied by a full set of published clinical codes and 32 (8.6%) stated that code lists were available on request. To help address these problems, we have built an online repository where researchers using EMRs can upload and download lists of clinical codes. The repository will enable clinical researchers to better validate EMR studies, build on previous code lists and compare disease definitions across studies. It will also assist health informaticians in replicating database studies, tracking changes in disease definitions or clinical coding practice through time and sharing clinical code information across platforms and data sources as research objects.

  20. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
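
    A minimal sketch of the systematic LDGM encoding step: the codeword is the message followed by parity bits obtained from a sparse generator part, c = [u | uA mod 2]. The small matrix below is an arbitrary toy choice, not one of the code designs evaluated in the paper.

```python
import numpy as np

A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0],
              [0, 0, 1]], dtype=np.uint8)   # sparse 4x3 parity-forming part

def ldgm_encode(u: np.ndarray) -> np.ndarray:
    parity = u @ A % 2
    return np.concatenate([u, parity])       # systematic codeword [u | parity]

u = np.array([1, 0, 1, 1], dtype=np.uint8)
print(ldgm_encode(u))                        # length-7 codeword
```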

  1. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs have been considered recently as joint source-channel codes (JSCCs for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC or a fixed-length code (FLC. This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an -ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing a per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  2. Surface acoustic wave coding for orthogonal frequency coded devices

    Science.gov (United States)

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices each producing a different OFC signal having the same number of chips and including a chip offset time delay, an algorithm for assigning OFCs to each device, and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.

  3. Entanglement-assisted quantum MDS codes from negacyclic codes

    Science.gov (United States)

    Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang

    2018-03-01

    The entanglement-assisted formalism generalizes the standard stabilizer formalism, which can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q+1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.

  4. An algebraic approach to graph codes

    DEFF Research Database (Denmark)

    Pinero, Fernando

    This thesis consists of six chapters. The first chapter, contains a short introduction to coding theory in which we explain the coding theory concepts we use. In the second chapter, we present the required theory for evaluation codes and also give an example of some fundamental codes in coding...... theory as evaluation codes. Chapter three consists of the introduction to graph based codes, such as Tanner codes and graph codes. In Chapter four, we compute the dimension of some graph based codes with a result combining graph based codes and subfield subcodes. Moreover, some codes in chapter four...

  5. A computerized energy systems code and information library at Soreq

    Energy Technology Data Exchange (ETDEWEB)

    Silverman, I; Shapira, M; Caner, D; Sapier, D [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center

    1996-12-01

    In the framework of the contractual agreement between the Ministry of Energy and Infrastructure and the Division of Nuclear Engineering of the Israel Atomic Energy Commission, both Soreq-NRC and Ben-Gurion University have agreed to establish, in 1991, a code center. This code center contains a library of computer codes and relevant data, with particular emphasis on nuclear power plant research and development support. The code center maintains existing computer codes and adapts them to the ever-changing computing environment, keeps track of new code developments in the field of nuclear engineering, and acquires the most recent revisions of computer codes of interest. An attempt is made to collect relevant codes developed in Israel and to assure that proper documentation and application instructions are available. In addition to computer programs, the code center collects sample problems and international benchmarks to verify the codes and their applications to various areas of interest to nuclear power plant engineering and safety evaluation. Recently, the reactor simulation group at Soreq acquired, using funds provided by the Ministry of Energy and Infrastructure, a PC workstation operating under a Linux operating system to give users of the library an easy on-line way to access resources available at the library. These resources include the computer codes and their documentation, reports published by the reactor simulation group, and other information databases available at Soreq. Registered users set up a communication line, through a modem, between their computer and the new workstation at Soreq and use it to download codes and/or information or to solve their problems, using codes from the library, on the computer at Soreq (authors).

  6. A computerized energy systems code and information library at Soreq

    International Nuclear Information System (INIS)

    Silverman, I.; Shapira, M.; Caner, D.; Sapier, D.

    1996-01-01

    In the framework of the contractual agreement between the Ministry of Energy and Infrastructure and the Division of Nuclear Engineering of the Israel Atomic Energy Commission, both Soreq-NRC and Ben-Gurion University have agreed to establish, in 1991, a code center. This code center contains a library of computer codes and relevant data, with particular emphasis on nuclear power plant research and development support. The code center maintains existing computer codes and adapts them to the ever-changing computing environment, keeps track of new code developments in the field of nuclear engineering, and acquires the most recent revisions of computer codes of interest. An attempt is made to collect relevant codes developed in Israel and to assure that proper documentation and application instructions are available. In addition to computer programs, the code center collects sample problems and international benchmarks to verify the codes and their applications to various areas of interest to nuclear power plant engineering and safety evaluation. Recently, the reactor simulation group at Soreq acquired, using funds provided by the Ministry of Energy and Infrastructure, a PC workstation operating under a Linux operating system to give users of the library an easy on-line way to access resources available at the library. These resources include the computer codes and their documentation, reports published by the reactor simulation group, and other information databases available at Soreq. Registered users set up a communication line, through a modem, between their computer and the new workstation at Soreq and use it to download codes and/or information or to solve their problems, using codes from the library, on the computer at Soreq (authors)

  7. Fluidic sampling

    International Nuclear Information System (INIS)

    Houck, E.D.

    1992-01-01

    This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings of this paper are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2--39.9 feet at an average rate of 0.02--0.05 gpm (77--192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016--0.026 gpm (60--100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system that is normally used at ICPP. The volume of the sample taken with a fluidic sampler depends on the motive pressure to the fluidic sampler, the sample bottle size, and the fluidic sampler jet characteristics. The fluidic sampler should be supplied with fluid having a motive pressure of 140--150 percent of the peak vacuum-producing motive pressure for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate

  8. SALE: Safeguards Analytical Laboratory Evaluation computer code

    International Nuclear Information System (INIS)

    Carroll, D.J.; Bush, W.J.; Dolan, C.A.

    1976-09-01

    The Safeguards Analytical Laboratory Evaluation (SALE) program implements an industry-wide quality control and evaluation system aimed at identifying and reducing analytical chemical measurement errors. Samples of well-characterized materials are distributed to laboratory participants at periodic intervals for determination of uranium or plutonium concentration and isotopic distributions. The results of these determinations are statistically-evaluated, and each participant is informed of the accuracy and precision of his results in a timely manner. The SALE computer code which produces the report is designed to facilitate rapid transmission of this information in order that meaningful quality control will be provided. Various statistical techniques comprise the output of the SALE computer code. Assuming an unbalanced nested design, an analysis of variance is performed in subroutine NEST resulting in a test of significance for time and analyst effects. A trend test is performed in subroutine TREND. Microfilm plots are obtained from subroutine CUMPLT. Within-laboratory standard deviations are calculated in the main program or subroutine VAREST, and between-laboratory standard deviations are calculated in SBLV. Other statistical tests are also performed. Up to 1,500 pieces of data for each nuclear material sampled by 75 (or fewer) laboratories may be analyzed with this code. The input deck necessary to run the program is shown, and input parameters are discussed in detail. Printed output and microfilm plot output are described. Output from a typical SALE run is included as a sample problem
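
    An illustrative calculation (not the SALE code itself, which uses a full unbalanced nested analysis of variance) of within- and between-laboratory standard deviations from replicate concentration results; the numbers are made up for the example and the pooling is deliberately simplified.

```python
import statistics

results = {                        # laboratory id -> replicate determinations (wt% U)
    "lab_A": [87.12, 87.25, 87.19],
    "lab_B": [86.95, 87.01, 86.90],
    "lab_C": [87.40, 87.33, 87.38],
}

lab_means = {lab: statistics.mean(v) for lab, v in results.items()}
within_sd = statistics.mean(statistics.variance(v) for v in results.values()) ** 0.5
between_sd = statistics.stdev(lab_means.values())

print("lab means        :", lab_means)
print("within-lab s.d.  :", round(within_sd, 4))
print("between-lab s.d. :", round(between_sd, 4))
```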

  9. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Code with ideal in-phase cross correlation (CC) and practical code length to support high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are getting more attractive in the field of OCDMA because of its ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase induced intensity noise (PIIN). In this paper, we have proposed new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC based on Jordan block matrix with simple algebraic ways. Four sets of DEU code families based on the code weight W and number of users N for the combination (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives DEU code more flexibility in selection of code weight and number of users. These features made this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™) shown that, using point to multipoint transmission in passive optical network (PON), DEU has better performance and could support long span with high data rate.

  10. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
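
    The defining check used above is easy to state in code: a set X of trinucleotides is self-complementary if the reverse complement of every codon in X also belongs to X. The toy sets below are small examples, not the 20-codon code X identified in genes.

```python
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(codon: str) -> str:
    return "".join(COMP[base] for base in reversed(codon))

def is_self_complementary(code: set) -> bool:
    return all(reverse_complement(codon) in code for codon in code)

print(is_self_complementary({"AAC", "GTT", "GAG", "CTC"}))  # True: codons pair up
print(is_self_complementary({"AAC", "GAG"}))                # False: GTT and CTC missing
```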

  11. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard-decision decoding using product-type codes that cover a single OTN frame or a small number of such frames. In particular, we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  12. Numerical Tokamak Project code comparison

    International Nuclear Information System (INIS)

    Waltz, R.E.; Cohen, B.I.; Beer, M.A.

    1994-01-01

    The Numerical Tokamak Project undertook a code comparison using a set of TFTR tokamak parameters. Local radial annulus codes of both gyrokinetic and gyrofluid types were compared for both slab and toroidal case limits assuming ion temperature gradient mode turbulence in a pure plasma with adiabatic electrons. The heat diffusivities were found to be in good internal agreement within ± 50% of the group average over five codes

  13. Ethical codes in business practice

    OpenAIRE

    Kobrlová, Marie

    2013-01-01

    The diploma thesis discusses the issues of ethics and codes of ethics in business. The theoretical part defines basic concepts of ethics, presents its historical development and the methods and tools of business ethics. It also focuses on ethical codes and the area of law and ethics. The practical part consists of a quantitative survey, which provides views of selected business entities of business ethics and the use of codes of ethics in practice.

  14. QR code for medical information uses.

    Science.gov (United States)

    Fontelo, Paul; Liu, Fang; Ducut, Erick G

    2008-11-06

    We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.

  15. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
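
    The second lifting stage described above (expansion via a circulant matrix) is the standard quasi-cyclic construction used for protograph LDPC codes. The sketch below is a hedged illustration of that step only, with an invented base matrix and shift values; it is not the patented two-stage design or its actual protographs.

```python
# Hedged sketch of circulant ("quasi-cyclic") lifting: every entry of a base
# matrix holds a circulant shift (-1 for an all-zero block), and each shift s
# is expanded into a Z x Z identity matrix cyclically shifted by s columns.
import numpy as np

def lift(base, Z):
    """Expand a base matrix of shifts (-1 = zero block) by factor Z."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            s = base[i, j]
            if s >= 0:
                # circulant permutation: identity rolled by s columns
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, s, axis=1)
    return H

if __name__ == "__main__":
    base = np.array([[0, 1, -1, 2],
                     [2, -1, 0, 1]])   # invented 2x4 protograph with shifts
    H = lift(base, Z=4)
    print(H.shape)        # (8, 16)
    print(H.sum(axis=0))  # column weights follow the nonzero entries per base column
```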

  16. Programming Entity Framework Code First

    CERN Document Server

    Lerman, Julia

    2011-01-01

    Take advantage of the Code First data modeling approach in ADO.NET Entity Framework, and learn how to build and configure a model based on existing classes in your business domain. With this concise book, you'll work hands-on with examples to learn how Code First can create an in-memory model and database by default, and how you can exert more control over the model through further configuration. Code First provides an alternative to the database first and model first approaches to the Entity Data Model. Learn the benefits of defining your model with code, whether you're working with an exis

  17. User manual of UNF code

    International Nuclear Information System (INIS)

    Zhang Jingshang

    2001-01-01

    The UNF code (2001 version), written in FORTRAN-90, is developed for calculating fast neutron reaction data of structural materials with incident energies from about 1 keV up to 20 MeV. The code consists of the spherical optical model and the unified Hauser-Feshbach and exciton model. The manual of the UNF code is available for users. The format of the input parameter files and the output files, as well as the functions of the flags used in the UNF code, are introduced in detail, and examples of the format of the input parameter files are given

  18. Audit of accuracy of clinical coding in oral surgery.

    Science.gov (United States)

    Naran, S; Hudovsky, A; Antscherl, J; Howells, S; Nouraei, S A R

    2014-10-01

    We aimed to study the accuracy of clinical coding within oral surgery and to identify ways in which it can be improved. We undertook a multidisciplinary audit of a sample of 646 day case patients who had had oral surgery procedures between 2011 and 2012. We compared the codes given with their case notes and amended any discrepancies. The accuracy of coding was assessed for primary and secondary diagnoses and procedures, and for healthcare resource groups (HRGs). The financial impact of coding Subjectivity, Variability and Error (SVE) was assessed by reference to national tariffs. The audit resulted in 122 (19%) changes to primary diagnoses. The codes for primary procedures changed in 224 (35%) cases; 310 (48%) morbidities and complications had been missed, and 266 (41%) secondary procedures had been missed or were incorrect. This led to at least one change of coding in 496 (77%) patients, and to HRG changes in 348 (54%) patients. The financial impact of this was £114 in lost revenue per patient. There is a high incidence of coding errors in oral surgery because of the large number of day cases, a lack of awareness of coding issues among clinicians, and because clinical coders are not always familiar with the large number of highly specialised abbreviations used. Accuracy of coding can be improved through the use of a well-designed proforma, and standards can be maintained by an ongoing data quality assurance programme. Copyright © 2014. Published by Elsevier Ltd.

  19. Development of the integrated system reliability analysis code MODULE

    International Nuclear Information System (INIS)

    Han, S.H.; Yoo, K.J.; Kim, T.W.

    1987-01-01

    The major components of a system reliability analysis are the determination of cut sets, importance measures, and uncertainty analysis. Various computer codes have been used for these purposes: for example, SETS and FTAP to determine cut sets; Importance for importance calculations; and Sample, CONINT, and MOCUP for uncertainty analysis. Problems arise when these codes are run one after another with their inputs and outputs not linked, which can lead to errors when preparing the input for each code. The code MODULE was developed to carry out the above calculations in a single run, without having to transfer inputs and outputs between codes. MODULE can also prepare input for SETS for the case of a large fault tree that cannot be handled by MODULE itself. The flow diagram of the MODULE code is shown. To verify the MODULE code, two examples were selected and the results and computation times were compared with those of SETS, FTAP, CONINT, and MOCUP on both a Cyber 170-875 and an IBM PC/AT. The two examples are fault trees of the auxiliary feedwater system (AFWS) of Korea Nuclear Units (KNU)-1 and -2, which have 54 gates and 115 events, and 39 gates and 92 events, respectively. The MODULE code has the advantage that it can calculate the cut sets, importances, and uncertainties in a single run with little increase in computing time over other codes, and that it can be used on personal computers
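
    The abstract lists three coupled calculations (cut sets, importance measures, uncertainty analysis). The sketch below is only a toy illustration of those three steps on an invented two-gate fault tree; it is not the MODULE code, and it uses brute-force enumeration rather than the algorithms implemented in MODULE, SETS or FTAP.

```python
# Toy sketch of the three calculations for an invented fault tree
# TOP = A OR (B AND C): minimal cut sets by brute force, Fussell-Vesely
# importance, and a simple Monte Carlo uncertainty estimate.
from itertools import combinations
import math
import random

EVENTS = ["A", "B", "C"]
P = {"A": 0.01, "B": 0.02, "C": 0.05}   # invented failure probabilities

def top(failed):
    """Top event of the toy fault tree: A OR (B AND C)."""
    return "A" in failed or ("B" in failed and "C" in failed)

# Minimal cut sets: smallest sets of basic events whose failure causes TOP.
cut_sets = []
for r in range(1, len(EVENTS) + 1):
    for combo in combinations(EVENTS, r):
        if top(set(combo)) and not any(set(c) <= set(combo) for c in cut_sets):
            cut_sets.append(combo)
print("minimal cut sets:", cut_sets)    # [('A',), ('B', 'C')]

# Rare-event approximation of the top-event probability and the
# Fussell-Vesely importance of each basic event.
p_cut = [math.prod(P[e] for e in c) for c in cut_sets]
p_top = sum(p_cut)
fv = {e: sum(pc for c, pc in zip(cut_sets, p_cut) if e in c) / p_top
      for e in EVENTS}
print("top-event probability ~", p_top)
print("Fussell-Vesely importance:", fv)

# Uncertainty analysis: sample the basic-event probabilities (here +/-50%
# uniform) and look at the spread of the recomputed top-event probability.
random.seed(0)
samples = []
for _ in range(10000):
    Ps = {e: P[e] * random.uniform(0.5, 1.5) for e in EVENTS}
    samples.append(sum(math.prod(Ps[e] for e in c) for c in cut_sets))
samples.sort()
print("5th-95th percentile:", samples[500], samples[9500])
```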

  20. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-12-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.
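
    The key idea in the abstract is to recast CSC as a consensus optimization problem. The sketch below is not the authors' CSC solver; it only illustrates the generic global-variable consensus ADMM pattern that such a formulation builds on, using simple quadratic local objectives so that every update has a closed form.

```python
# Hedged illustration of global-variable consensus ADMM: each block i keeps a
# local copy x_i of the shared variable, solves its own subproblem, and all
# copies are coordinated through a global average z and scaled duals u_i.
# Local objectives are f_i(x) = 0.5 * ||A_i x - b_i||^2 (invented data).
import numpy as np

def consensus_admm(As, bs, rho=1.0, iters=300):
    n = As[0].shape[1]
    z = np.zeros(n)
    xs = [np.zeros(n) for _ in As]
    us = [np.zeros(n) for _ in As]
    # Pre-factorize each local system (A_i^T A_i + rho I).
    solves = [np.linalg.inv(A.T @ A + rho * np.eye(n)) for A in As]
    for _ in range(iters):
        # Local updates: argmin_x f_i(x) + (rho/2)||x - z + u_i||^2
        xs = [S @ (A.T @ b + rho * (z - u))
              for S, A, b, u in zip(solves, As, bs, us)]
        # Global update: average of local copies plus duals (the consensus variable).
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)
        # Dual updates penalize disagreement with the consensus variable.
        us = [u + x - z for u, x in zip(us, xs)]
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_true = rng.normal(size=5)
    As = [rng.normal(size=(20, 5)) for _ in range(4)]   # four data blocks
    bs = [A @ x_true for A in As]
    x_hat = consensus_admm(As, bs)
    print("consensus error:", np.linalg.norm(x_hat - x_true))  # should be small
```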

  1. Mobile code security

    Science.gov (United States)

    Ramalingam, Srikumar

    2001-11-01

    A highly secure mobile agent system is very important for a mobile computing environment. The security issues in mobile agent system comprise protecting mobile hosts from malicious agents, protecting agents from other malicious agents, protecting hosts from other malicious hosts and protecting agents from malicious hosts. Using traditional security mechanisms the first three security problems can be solved. Apart from using trusted hardware, very few approaches exist to protect mobile code from malicious hosts. Some of the approaches to solve this problem are the use of trusted computing, computing with encrypted function, steganography, cryptographic traces, Seal Calculas, etc. This paper focuses on the simulation of some of these existing techniques in the designed mobile language. Some new approaches to solve malicious network problem and agent tampering problem are developed using public key encryption system and steganographic concepts. The approaches are based on encrypting and hiding the partial solutions of the mobile agents. The partial results are stored and the address of the storage is destroyed as the agent moves from one host to another host. This allows only the originator to make use of the partial results. Through these approaches some of the existing problems are solved.

  2. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicking and 4D light field view synthesis.

  3. Coding, cryptography and combinatorics

    CERN Document Server

    Niederreiter, Harald; Xing, Chaoping

    2004-01-01

    It has long been recognized that there are fascinating connections between coding theory, cryptology, and combinatorics. Therefore it seemed desirable to us to organize a conference that brings together experts from these three areas for a fruitful exchange of ideas. We decided on a venue in the Huang Shan (Yellow Mountain) region, one of the most scenic areas of China, so as to provide the additional inducement of an attractive location. The conference was planned for June 2003 with the official title Workshop on Coding, Cryptography and Combinatorics (CCC 2003). Those who are familiar with events in East Asia in the first half of 2003 can guess what happened in the end, namely the conference had to be cancelled in the interest of the health of the participants. The SARS epidemic posed too serious a threat. At the time of the cancellation, the organization of the conference was at an advanced stage: all invited speakers had been selected and all abstracts of contributed talks had been screened by the p...

  4. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup; Swanson, Robin; Heide, Felix; Wetzstein, Gordon; Heidrich, Wolfgang

    2017-01-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  5. Computer code abstract: NESTLE

    International Nuclear Information System (INIS)

    Turinsky, P.J.; Al-Chalabi, R.M.K.; Engrand, P.; Sarsour, H.N.; Faure, F.X.; Guo, W.

    1995-01-01

    NESTLE is a few-group neutron diffusion equation solver utilizing the nodal expansion method (NEM) for eigenvalue, adjoint, and fixed-source steady-state and transient problems. The NESTLE code solves the eigenvalue (criticality), eigenvalue adjoint, external fixed-source steady-state, and external fixed-source or eigenvalue initiated transient problems. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two or four energy groups can be utilized, with all energy groups being treated as thermal groups (i.e., with upscatter) if desired. Core geometries modeled include Cartesian and hexagonal. Three-, two-, and one-dimensional models can be utilized with various symmetries. The thermal conditions predicted by the thermal-hydraulic model of the core are used to correct cross sections for temperature and density effects. Cross sections are parameterized by color, control rod state (i.e., in or out), and burnup, allowing fuel depletion to be modeled. Either a macroscopic or microscopic model may be employed.

  6. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant

  7. Interrelations of codes in human semiotic systems.

    OpenAIRE

    Somov, Georgij

    2016-01-01

    Codes can be viewed as mechanisms that enable relations of signs and their components, i.e., through which semiosis is actualized. The combinations of these relations produce new relations as new codes are built over other codes. Structures appear in the mechanisms of codes. Hence, codes can be described as transformations of structures from some material systems into others. Structures belong to different carriers, but exist in codes in their "pure" form. Building of codes over other codes fosters t...

  8. Further Generalisations of Twisted Gabidulin Codes

    DEFF Research Database (Denmark)

    Puchinger, Sven; Rosenkilde, Johan Sebastian Heesemann; Sheekey, John

    2017-01-01

    We present a new family of maximum rank distance (MRD) codes. The new class contains codes that are neither equivalent to a generalised Gabidulin nor to a twisted Gabidulin code, the only two known general constructions of linear MRD codes.

  9. Sampling methods

    International Nuclear Information System (INIS)

    Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.

    2002-01-01

    Methods for the collection of soil samples to determine levels of 137Cs and other fallout radionuclides, such as excess 210Pb and 7Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of 137Cs in erosion studies has been widely developed, while the application of fallout 210Pb and 7Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of 137Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using 137Cs. However, fallout 210Pb and 7Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth-incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth distribution of fallout nuclides on slopes and depositional sites as well as their total inventories.

  10. MARS Code in Linux Environment

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been incorporated into Linux system. The MARS code was originally developed based on the RELAP5/MOD3.2 and COBRA-TF. The 1-D module which evolved from RELAP5 alone could be applied for the whole NSSS system analysis. The 3-D module developed based on the COBRA-TF, however, could be applied for the analysis of the reactor core region where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of the plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during the hypothetical coolant leakage accident could, thereby, be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating the three dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate enough for the PC cluster system where multiple CPUs are available. When parallelism is to be eventually incorporated into the MARS code, MS Windows environment is not considered as an optimum platform. Linux environment, on the other hand, is generally being adopted as a preferred platform for the multiple codes executions as well as for the parallel application. In this study, MARS code has been modified for the adaptation of Linux platform. For the initial code modification, the Windows system specific features have been removed from the code. Since the coupling code module CONTAIN is originally in a form of dynamic load library (DLL) in the Windows system, a similar adaptation method

  11. Allele coding in genomic evaluation

    Directory of Open Access Journals (Sweden)

    Christensen Ole F

    2011-06-01

    Full Text Available Abstract Background Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first and then, genomic breeding values are obtained by summing marker effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous genotype of the first allele, one for the heterozygote, and two for the homozygous genotype for the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of regression coefficients is zero within each marker. We call this centered allele coding. This study considered effects of different allele coding methods on inference. Both marker-based and equivalent models were considered, and restricted maximum likelihood and Bayesian methods were used in inference. Results Theoretical derivations showed that parameter estimates and estimated marker effects in marker-based models are the same irrespective of the allele coding, provided that the model has a fixed general mean. For the equivalent models, the same results hold, even though different allele coding methods lead to different genomic relationship matrices. Calculated genomic breeding values are independent of allele coding when the estimate of the general mean is included into the values. Reliabilities of estimated genomic breeding values calculated using elements of the inverse of the coefficient matrix depend on the allele coding because different allele coding methods imply different models. Finally, allele coding affects the mixing of Markov chain Monte Carlo algorithms, with the centered coding being
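
    As a concrete illustration of the two codings discussed above, the sketch below builds a 0/1/2 genotype matrix and its centered counterpart, forms a genomic relationship matrix from the centered coding (the VanRaden-style scaling is an assumption of this sketch, not stated in the abstract), and checks numerically that a ridge-type marker model with a fixed general mean gives identical marker-effect estimates under both codings.

```python
# Hedged numerical illustration of 0/1/2 allele coding vs. centered coding,
# with invented genotypes; the GRM scaling G = ZZ'/(2*sum p(1-p)) is an
# assumption of this sketch.
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_markers = 6, 10

# 0/1/2 coding: count of the second allele at each marker.
M = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)

# Centered coding: subtract twice the allele frequency (the column mean).
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p

# Genomic relationship matrix from the centered coding.
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

# Marker-effect model sketch: y = 1*mu + M*a + e.  With a fixed general mean
# in the model, changing the coding only shifts the intercept, so the
# estimated marker effects (and contrasts of breeding values) agree.
a = rng.normal(scale=0.1, size=n_markers)
y = 3.0 + M @ a + rng.normal(scale=0.05, size=n_animals)

def ridge_fit(X, y, lam=0.1):
    """Ridge regression with an unpenalized intercept (fixed general mean)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    P = lam * np.eye(X1.shape[1])
    P[0, 0] = 0.0                      # do not penalize the general mean
    return np.linalg.solve(X1.T @ X1 + P, X1.T @ y)

b_raw = ridge_fit(M, y)
b_cen = ridge_fit(Z, y)
print(np.allclose(b_raw[1:], b_cen[1:]))   # marker effects identical
print(np.round(G[:3, :3], 2))              # corner of the relationship matrix
```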

  12. MARS Code in Linux Environment

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong

    2005-01-01

    The two-phase system analysis code MARS has been incorporated into Linux system. The MARS code was originally developed based on the RELAP5/MOD3.2 and COBRA-TF. The 1-D module which evolved from RELAP5 alone could be applied for the whole NSSS system analysis. The 3-D module developed based on the COBRA-TF, however, could be applied for the analysis of the reactor core region where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of the plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during the hypothetical coolant leakage accident could, thereby, be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating the three dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate enough for the PC cluster system where multiple CPUs are available. When parallelism is to be eventually incorporated into the MARS code, MS Windows environment is not considered as an optimum platform. Linux environment, on the other hand, is generally being adopted as a preferred platform for the multiple codes executions as well as for the parallel application. In this study, MARS code has been modified for the adaptation of Linux platform. For the initial code modification, the Windows system specific features have been removed from the code. Since the coupling code module CONTAIN is originally in a form of dynamic load library (DLL) in the Windows system, a similar adaptation method

  13. Improved lossless intra coding for H.264/MPEG-4 AVC.

    Science.gov (United States)

    Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J

    2006-09-01

    A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
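
    The contrast the abstract draws between block-based spatial prediction and sample-by-sample DPCM can be seen on a toy 4x4 block. The sketch below uses horizontal prediction only and invented sample values; it is not the actual H.264/AVC prediction mode set or entropy coder.

```python
# Toy comparison of block-based horizontal intra prediction vs. sample-wise
# DPCM.  In lossless coding the residuals are entropy-coded exactly, so
# smaller-magnitude residuals generally mean fewer bits.
import numpy as np

block = np.array([[52, 55, 61, 66],
                  [54, 58, 63, 70],
                  [57, 60, 67, 73],
                  [59, 64, 69, 78]], dtype=int)
left_col = np.array([50, 53, 56, 58], dtype=int)   # reconstructed neighbours

# Block-based horizontal prediction: every sample in a row is predicted from
# the single reference sample to the left of the block.
pred_block = np.repeat(left_col[:, None], 4, axis=1)
res_block = block - pred_block

# Sample-wise DPCM: each sample is predicted from its immediate left
# neighbour (the block-left reference for the first column).
padded = np.column_stack([left_col, block])
res_dpcm = np.diff(padded, axis=1)

print("block-based, sum of |residuals|:", int(np.abs(res_block).sum()))
print("sample-wise DPCM, sum of |residuals|:", int(np.abs(res_dpcm).sum()))
# The DPCM residuals are much smaller, which is the effect behind the
# reported ~12% lossless bit-rate saving.
```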

  14. Error-correction coding for digital communications

    Science.gov (United States)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  15. A bar-code reader for an alpha-beta automatic counting system - FAG

    International Nuclear Information System (INIS)

    Levinson, S.; Shemesh, Y.; Ankry, N.; Assido, H.; German, U.; Peled, O.

    1996-01-01

    A bar-code laser system for sample number reading was integrated into the FAG Alpha-Beta automatic counting system. The sample identification by means of an attached bar-code label enables unmistakable and reliable attribution of results to the counted sample. Installation of the bar-code reader system required several modifications: Mechanical changes in the automatic sample changer, design and production of new sample holders, modification of the sample planchettes, changes in the electronic system, update of the operating software of the system (authors)

  16. A bar-code reader for an alpha-beta automatic counting system - FAG

    Energy Technology Data Exchange (ETDEWEB)

    Levinson, S; Shemesh, Y; Ankry, N; Assido, H; German, U; Peled, O [Israel Atomic Energy Commission, Beersheba (Israel). Nuclear Research Center-Negev

    1996-12-01

    A bar-code laser system for sample number reading was integrated into the FAG Alpha-Beta automatic counting system. The sample identification by means of an attached bar-code label enables unmistakable and reliable attribution of results to the counted sample. Installation of the bar-code reader system required several modifications: Mechanical changes in the automatic sample changer, design and production of new sample holders, modification of the sample planchettes, changes in the electronic system, update of the operating software of the system (authors).

  17. The GNASH preequilibrium-statistical nuclear model code

    International Nuclear Information System (INIS)

    Arthur, E. D.

    1988-01-01

    The following report is based on materials presented in a series of lectures at the International Center for Theoretical Physics, Trieste, which were designed to describe the GNASH preequilibrium statistical model code and its use. An overview of the code is provided, with emphasis upon the code's calculational capabilities and the theoretical models that have been implemented in it. Two sample problems are discussed: the first deals with neutron reactions on 58Ni; the second illustrates the fission model capabilities implemented in the code and involves n + 235U reactions. Finally, a description is provided of current theoretical model and code development underway. Examples of calculated results using these new capabilities are also given. 19 refs., 17 figs., 3 tabs

  18. A guide to the use of SUPERB code

    International Nuclear Information System (INIS)

    Jagannathan, V.; Jain, R.P.

    1983-01-01

    The SUPERB code has been developed for the neutronics design of a BWR fuel assembly. The code SUPERB provides the few-group homogenised lattice parameters of the fuel box as a function of burnup for different voids, control states, and fuel and moderator temperatures. These nuclear data form the basic input to subsequent steady-state or transient core analyses. This report describes the modelling of a BWR fuel box with almost all the complexities, such as poisoned pins and the control blade. This illustration, together with the sample input included here, should provide a first-hand acquaintance with the code SUPERB and its use. It is hoped that this report facilitates the use of the code SUPERB by a variety of users, whose constructive feedback is invaluable not only in improving the code's versatility but also in removing any hitherto hidden infelicities. (author)

  19. COMPBRN III: a computer code for modeling compartment fires

    International Nuclear Information System (INIS)

    Ho, V.; Siu, N.; Apostolakis, G.; Flanagan, G.F.

    1986-07-01

    The computer code COMPBRN III deterministically models the behavior of compartment fires. This code is an improvement on the original COMPBRN codes. It employs a different air entrainment model and numerical scheme to estimate properties of the ceiling hot gas layer. Moreover, COMPBRN III incorporates a number of improvements in shape factor calculations and error checking, which distinguish it from the COMPBRN II code. This report presents the ceiling hot gas layer model employed by COMPBRN III as well as several other modifications. Information necessary to run COMPBRN III, including descriptions of the required input and resulting output, is also presented. Simulations of experiments and a sample problem are included to demonstrate the usage of the code. 37 figs., 46 refs

  20. Distributed space-time coding

    CERN Document Server

    Jing, Yindi

    2014-01-01

    Distributed Space-Time Coding (DSTC) is a cooperative relaying scheme that enables high reliability in wireless networks. This brief presents the basic concept of DSTC, its achievable performance, generalizations, code design, and differential use. Recent results on training design and channel estimation for DSTC and the performance of training-based DSTC are also discussed.

  1. NETWORK CODING BY BEAM FORMING

    DEFF Research Database (Denmark)

    2013-01-01

    Network coding by beam forming in networks, for example, in single frequency networks, can provide aid in increasing spectral efficiency. When network coding by beam forming and user cooperation are combined, spectral efficiency gains may be achieved. According to certain embodiments, a method...... cooperating with the plurality of user equipment to decode the received data....

  2. Building codes : obstacle or opportunity?

    Science.gov (United States)

    Alberto Goetzl; David B. McKeever

    1999-01-01

    Building codes are critically important in the use of wood products for construction. The codes contain regulations that are prescriptive or performance related for various kinds of buildings and construction types. A prescriptive standard might dictate that a particular type of material be used in a given application. A performance standard requires that a particular...

  3. Accelerator Physics Code Web Repository

    CERN Document Server

    Zimmermann, Frank; Bellodi, G; Benedetto, E; Dorda, U; Giovannozzi, Massimo; Papaphilippou, Y; Pieloni, T; Ruggiero, F; Rumolo, G; Schmidt, F; Todesco, E; Zotter, Bruno W; Payet, J; Bartolini, R; Farvacque, L; Sen, T; Chin, Y H; Ohmi, K; Oide, K; Furman, M; Qiang, J; Sabbi, G L; Seidl, P A; Vay, J L; Friedman, A; Grote, D P; Cousineau, S M; Danilov, V; Holmes, J A; Shishlo, A; Kim, E S; Cai, Y; Pivi, M; Kaltchev, D I; Abell, D T; Katsouleas, Thomas C; Boine-Frankenheim, O; Franchetti, G; Hofmann, I; Machida, S; Wei, J

    2006-01-01

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  4. LFSC - Linac Feedback Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC (Linac Feedback Simulation Code) is a numerical tool for simulating beam-based feedback in high performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.

  5. Interleaver Design for Turbo Coding

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Zyablov, Viktor

    1997-01-01

    By a combination of construction and random search based on a careful analysis of the low weight words and the distance properties of the component codes, it is possible to find interleavers for turbo coding with a high minimum distance. We have designed a block interleaver with permutations...

  6. Code breaking in the pacific

    CERN Document Server

    Donovan, Peter

    2014-01-01

    Covers the historical context and the evolution of the technically complex Allied Signals Intelligence (Sigint) activity against Japan from 1920 to 1945 Describes, explains and analyzes the code breaking techniques developed during the war in the Pacific Exposes the blunders (in code construction and use) made by the Japanese Navy that led to significant US Naval victories

  7. Development status of TUF code

    International Nuclear Information System (INIS)

    Liu, W.S.; Tahir, A.; Zaltsgendler

    1996-01-01

    An overview of the important developments of the TUF code in 1995 is presented, covering the following areas: control of round-off error propagation, gas resolution and release models, and condensation-induced water hammer. This development was mainly driven by station requests for operational support and code improvement. (author)

  8. Accident consequence assessment code development

    International Nuclear Information System (INIS)

    Homma, T.; Togawa, O.

    1991-01-01

    This paper describes the new computer code system, OSCAAR developed for off-site consequence assessment of a potential nuclear accident. OSCAAR consists of several modules which have modeling capabilities in atmospheric transport, foodchain transport, dosimetry, emergency response and radiological health effects. The major modules of the consequence assessment code are described, highlighting the validation and verification of the models. (author)

  9. The nuclear codes and guidelines

    International Nuclear Information System (INIS)

    Sonter, M.

    1984-01-01

    This paper considers problems faced by the mining industry when implementing the nuclear codes of practice. Errors of interpretation are likely. A major criticism is that the guidelines to the codes must be seen as recommendations only. They are not regulations. Specific clauses in the guidelines are criticised

  10. Survey of coded aperture imaging

    International Nuclear Information System (INIS)

    Barrett, H.H.

    1975-01-01

    The basic principle and limitations of coded aperture imaging for x-ray and gamma cameras are discussed. Current trends include (1) use of time-varying apertures, (2) use of "dilute" apertures with transmission much less than 50%, and (3) attempts to derive transverse tomographic sections, unblurred by other planes, from coded images.

  11. ACCELERATION PHYSICS CODE WEB REPOSITORY.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.

    2006-06-26

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  12. Grassmann codes and Schubert unions

    DEFF Research Database (Denmark)

    Hansen, Johan Peder; Johnsen, Trygve; Ranestad, Kristian

    2009-01-01

    We study subsets of Grassmann varieties over a field , such that these subsets are unions of Schubert cycles, with respect to a fixed flag. We study such sets in detail, and give applications to coding theory, in particular for Grassmann codes. For much is known about such Schubert unions with a ...

  13. On Network Coded Filesystem Shim

    DEFF Research Database (Denmark)

    Sørensen, Chres Wiant; Roetter, Daniel Enrique Lucani; Médard, Muriel

    2017-01-01

    Although network coding has shown the potential to revolutionize networking and storage, its deployment has faced a number of challenges. Usual proposals involve two approaches. First, deploying a new protocol (e.g., Multipath Coded TCP), or retrofitting another one (e.g., TCP/NC) to deliver bene...

  14. Intra prediction using face continuity in 360-degree video coding

    Science.gov (United States)

    Hanhart, Philippe; He, Yuwen; Ye, Yan

    2017-09-01

    This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.
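
    The geometric idea behind the proposed derivation can be sketched independently of JEM-3.0: a reference position falling outside the current cube face is unprojected to a direction on the sphere and re-projected onto the face it actually lies on. The face layout and (u, v) conventions below are assumptions of this sketch, not the ones used in the paper or in JEM-3.0.

```python
# Geometric sketch of cubemap reference-sample derivation: extrapolate the
# reference position past the face edge, treat it as a direction on the
# sphere, and re-project it onto the neighbouring face.
import numpy as np

# Each cubemap face: (outward normal n, in-plane u axis, in-plane v axis);
# a point at face coordinates (u, v) in [-1, 1]^2 is the direction n + u*au + v*av.
FACES = {
    "+X": (np.array([1, 0, 0]), np.array([0, 0, -1]), np.array([0, -1, 0])),
    "-X": (np.array([-1, 0, 0]), np.array([0, 0, 1]), np.array([0, -1, 0])),
    "+Y": (np.array([0, 1, 0]), np.array([1, 0, 0]), np.array([0, 0, 1])),
    "-Y": (np.array([0, -1, 0]), np.array([1, 0, 0]), np.array([0, 0, -1])),
    "+Z": (np.array([0, 0, 1]), np.array([1, 0, 0]), np.array([0, -1, 0])),
    "-Z": (np.array([0, 0, -1]), np.array([-1, 0, 0]), np.array([0, -1, 0])),
}

def face_to_dir(face, u, v):
    n, au, av = FACES[face]
    return n + u * au + v * av          # direction on the sphere (unnormalized)

def dir_to_face(d):
    # The face is the one whose outward normal has the largest positive projection.
    face = max(FACES, key=lambda f: np.dot(d, FACES[f][0]))
    n, au, av = FACES[face]
    w = np.dot(d, n)                    # distance along the face normal
    return face, np.dot(d, au) / w, np.dot(d, av) / w

if __name__ == "__main__":
    # A reference position just past one edge of the +X face (|v| > 1 is
    # off-face in this convention): re-projecting the direction lands it on
    # the geometric neighbour face with valid coordinates.
    face, u, v = dir_to_face(face_to_dir("+X", 0.3, -1.1))
    print(face, round(u, 3), round(v, 3))
```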

  15. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer

  16. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, offering the shifting of processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  17. Non-Protein Coding RNAs

    CERN Document Server

    Walter, Nils G; Batey, Robert T

    2009-01-01

    This book assembles chapters from experts in the Biophysics of RNA to provide a broadly accessible snapshot of the current status of this rapidly expanding field. The 2006 Nobel Prize in Physiology or Medicine was awarded to the discoverers of RNA interference, highlighting just one example of a large number of non-protein coding RNAs. Because non-protein coding RNAs outnumber protein coding genes in mammals and other higher eukaryotes, it is now thought that the complexity of organisms is correlated with the fraction of their genome that encodes non-protein coding RNAs. Essential biological processes as diverse as cell differentiation, suppression of infecting viruses and parasitic transposons, higher-level organization of eukaryotic chromosomes, and gene expression itself are found to largely be directed by non-protein coding RNAs. The biophysical study of these RNAs employs X-ray crystallography, NMR, ensemble and single molecule fluorescence spectroscopy, optical tweezers, cryo-electron microscopy, and ot...

  18. Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability-based code calibration. First, basic principles of structural reliability theory are introduced and it is shown how the results of FORM-based reliability analysis may be related to partial safety factors and characteristic values. Thereafter the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore, suggested values for acceptable annual failure probabilities are given for ultimate and serviceability limit states. Finally, the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability-based code calibration of LRFD-based design codes.
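
    The link between FORM results, characteristic values and partial safety factors mentioned above can be illustrated with a worked example for a normally distributed resistance variable. The target reliability index beta = 3.8, the FORM sensitivity factor alpha_R = 0.8 and the 5% characteristic fractile are assumed illustrative values, not the JCSS/CodeCal recommendations themselves.

```python
# Worked numeric sketch: for R ~ N(mu, (V*mu)^2), the characteristic value is
# the 5% fractile and the FORM design value follows from the target
# reliability index beta and the sensitivity factor alpha_R; their ratio is
# the partial safety factor gamma_M.  All numbers are illustrative.
def partial_safety_factor(cov, beta=3.8, alpha_r=0.8, k_fractile=1.645):
    """gamma_M = characteristic value / design value, normalized by the mean."""
    r_k = 1.0 - k_fractile * cov          # 5% fractile of R / mu
    r_d = 1.0 - alpha_r * beta * cov      # FORM design value of R / mu
    return r_k / r_d

for cov in (0.05, 0.10, 0.15):
    print(f"V = {cov:.2f}  ->  gamma_M = {partial_safety_factor(cov):.2f}")
# Larger scatter (higher V) pushes the design value further below the
# characteristic value, i.e. a larger partial safety factor is required.
```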

  19. What Froze the Genetic Code?

    Directory of Open Access Journals (Sweden)

    Lluís Ribas de Pouplana

    2017-04-01

    Full Text Available The frozen accident theory of the Genetic Code was a proposal by Francis Crick that attempted to explain the universal nature of the Genetic Code and the fact that it only contains information for twenty amino acids. Fifty years later, it is clear that variations to the universal Genetic Code exist in nature and that translation is not limited to twenty amino acids. However, given the astonishing diversity of life on earth, and the extended evolutionary time that has taken place since the emergence of the extant Genetic Code, the idea that the translation apparatus is for the most part immobile remains true. Here, we will offer a potential explanation to the reason why the code has remained mostly stable for over three billion years, and discuss some of the mechanisms that allow species to overcome the intrinsic functional limitations of the protein synthesis machinery.

  20. Verification of reactor safety codes

    International Nuclear Information System (INIS)

    Murley, T.E.

    1978-01-01

    The safety evaluation of nuclear power plants requires the investigation of a wide range of potential accidents that could be postulated to occur. Many of these accidents deal with phenomena that are outside the range of normal engineering experience. Because of the expense and difficulty of full-scale tests covering the complete range of accident conditions, it is necessary to rely on complex computer codes to assess these accidents. The central role that computer codes play in safety analyses requires that the codes be verified, or tested, by comparing the code predictions with a wide range of experimental data chosen to span the physical phenomena expected under potential accident conditions. This paper discusses the plans of the Nuclear Regulatory Commission for verifying the reactor safety codes being developed by NRC to assess the safety of light water reactors and fast breeder reactors. (author)

  1. What Froze the Genetic Code?

    Science.gov (United States)

    Ribas de Pouplana, Lluís; Torres, Adrian Gabriel; Rafels-Ybern, Àlbert

    2017-04-05

    The frozen accident theory of the Genetic Code was a proposal by Francis Crick that attempted to explain the universal nature of the Genetic Code and the fact that it only contains information for twenty amino acids. Fifty years later, it is clear that variations to the universal Genetic Code exist in nature and that translation is not limited to twenty amino acids. However, given the astonishing diversity of life on earth, and the extended evolutionary time that has taken place since the emergence of the extant Genetic Code, the idea that the translation apparatus is for the most part immobile remains true. Here, we will offer a potential explanation to the reason why the code has remained mostly stable for over three billion years, and discuss some of the mechanisms that allow species to overcome the intrinsic functional limitations of the protein synthesis machinery.

  2. Tristan code and its application

    Science.gov (United States)

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications, including simulations of the global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of the global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a global geospace general circulation (GGCM) model with predictive capability (for the Space Weather Program) is discussed.

  3. High efficiency video coding coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

    The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at similar compressed bit rates as HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in a clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  4. The PWR spectral code GELS. Pt. 1

    International Nuclear Information System (INIS)

    Penndorf, K.; Schult, F.; Schulz, G.

    1976-01-01

    The code produces group constant libraries for the static PWR design of any fuel cycle - Uranium, Thorium, or Plutonium. The whole range of temperatures is covered and the treatment of strong lumped absorbers such as control or burnable poison pins is included. The main features are: 1) good accuracy in spite of not fitting the material data to critical experiments; 2) speed and relatively modest computer requirements; 3) restriction to PWRs only. In case of demands for higher accuracy there is a further restriction concerning the library data of the epithermal resonance absorbers: they are strictly valid only for several special lattice geometries. Three sample cases are given, each representing a typical application of the code. Two of them are also demonstrations of recalculated experiments. (orig.) [de

  5. Chemistry models in the Victoria code

    International Nuclear Information System (INIS)

    Grimley, A.J. III

    1988-01-01

    The VICTORIA computer code consists of the fission product release and chemistry models for the MELPROG severe accident analysis code. The chemistry models in VICTORIA are used to treat multi-phase interactions in four separate physical regions: fuel grains, gap/open porosity/clad, coolant/aerosols, and structure surfaces. The physical and chemical environment of each region is very different from the others and different models are required for each. The common thread in the modelling is the use of a chemical equilibrium assumption. The validity of this assumption, along with a description of the various physical constraints applicable to each region, will be discussed. The models that result from the assumptions and constraints will be presented, along with samples of calculations in each region.

  6. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Full Text Available Abstract Background Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they code for a protein, they generally escape detection by comparative genomics approaches. Results We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like that proposed here are likely to become increasingly powerful at detecting such elements.
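
    The statistics at the heart of the method are per-column scores (entropy and parsimony) whose posterior distributions are computed under the coding-selection null model. The sketch below only computes the observed entropy of each codon column for an invented toy alignment; it does not implement the mixture of codon substitution models or the posterior calculation.

```python
# Per-column codon entropy for a toy alignment of in-frame coding sequences.
# Columns whose observed score is far below what the coding-selection null
# model predicts are candidates for additional, non-coding selective pressure.
from collections import Counter
from math import log2

alignment = [          # one in-frame coding sequence per species (invented)
    "ATGGCCGAAAGT",
    "ATGGCTGAAAGT",
    "ATGGCAGAGAGT",
    "ATGGCGGAAAGT",
]

def codon_columns(seqs):
    """Yield the list of codons observed at each codon position."""
    length = len(seqs[0])
    for i in range(0, length - length % 3, 3):
        yield [s[i:i+3] for s in seqs]

def entropy(codons):
    """Shannon entropy (bits) of the codon distribution in one column."""
    counts = Counter(codons)
    n = len(codons)
    return -sum(c / n * log2(c / n) for c in counts.values())

for pos, col in enumerate(codon_columns(alignment)):
    print(f"codon {pos}: {col} entropy = {entropy(col):.2f} bits")
# Codon 0 (ATG in every species) has entropy 0; the synonymous variation in
# codons 1 and 2 gives non-zero entropy, as the coding null model expects.
```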

  7. GOSSIP: SED fitting code

    Science.gov (United States)

    Franzetti, Paolo; Scodeggio, Marco

    2012-10-01

    GOSSIP fits the electro-magnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or of a large sample of objects) by combining magnitudes in different bands and, optionally, a spectrum; it then performs a chi-square minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters, such as the star formation history, absolute magnitudes and stellar mass, and their probability distribution functions.
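
    The core fitting step the abstract describes (chi-square minimization of observed fluxes against a grid of synthetic SEDs) can be sketched in a few lines. The fluxes, errors, model grid and the mass-scaling of the best-fit normalization below are invented for illustration and are not GOSSIP's actual inputs or conventions.

```python
# Minimal chi-square SED-fitting sketch: compare observed fluxes with a grid
# of synthetic model SEDs and pick the model with the lowest chi-square.
import numpy as np

# Observed SED: fluxes and 1-sigma errors in a few photometric bands (invented).
obs_flux = np.array([1.2, 2.3, 3.1, 2.8])
obs_err  = np.array([0.1, 0.2, 0.3, 0.3])

# Synthetic model grid: one model SED per row, each tagged with a parameter
# value (a hypothetical stellar mass) that we want to estimate.
models      = np.array([[1.0, 2.0, 3.0, 3.0],
                        [1.3, 2.4, 3.0, 2.7],
                        [0.8, 1.5, 2.2, 2.0]])
model_param = np.array([1.0e10, 2.0e10, 0.5e10])

# Each model is rescaled by its best-fit normalization (analytic for
# chi-square), then the chi-square against the observations is computed.
scale = ((obs_flux * models / obs_err**2).sum(axis=1)
         / ((models / obs_err)**2).sum(axis=1))
chi2 = (((obs_flux - scale[:, None] * models) / obs_err)**2).sum(axis=1)

best = np.argmin(chi2)
print("chi2 per model:", np.round(chi2, 2))
print("best model:", best, "scaled parameter:", model_param[best] * scale[best])

# Turning the full chi2 array into weights exp(-chi2/2) and histogramming the
# parameter values would give the probability distribution functions
# mentioned in the abstract.
```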

  8. Coding for effective denial management.

    Science.gov (United States)

    Miller, Jackie; Lineberry, Joe

    2004-01-01

    Nearly everyone will agree that accurate and consistent coding of diagnoses and procedures is the cornerstone for operating a compliant practice. The CPT or HCPCS procedure code tells the payor what service was performed and also (in most cases) determines the amount of payment. The ICD-9-CM diagnosis code, on the other hand, tells the payor why the service was performed. If the diagnosis code does not meet the payor's criteria for medical necessity, all payment for the service will be denied. Implementation of an effective denial management program can help "stop the bleeding." Denial management is a comprehensive process that works in two ways. First, it evaluates the cause of denials and takes steps to prevent them. Second, denial management creates specific procedures for refiling or appealing claims that are initially denied. Accurate, consistent and compliant coding is key to both of these functions. The process of proactively managing claim denials also reveals a practice's administrative strengths and weaknesses, enabling radiology business managers to streamline processes, eliminate duplicated efforts and shift a larger proportion of the staff's focus from paperwork to servicing patients--all of which are sure to enhance operations and improve practice management and office morale. Accurate coding requires a program of ongoing training and education in both CPT and ICD-9-CM coding. Radiology business managers must make education a top priority for their coding staff. Front office staff, technologists and radiologists should also be familiar with the types of information needed for accurate coding. A good staff training program will also cover the proper use of Advance Beneficiary Notices (ABNs). Registration and coding staff should understand how to determine whether the patient's clinical history meets criteria for Medicare coverage, and how to administer an ABN if the exam is likely to be denied. Staff should also understand the restrictions on use of

  9. FIFPC, a fast ion Fokker--Planck code

    International Nuclear Information System (INIS)

    Fowler, R.H.; Callen, J.D.; Rome, J.A.; Smith, J.

    1976-07-01

    A computer code is described which solves the Fokker--Planck equation for the velocity space distribution of fast ions injected into a tokamak plasma. The numerical techniques are described and use of the code is outlined. The program is written in FORTRAN IV and is modularized in order to provide greater flexibility to the user. A program listing is provided and the results of sample cases are presented

  10. Colors and geometric forms in the work process information coding

    Directory of Open Access Journals (Sweden)

    Čizmić Svetlana

    2006-01-01

    Full Text Available The aim of the research was to establish the meaning of colors and geometric shapes in transmitting information in the work process. A sample of 100 students matched 50 situations that could be associated with regular tasks in the work process to 12 colors and 4 geometric forms in a previously chosen color. Based on the chosen color-geometric shape-situation pairings, the idea of the research was to find regularities in the coding of information, to examine whether those regularities can provide meaningful data assigned to each individual code, and to explain which codes are better and more applicable representations of the examined situations.

  11. Periodic Boundary Conditions in the ALEGRA Finite Element Code

    International Nuclear Information System (INIS)

    Aidun, John B.; Robinson, Allen C.; Weatherby, Joe R.

    1999-01-01

    This document describes the implementation of periodic boundary conditions in the ALEGRA finite element code. ALEGRA is an arbitrary Lagrangian-Eulerian multi-physics code with both explicit and implicit numerical algorithms. The periodic boundary implementation requires a consistent set of boundary input sets which are used to describe virtual periodic regions. The implementation is noninvasive to the majority of the ALEGRA coding and is based on the distributed memory parallel framework in ALEGRA. The technique involves extending the ghost element concept for interprocessor boundary communications in ALEGRA to additionally support on- and off-processor periodic boundary communications. The user interface, algorithmic details and sample computations are given
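
    The essence of the ghost-based periodic boundary treatment can be shown with a one-dimensional toy: ghost cells at each end of the mesh are filled with data from the opposite physical end, so a stencil never sees the boundary. The Python sketch below is a serial illustration of that idea only; ALEGRA applies it to ghost elements exchanged between processors, which is not shown here.

    ```python
    import numpy as np

    # Serial 1D toy of the periodic ghost-cell idea (ALEGRA does the analogous
    # exchange for ghost *elements* across MPI ranks; that machinery is omitted).
    def fill_periodic_ghosts(u, ng=1):
        """`u` carries `ng` ghost cells at each end; copy from the opposite side."""
        u[:ng]  = u[-2 * ng:-ng]    # left ghosts  <- rightmost physical cells
        u[-ng:] = u[ng:2 * ng]      # right ghosts <- leftmost physical cells
        return u

    u = np.zeros(10)
    u[1:-1] = np.arange(1, 9)       # physical cells hold 1..8, one ghost per side
    print(fill_periodic_ghosts(u))  # ghosts now hold 8.0 and 1.0
    ```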

  12. Improved Intra-coding Methods for H.264/AVC

    Directory of Open Access Journals (Sweden)

    Li Song

    2009-01-01

    Full Text Available The H.264/AVC design adopts a multidirectional spatial prediction model to reduce spatial redundancy, where neighboring pixels are used as a prediction for the samples in a data block to be encoded. In this paper, a recursive prediction scheme and an enhanced block-matching algorithm (BMA) prediction scheme are designed and integrated into the state-of-the-art H.264/AVC framework to provide a new intra coding model. Extensive experiments demonstrate that the coding efficiency can be increased by 0.27 dB on average in comparison with the conventional H.264 coding model.
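
    For readers unfamiliar with spatial intra prediction, the toy Python sketch below forms two of the simplest H.264-style predictions for a 4x4 block (DC and horizontal) from the reconstructed neighbouring pixels and compares the residual cost. It is only an illustration of the baseline mechanism; the recursive and block-matching schemes proposed in the paper are not reproduced, and all pixel values are invented.

    ```python
    import numpy as np

    # Toy 4x4 intra prediction (DC and horizontal modes only) from neighbouring
    # reconstructed pixels; invented data, not the paper's recursive/BMA schemes.
    def predict_dc(left, top):
        return np.full((4, 4), round((left.sum() + top.sum()) / 8))

    def predict_horizontal(left, top):
        return np.tile(left.reshape(4, 1), (1, 4))

    left  = np.array([100, 102, 104, 106])   # pixels to the left of the block
    top   = np.array([101, 101, 103, 103])   # pixels above the block
    block = np.arange(16).reshape(4, 4) + 100

    for name, pred in [("DC", predict_dc(left, top)),
                       ("horizontal", predict_horizontal(left, top))]:
        print(name, "SAD =", int(np.abs(block - pred).sum()))
    # The encoder keeps the cheapest mode and transmits only its residual.
    ```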

  13. Application of containment codes to LMFBRs in the United States

    International Nuclear Information System (INIS)

    Chang, Y.W.

    1977-01-01

    This paper describes the application of containment codes to predict the response of the fast reactor containment and the primary piping loops to HCDAs. Five sample problems are given to illustrate their applications. The first problem deals with the response of the primary containment to an HCDA. The second problem deals with the coolant flow in the reactor lower plenum. The third problem concerns sodium spillage and slug impact. The fourth problem deals with the response of a piping loop. The fifth problem analyzes the response of a reactor head closure. Application of codes in parametric studies and comparison of code predictions with experiments are also discussed. (Auth.)

  14. Application of containment codes to LMFBRs in the United States

    International Nuclear Information System (INIS)

    Chang, Y.W.

    1977-01-01

    The application of containment codes to predict the response of the fast reactor containment and the primary piping loops to HCDAs is described. Five sample problems are given to illustrate their applications. The first problem deals with the response of the primary containment to an HCDA. The second problem deals with the coolant flow in the reactor lower plenum. The third problem concerns sodium spillage and slug impact. The fourth problem deals with the response of a piping loop. The fifth problem analyzes the response of a reactor head closure. Application of codes in parametric studies and comparison of code predictions with experiments are also discussed

  15. ESCADRE and ICARE code systems

    International Nuclear Information System (INIS)

    Reocreux, M.; Gauvain, J.

    1992-01-01

    The French severe accident code development program follows two parallel approaches: the first one deals with ''integral codes'', which are designed to give immediate engineering answers; the second one follows a more mechanistic way in order to have the capability of detailed analysis of experiments, to get a better understanding of the scaling problem and to reach a better confidence in plant calculations. In the first approach a complete system has been developed and is being used for practical cases: this is the ESCADRE system. In the second approach, a set of codes dealing first with the primary circuit is being developed: a mechanistic core degradation code, ICARE, has been issued and is being coupled with the advanced thermalhydraulic code CATHARE. Fission product codes have also been coupled to CATHARE. The ''integral'' ESCADRE system and the mechanistic ICARE and associated codes are described. Their main characteristics are reviewed and the status of their development and assessment given. Future studies are finally discussed. 36 refs, 4 figs, 1 tab

  16. The ZPIC educational code suite

    Science.gov (United States)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to be run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
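
    The building blocks that such educational PIC codes expose are quite small; for instance, the leapfrog particle push can be written in a few lines. The Python sketch below advances particles in a prescribed 1D electric field with normalised units; it is a generic textbook fragment, not ZPIC code, and the field and parameters are invented.

    ```python
    import numpy as np

    # Generic leapfrog particle push in a prescribed 1D field (normalised units).
    # A textbook fragment of the PIC loop, not code from the ZPIC suite.
    qm = -1.0                        # charge-to-mass ratio
    dt = 0.1
    E  = lambda x: np.sin(x)         # prescribed electric field

    x = np.array([0.5, 1.0, 2.0])    # particle positions
    v = np.zeros_like(x)             # velocities staggered at t - dt/2

    for step in range(100):
        v += qm * E(x) * dt          # kick: update velocity
        x += v * dt                  # drift: update position

    print(x, v)
    ```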

  17. Stability analysis by ERATO code

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Matsuura, Toshihiko; Azumi, Masafumi; Kurita, Gen-ichi

    1979-12-01

    Problems in MHD stability calculations with the ERATO code are described, concerning the convergence properties of the results, the equilibrium codes, and machine optimization of the ERATO code. It is concluded that irregularity on a convergence curve is not due to a fault of the ERATO code itself but due to an inappropriate choice of the equilibrium calculation meshes. Also described are a code to calculate an equilibrium as a quasi-inverse problem and a code to calculate an equilibrium as the result of a transport process. Optimization of the code with respect to I/O operations reduced both CPU time and I/O time considerably. With the FACOM230-75 APU/CPU multiprocessor system, the performance is about 6 times as high as with the FACOM230-75 CPU, showing the effectiveness of a vector processing computer for this kind of MHD computation. This report is a summary of the material presented at the ERATO workshop 1979 (ORNL), supplemented with some details. (author)

  18. ETR/ITER systems code

    Energy Technology Data Exchange (ETDEWEB)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L. (ed.)

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  19. ETR/ITER systems code

    International Nuclear Information System (INIS)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs

  20. LFSC - Linac Feedback Simulation Code

    International Nuclear Information System (INIS)

    Ivanov, Valentin; Fermilab

    2008-01-01

    The computer program LFSC is a numerical tool for simulation of beam-based feedback in high performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on a timescale corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data etc. The Matlab environment provides a flexible system for graphical output

  1. Coded communications with nonideal interleaving

    Science.gov (United States)

    Laufer, Shaul

    1991-02-01

    Burst error channels - a type of block interference channel - feature increasing capacity but decreasing cutoff rate as the memory rate increases. Despite the large capacity, there is degradation in the performance of practical coding schemes when the memory length is excessive. A short-coding error parameter (SCEP) was introduced, which expresses a bound on the average decoding-error probability for codes shorter than the block interference length. The performance of a coded slow frequency-hopping communication channel is analyzed for worst-case partial band jamming and nonideal interleaving, by deriving expressions for the capacity and cutoff rate. The capacity and cutoff rate, respectively, are shown to approach and depart from those of a memoryless channel corresponding to the transmission of a single code letter per hop. For multiaccess communications over a slot-synchronized collision channel without feedback, the channel was considered as a block interference channel with memory length equal to the number of letters transmitted in each slot. The effects of an asymmetrical background noise and a reduced collision error rate were studied as aspects of real communications. The performance of specific convolutional and Reed-Solomon codes was examined for slow frequency-hopping systems with nonideal interleaving. An upper bound is presented for the performance of a Viterbi decoder for a convolutional code with nonideal interleaving, and a soft decision diversity combining technique is introduced.

  2. Coding and decoding for code division multiple user communication systems

    Science.gov (United States)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.

  3. GAMERA - The New Magnetospheric Code

    Science.gov (United States)

    Lyon, J.; Sorathia, K.; Zhang, B.; Merkin, V. G.; Wiltberger, M. J.; Daldorff, L. K. S.

    2017-12-01

    The Lyon-Fedder-Mobarry (LFM) code has been a main-line magnetospheric simulation code for 30 years. The code base, designed in the age of memory-to-memory vector machines, is still in wide use for science production but needs upgrading to ensure long term sustainability. In this presentation, we will discuss our recent efforts to update and improve that code base and also highlight some recent results. The new project GAMERA, Grid Agnostic MHD for Extended Research Applications, has kept the original design characteristics of the LFM and made significant improvements. The original design included high order numerical differencing with very aggressive limiting, the ability to use arbitrary, but logically rectangular, grids, and maintenance of div B = 0 through the use of the Yee grid. Significant improvements include high-order upwinding and a non-clipping limiter. One other improvement with wider applicability is an improved averaging technique for the singularities in polar and spherical grids. The new code adopts a hybrid structure - multi-threaded OpenMP with an overarching MPI layer for large scale and coupled applications. The MPI layer uses a combination of standard MPI and the Global Array Toolkit from PNL to provide a lightweight mechanism for coupling codes together concurrently. The single processor code is highly efficient and can run magnetospheric simulations at the default CCMC resolution faster than real time on a MacBook Pro. We have run the new code through the Athena suite of tests, and the results compare favorably with the codes available to the astrophysics community. LFM/GAMERA has been applied to many different situations ranging from the inner and outer heliosphere to the magnetospheres of Venus, the Earth, Jupiter and Saturn. We present example results for the Earth's magnetosphere including a coupled ring current (RCM), the magnetospheres of Jupiter and Saturn, and the inner heliosphere.

  4. SCALE Code System

    Energy Technology Data Exchange (ETDEWEB)

    Jessee, Matthew Anderson [ORNL

    2016-04-01

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features. New capabilities include:
    • ENDF/B-VII.1 nuclear data libraries CE and MG with enhanced group structures,
    • Neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data,
    • Covariance data for fission product yields and decay constants,
    • Stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler,
    • Parallel calculations with KENO,
    • Problem-dependent temperature corrections for CE calculations,
    • CE shielding and criticality accident alarm system analysis with MAVRIC,
    • CE

  5. Code-Mixing and Code Switching in The Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

    Full Text Available This study aimed to describe the specific forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as the factors that determine the prominence of those forms of code switching and code mixing. The research takes the form of a descriptive qualitative case study which took place in Al Mawaddah Boarding School Ponorogo. Based on the analysis and discussion stated in the previous chapters, the forms of code mixing and code switching in learning activities at Al Mawaddah Boarding School occur between the Javanese, Arabic, English and Indonesian languages, through the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The deciding factors for code mixing in the learning process include: identification of the role, the desire to explain and interpret, sourcing from the original language and its variations, and sourcing from a foreign language. The deciding factors for code switching in the learning process include: the speaker (O1), the speech partner (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially in Al Mawaddah boarding school, regarding the rules and characteristic variations in the language of teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and the effectiveness of teaching and learning strategies in boarding schools.

  6. Color-Coding Politics

    Directory of Open Access Journals (Sweden)

    Benjamin Gross

    2013-02-01

    Full Text Available During the 2000 Presidential election between George W. Bush and Al Gore, journalists often used the terms blue states and red states to describe the political landscape within the United States. This article studies the framing of these terms during the years 2004 through 2007. Using latent and manifest qualitative content analyses, six different news media frames were found in a sample of 337 newspaper articles. Two hypotheses were also tested indicating that framing patterns varied slightly by time period and article types. However, the argument that increased levels of political polarization in the United States have been created by predominantly conflict-oriented coverage may not be true. Instead, these terms became journalistic heuristics that were used to organize how people think about politics in a way that fit with contemporary media practices, and there is no single agreed upon interpretation of these terms within this reporting.

  7. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.
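
    The classical data such a decoder consumes are stabilizer syndromes, i.e. parity checks evaluated on the error pattern. The Python sketch below computes a syndrome for a tiny, made-up check matrix; it is not the two-dimensional toric code layout, and the Boltzmann-machine decoder described in the paper (which maps syndromes to recovery operations) is not reproduced.

    ```python
    import numpy as np

    # Toy syndrome extraction: each stabilizer is a parity check on the error
    # pattern. The 3x4 check matrix below is made up (not the 2D toric code);
    # the paper's Boltzmann-machine decoder would act on data of this kind.
    H = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1]])      # rows: stabilizer supports

    error = np.array([0, 1, 0, 0])    # a single flip on qubit 1
    syndrome = H @ error % 2
    print("syndrome:", syndrome)      # [1 1 0]: the two checks touching qubit 1 fire
    ```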

  8. Coding chaotic billiards. Pt. 3

    International Nuclear Information System (INIS)

    Ullmo, D.; Giannoni, M.J.

    1993-01-01

    A non-tiling compact billiard defined on the pseudosphere is studied 'a la Morse coding'. As for most bounded systems, the coding is not exact. However, two sets of approximate grammar rules can be obtained, one specifying forbidden codes and the other allowed ones. In between, some sequences remain in the 'unknown' zone, but their relative amount can be reduced to zero if one lets the length of the approximate grammar rules go to infinity. The relationship between these approximate grammar rules and the 'pruning front' introduced by Cvitanovic et al. is discussed. (authors). 13 refs., 10 figs., 1 tab

  9. Iterative nonlinear unfolding code: TWOGO

    International Nuclear Information System (INIS)

    Hajnal, F.

    1981-03-01

    A new iterative unfolding code, TWOGO, was developed to analyze Bonner sphere neutron measurements. The code includes two different unfolding schemes which alternate on successive iterations. The iterative process can be terminated either when the ratio of the coefficients of variation in terms of the measured and calculated responses is unity, or when the percentage difference between the measured and evaluated sphere responses is less than the average measurement error. The code was extensively tested with various known spectra and real multisphere neutron measurements which were performed inside the containments of pressurized water reactors
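
    To make the iterative idea concrete, the Python sketch below runs a generic multiplicative unfolding loop (a simplified, MLEM-like update) on an invented 3x3 response matrix, stopping when the recalculated sphere responses agree with the measurements to about 1%. It is not the TWOGO algorithm or its alternating two-scheme structure, only an illustration of the kind of loop and stopping rule described.

    ```python
    import numpy as np

    # Generic multiplicative iterative unfolding (MLEM-like), NOT the TWOGO code:
    # R[i, j] = response of sphere i to unit fluence in energy bin j (made up).
    R = np.array([[0.9, 0.4, 0.1],
                  [0.3, 0.8, 0.3],
                  [0.1, 0.4, 0.9]])
    measured = np.array([10.0, 12.0, 8.0])       # made-up sphere readings

    phi = np.ones(3)                              # initial spectrum guess
    for it in range(200):
        calc = R @ phi
        phi *= (R.T @ (measured / calc)) / R.sum(axis=0)   # multiplicative update
        if np.max(np.abs(R @ phi - measured) / measured) < 0.01:
            break                                 # ~1% agreement reached

    print(it, phi, R @ phi)
    ```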

  10. Atlas C++ Coding Standard Specification

    CERN Document Server

    Albrand, S; Barberis, D; Bosman, M; Jones, B; Stavrianakou, M; Arnault, C; Candlin, D; Candlin, R; Franck, E; Hansl-Kozanecka, Traudl; Malon, D; Qian, S; Quarrie, D; Schaffer, R D

    2001-01-01

    This document defines the ATLAS C++ coding standard, that should be adhered to when writing C++ code. It has been adapted from the original "PST Coding Standard" document (http://pst.cern.ch/HandBookWorkBook/Handbook/Programming/programming.html) CERN-UCO/1999/207. The "ATLAS standard" comprises modifications, further justification and examples for some of the rules in the original PST document. All changes were discussed in the ATLAS Offline Software Quality Control Group and feedback from the collaboration was taken into account in the "current" version.

  11. Writing the Live Coding Book

    DEFF Research Database (Denmark)

    Blackwell, Alan; Cox, Geoff; Lee, Sang Wong

    2016-01-01

    This paper is a speculation on the relationship between coding and writing, and the ways in which technical innovations and capabilities enable us to rethink each in terms of the other. As a case study, we draw on recent experiences of preparing a book on live coding, which integrates a wide range of personal, historical, technical and critical perspectives. This book project has been both experimental and reflective, in a manner that allows us to draw on critical understanding of both code and writing, and point to the potential for new practices in the future.

  12. LiveCode mobile development

    CERN Document Server

    Lavieri, Edward D

    2013-01-01

    A practical guide written in a tutorial style, "LiveCode Mobile Development Hotshot" walks you step-by-step through 10 individual projects. Every project is divided into sub tasks to make learning more organized and easy to follow along with explanations, diagrams, screenshots, and downloadable material. This book is great for anyone who wants to develop mobile applications using LiveCode. You should be familiar with LiveCode and have access to a smartphone. You are not expected to know how to create graphics or audio clips.

  13. Network Coding Fundamentals and Applications

    CERN Document Server

    Medard, Muriel

    2011-01-01

    Network coding is a field of information and coding theory and is a method of attaining maximum information flow in a network. This book is an ideal introduction for the communications and network engineer, working in research and development, who needs an intuitive introduction to network coding and wishes to understand the increased performance and reliability it offers in many applications.
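
    The canonical example behind network coding is the butterfly network: a bottleneck node forwards the XOR of two packets, and each sink combines that coded packet with the message it already received to recover both. A minimal Python illustration:

    ```python
    # Butterfly-network illustration of network coding with two 4-bit messages.
    a, b = 0b1011, 0b0110      # packets from the two sources

    mixed = a ^ b              # coded packet sent over the shared bottleneck link

    # Sink 1 already has `a`, sink 2 already has `b`; both combine with `mixed`.
    b_at_sink1 = a ^ mixed
    a_at_sink2 = b ^ mixed
    print(b_at_sink1 == b, a_at_sink2 == a)   # True True: both sinks get both messages
    ```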

  14. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  15. Tree Coding of Bilevel Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    Presently, sequential tree coders are the best general purpose bilevel image coders and the best coders of halftoned images. The current ISO standard, Joint Bilevel Image Experts Group (JBIG), is a good example. A sequential tree coder encodes the data by feeding estimates of conditional...... is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free tree coding scheme produces superior compression results for all test images. A multipass free template coding scheme produces significantly better results than JBIG for difficult...... images such as halftones. By utilizing randomized subsampling in the template selection, the speed becomes acceptable for practical image coding...

  16. Studies on DANESS Code Modeling

    International Nuclear Information System (INIS)

    Jeong, Chang Joon

    2009-09-01

    The DANESS code modeling study has been performed. The DANESS code is widely used in dynamic fuel cycle analysis. The Korea Atomic Energy Research Institute (KAERI) has used the DANESS code for the Korean national nuclear fuel cycle scenario analysis. In this report, the important models, such as the Energy-demand Scenario Model, the New Reactor Capacity Decision Model, the Reactor and Fuel Cycle Facility History Model, and the Fuel Cycle Model, are investigated. Some models in the interface module are refined and inserted for the Korean nuclear fuel cycle model. Some application studies have also been performed for GNEP cases and for US fast reactor scenarios with various conversion ratios

  17. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    Science.gov (United States)

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been previously proposed. To implement the method smoothly, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
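
    As described above, the calibration is applied by measuring the peak efficiency with a point source placed at the representative point and then multiplying by the code's self-absorption correction factor. The sketch below shows that bookkeeping with hypothetical numbers; both values and the sample geometry are invented, not outputs of CREPT-MCNP.

    ```python
    # Applying a representative-point calibration (all numbers are hypothetical).
    eff_point = 0.032   # measured full-energy-peak efficiency with a point source
                        # placed at the representative point for this sample shape
    f_selfabs = 0.91    # self-absorption correction factor for the volume sample

    eff_volume = eff_point * f_selfabs
    print(f"estimated volume-sample efficiency: {eff_volume:.4f}")
    ```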

  18. Beam-dynamics codes used at DARHT

    Energy Technology Data Exchange (ETDEWEB)

    Ekdahl, Jr., Carl August [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-01

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  19. Allegheny County Zip Code Boundaries

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset demarcates the zip code boundaries that lie within Allegheny County. If viewing this description on the Western Pennsylvania Regional Data Center’s open...

  20. Ultrasound imaging using coded signals

    DEFF Research Database (Denmark)

    Misaridis, Athanasios

    Modulated (or coded) excitation signals can potentially improve the quality and increase the frame rate in medical ultrasound scanners. The aim of this dissertation is to investigate systematically the applicability of modulated signals in medical ultrasound imaging and to suggest appropriate methods for coded imaging, with the goal of making better anatomic and flow images and three-dimensional images. On the first stage, it investigates techniques for doing high-resolution coded imaging with improved signal-to-noise ratio compared to conventional imaging. Subsequently it investigates how coded excitation can be used for increasing the frame rate. The work includes both simulated results using Field II, and experimental results based on measurements on phantoms as well as clinical images. Initially a mathematical foundation of signal modulation is given. Pulse compression based...
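
    The pulse-compression idea mentioned at the end can be demonstrated with a linear FM chirp and a matched filter: correlating the received trace with the transmitted code compresses the long coded pulse back into a sharp peak at the echo delay. The Python sketch below is a generic simulation with illustrative parameters, not taken from the dissertation.

    ```python
    import numpy as np

    # Pulse compression of a linear FM chirp with a matched filter
    # (illustrative parameters, not from the dissertation).
    fs = 100e6                           # sample rate: 100 MHz
    t  = np.arange(0, 10e-6, 1 / fs)     # 10 us excitation
    f0, f1 = 2e6, 8e6                    # frequency sweep 2 -> 8 MHz
    chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * t[-1]) * t**2))

    echo = np.zeros(4000)                # received trace: delayed, weak echo + noise
    delay = 1500
    echo[delay:delay + chirp.size] += 0.2 * chirp
    echo += 0.05 * np.random.randn(echo.size)

    compressed = np.correlate(echo, chirp, mode="valid")   # matched filter
    print("peak at", int(np.argmax(np.abs(compressed))), "samples; true delay", delay)
    ```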

  1. National Tribal Building Codes Summit

    Science.gov (United States)

    A National Tribal Building Codes Summit statement was developed to support tribes interested in adopting green and culturally-appropriate building systems to ensure safe, sustainable, affordable, and culturally-appropriate buildings on tribal lands.

  2. Tracking Code for Microwave Instability

    International Nuclear Information System (INIS)

    Heifets, S.; SLAC

    2006-01-01

    To study the microwave instability, a tracking code has been developed. For benchmarking, results are compared with the Oide-Yokoya results [1] for a broad-band Q = 1 impedance. The results hint at two possible mechanisms determining the threshold of instability

  3. CRUCIB: an axisymmetric convection code

    International Nuclear Information System (INIS)

    Bertram, L.A.

    1975-03-01

    The CRUCIB code was written in support of an experimental program aimed at measurement of thermal diffusivities of refractory liquids. Precise values of diffusivity are necessary for realistic analysis of reactor safety problems, nuclear waste disposal procedures, and fundamental metal forming processes. The code calculates the axisymmetric transient convective motions produced in a right circular cylindrical crucible, which is surface heated by an annular heat pulse. Emphasis of this report is placed on the input-output options of the CRUCIB code, which are tailored to assess the importance of convective heat transfer in determining the surface temperature distribution. Use is limited to Prandtl numbers less than unity; larger values can be accommodated by replacement of a single block of the code, if desired. (U.S.)

  4. QUIL: a chemical equilibrium code

    International Nuclear Information System (INIS)

    Lunsford, J.L.

    1977-02-01

    A chemical equilibrium code QUIL is described, along with two support codes FENG and SURF. QUIL is designed to allow calculations on a wide range of chemical environments, which may include surface phases. QUIL was written specifically to calculate distributions associated with complex equilibria involving fission products in the primary coolant loop of the high-temperature gas-cooled reactor. QUIL depends upon an energy-data library called ELIB. This library is maintained by FENG and SURF. FENG enters into the library all reactions having standard free energies of reaction that are independent of concentration. SURF enters all surface reactions into ELIB. All three codes are interactive codes written to be used from a remote terminal, with paging control provided. Plotted output is also available

  5. Electronic Code of Federal Regulations

    Data.gov (United States)

    National Archives and Records Administration — The Electronic Code of Federal Regulations (e-CFR) is the codification of the general and permanent rules published in the Federal Register by the executive...

  6. Zip Codes - MDC_WCSZipcode

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — The WCSZipcode polygon feature class was created by Miami-Dade Enterprise Technology Department to be used in the WCS batch jobs to assign the actual zip code of...

  7. The intercomparison of aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.; Fermandjian, J.; Gauvain, J.

    1988-01-01

    The behavior of aerosols in a reactor containment vessel following a severe accident could be an important determinant of the accident source term to the environment. Various processes result in the deposition of aerosols onto surfaces within the containment, from which they are much less likely to be released. Some of these processes are very sensitive to particle size, so it is important to model the aerosol growth processes: agglomeration and condensation. A number of computer codes have been written to model growth and deposition processes. They have been tested against each other in a series of code comparison exercises. These exercises have investigated sensitivities to physical and numerical assumptions and have also proved a useful means of quality control for the codes. Various exercises in which code predictions are compared with experimental results are now under way

  8. Automatic code generation in practice

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-specific language to specify those requirements and to allow for generating a safety-enforcing layer of code, which is deployed to the robot. The paper at hand reports experiences in practically applying code generation to mobile robots. For two cases, we discuss how we addressed challenges, e.g., regarding weaving code generation into proprietary development environments and testing of manually written code. We find that a DSL based on the same conceptual model can be used across different kinds of hardware modules, but a significant adaptation effort is required in practical scenarios involving different kinds...

  9. Squares of Random Linear Codes

    DEFF Research Database (Denmark)

    Cascudo Pueyo, Ignacio; Cramer, Ronald; Mirandola, Diego

    2015-01-01

    Given a linear code $C$, one can define the $d$-th power of $C$ as the span of all componentwise products of $d$ elements of $C$. A power of $C$ may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code ``typically'' fill the whole space? We give a positive answer, for codes of dimension $k$ and length roughly $\frac{1}{2}k^2$ or smaller. Moreover, the convergence speed is exponential if the difference $k(k+1)/2-n$ is at least linear in $k$. The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise...

  10. Cost reducing code implementation strategies

    International Nuclear Information System (INIS)

    Kurtz, Randall L.; Griswold, Michael E.; Jones, Gary C.; Daley, Thomas J.

    1995-01-01

    Sargent and Lundy's Code consulting experience reveals a wide variety of approaches toward implementing the requirements of various nuclear Codes and Standards. This paper will describe various Code implementation strategies which assure that Code requirements are fully met in a practical and cost-effective manner. Applications to be discussed include the following: new construction; repair, replacement and modifications; assessments and life extensions. Lessons learned and illustrative examples will be included. Preferred strategies and specific recommendations will also be addressed. Sargent and Lundy appreciates the opportunity provided by the Korea Atomic Industrial Forum and Korean Nuclear Society to share our ideas and enhance global cooperation through the exchange of information and views on relevant topics

  11. Adaptive decoding of convolutional codes

    Directory of Open Access Journals (Sweden)

    K. Hueske

    2007-06-01

    Full Text Available Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
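
    For reference, the sketch below is a minimal hard-decision Viterbi decoder for the classic rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal. It is a textbook illustration of the maximum-likelihood decoding discussed above, not the syndrome-based decoder proposed in the paper, and the trellis is left unterminated for brevity.

    ```python
    # Hard-decision Viterbi decoding of the rate-1/2, K=3 convolutional code with
    # octal generators (7, 5): a textbook sketch, not the paper's syndrome decoder.
    def encode(bits, state=0):
        out = []
        for b in bits:
            s1, s2 = (state >> 1) & 1, state & 1
            out += [b ^ s1 ^ s2, b ^ s2]          # generators 111 and 101
            state = ((b << 1) | s1) & 0b11
        return out

    def viterbi(received):
        INF = 10**9
        metric = [0, INF, INF, INF]               # start in the all-zero state
        paths = [[] for _ in range(4)]
        for k in range(len(received) // 2):
            r = received[2 * k: 2 * k + 2]
            new_metric, new_paths = [INF] * 4, [None] * 4
            for state in range(4):
                if metric[state] >= INF:
                    continue
                s1, s2 = (state >> 1) & 1, state & 1
                for b in (0, 1):
                    expected = [b ^ s1 ^ s2, b ^ s2]
                    nxt = ((b << 1) | s1) & 0b11
                    m = metric[state] + (expected[0] != r[0]) + (expected[1] != r[1])
                    if m < new_metric[nxt]:
                        new_metric[nxt], new_paths[nxt] = m, paths[state] + [b]
            metric, paths = new_metric, new_paths
        return paths[min(range(4), key=lambda s: metric[s])]

    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    coded = encode(msg)
    coded[3] ^= 1                                  # inject one channel error
    decoded = viterbi(coded)
    print("decoded:", decoded, "ok:", decoded == msg)
    ```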

  12. Covariance data processing code. ERRORJ

    International Nuclear Information System (INIS)

    Kosako, Kazuaki

    2001-01-01

    The covariance data processing code, ERRORJ, was developed to process the covariance data of JENDL-3.2. ERRORJ has the processing functions of covariance data for cross sections including resonance parameters, angular distribution and energy distribution. (author)

  13. Multimedia signal coding and transmission

    CERN Document Server

    Ohm, Jens-Rainer

    2015-01-01

    This textbook covers the theoretical background of one- and multidimensional signal processing, statistical analysis and modelling, coding and information theory with regard to the principles and design of image, video and audio compression systems. The theoretical concepts are augmented by practical examples of algorithms for multimedia signal coding technology, and related transmission aspects. On this basis, principles behind multimedia coding standards, including most recent developments like High Efficiency Video Coding, can be well understood. Furthermore, potential advances in future development are pointed out. Numerous figures and examples help to illustrate the concepts covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by exercises. The book is also a self-contained introduction both for researchers and developers of multimedia compression systems in industry.

  14. NESTLE: A nodal kinetics code

    International Nuclear Information System (INIS)

    Al-Chalabi, R.M.; Turinsky, P.J.; Faure, F.-X.; Sarsour, H.N.; Engrand, P.R.

    1993-01-01

    The NESTLE nodal kinetics code has been developed for utilization as a stand-alone code for steady-state and transient reactor neutronic analysis and for incorporation into system transient codes, such as TRAC and RELAP. The latter is desirable to increase the simulation fidelity over that obtained from currently employed zero- and one-dimensional neutronic models and now feasible due to advances in computer performance and efficiency of nodal methods. As a stand-alone code, requirements are that it operate on a range of computing platforms from memory-limited personal computers (PCs) to supercomputers with vector processors. This paper summarizes the features of NESTLE that reflect the utilization and requirements just noted

  15. Description of the COMRADEX code

    International Nuclear Information System (INIS)

    Spangler, G.W.; Boling, M.; Rhoades, W.A.; Willis, C.A.

    1967-01-01

    The COMRADEX Code is discussed briefly and instructions are provided for the use of the code. The subject code was developed for calculating doses from hypothetical power reactor accidents. It permits the user to analyze four successive levels of containment with time-varying leak rates. Filtration, cleanup, fallout and plateout in each containment shell can also be analyzed. The doses calculated include the direct gamma dose from the containment building, the internal doses to as many as 14 organs including the thyroid, bone, lung, etc. from inhaling the contaminated air, and the external gamma doses from the cloud. While further improvements are needed, such as a provision for calculating doses from fallout, rainout and washout, the present code capabilities have a wide range of applicability for reactor accident analysis

  16. Adaptive decoding of convolutional codes

    Science.gov (United States)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  17. The EISCAT meteor code

    Directory of Open Access Journals (Sweden)

    G. Wannberg

    2008-08-01

    Full Text Available The EISCAT UHF system has the unique capability to determine meteor vector velocities from the head echo Doppler shifts measured at the three sites. Since even meteors spending a very short time in the common volume produce analysable events, the technique lends itself ideally to mapping the orbits of meteors arriving from arbitrary directions over most of the upper hemisphere. A radar mode optimised for this application was developed in 2001/2002. A specially selected low-sidelobe 32-bit pseudo-random binary sequence is used to binary phase shift key (BPSK) the transmitted carrier. The baud-length is 2.4 μs and the receiver bandwidth is 1.6 MHz to accommodate both the resulting modulation bandwidth and the target Doppler shift. Sampling is at 0.6 μs, corresponding to 90-m range resolution. Target range and Doppler velocity are extracted from the raw data in a multi-step matched-filter procedure. For strong (SNR>5) events the Doppler velocity standard deviation is 100–150 m/s. The effective range resolution is about 30 m, allowing very accurate time-of-flight velocity estimates. On average, Doppler and time-of-flight (TOF) velocities agree to within about one part in 10³. Two or more targets simultaneously present in the beam can be resolved down to a range separation <300 m as long as their Doppler shifts differ by more than a few km/s.
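
    The quoted numbers are internally consistent and easy to check: for a monostatic radar the range resolution set by the 0.6 μs sampling is c·t_s/2 ≈ 90 m, and 32 bauds of 2.4 μs give a 76.8 μs coded pulse. A quick check in Python:

    ```python
    # Quick arithmetic check of the figures quoted above.
    c = 299_792_458.0                 # speed of light, m/s
    t_s = 0.6e-6                      # sampling interval, s
    print("range resolution:", c * t_s / 2, "m")             # ~90 m, as stated

    t_baud = 2.4e-6                   # baud length of the 32-bit BPSK code
    print("coded pulse length:", 32 * t_baud * 1e6, "us")    # 76.8 us
    ```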

  18. The Minimum Distance of Graph Codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2011-01-01

    We study codes constructed from graphs where the code symbols are associated with the edges and the symbols connected to a given vertex are restricted to be codewords in a component code. In particular we treat such codes from bipartite expander graphs coming from Euclidean planes and other geometries. We give results on the minimum distances of the codes.

  19. Verification of ONED90 code

    International Nuclear Information System (INIS)

    Chang, Jong Hwa; Lee, Ki Bog; Zee, Sung Kyun; Lee, Chang Ho

    1993-12-01

    ONED90, developed by KAERI, is a 1-dimensional 2-group diffusion theory code. For nuclear design and reactor simulation, the usage of ONED90 encompasses core follow calculation, load follow calculation, plant power control simulation, xenon oscillation simulation and control rod maneuvering, etc. In order to verify the validity of the ONED90 code, two well-known benchmark problems were solved; ONED90 shows very similar results to the reference solutions. (Author) 11 refs., 5 figs., 13 tabs

  20. Summary of Code of Ethics.

    Science.gov (United States)

    Eklund, Kerri

    2016-01-01

    The Guide to the Code of Ethics for Nurses is an excellent guideline for all nurses regardless of their area of practice. I greatly enjoyed reading the revisions in place within the 2015 edition and refreshing my nursing conscience. I plan to always keep my Guide to the Code of Ethics for Nurses near in order to keep my moral compass from veering off the path of quality care.

  1. UNIX code management and distribution

    International Nuclear Information System (INIS)

    Hung, T.; Kunz, P.F.

    1992-09-01

    We describe a code management and distribution system based on tools freely available for the UNIX systems. At the master site, version control is managed with CVS, which is a layer on top of RCS, and distribution is done via NFS mounted file systems. At remote sites, small modifications to CVS provide for interactive transactions with the CVS system at the master site such that remote developers are true peers in the code development process

  2. Language Recognition via Sparse Coding

    Science.gov (United States)

    2016-09-08

    explanation is that sparse coding can achieve a near-optimal approximation of much complicated nonlinear relationship through local and piecewise linear...training examples, where x(i) ∈ RN is the ith example in the batch. Optionally, X can be normalized and whitened before sparse coding for better result...normalized input vectors are then ZCA-whitened [20]. Empirically, we choose ZCA-whitening over PCA-whitening, and there is no dimensionality reduction
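
    The snippets above refer to sparse coding of whitened feature vectors against a dictionary. A generic way to compute such codes is iterative soft thresholding (ISTA); the Python sketch below codes one random vector against a random unit-norm dictionary. It is only a minimal illustration of sparse coding itself, not the language-recognition pipeline of the report, and all sizes and data are invented.

    ```python
    import numpy as np

    # Minimal ISTA sparse coding of one vector against a fixed random dictionary
    # (generic illustration; not the report's language-recognition pipeline).
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
    x = rng.standard_normal(64)               # one (whitened) input vector

    lam = 0.1
    step = 1.0 / np.linalg.norm(D, 2) ** 2    # 1 / Lipschitz constant of the gradient
    a = np.zeros(256)
    for _ in range(200):
        a = a - step * (D.T @ (D @ a - x))                        # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold

    print("nonzero coefficients:", int((a != 0).sum()), "of", a.size)
    ```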

  3. System Based Code: Principal Concept

    International Nuclear Information System (INIS)

    Yasuhide Asada; Masanori Tashimo; Masahiro Ueta

    2002-01-01

    This paper introduces the concept of the 'System Based Code', which was initially proposed by the authors with the intention of giving the nuclear industry a leap of progress in system reliability, performance improvement, and cost reduction. The concept of the System Based Code is intended to give a theoretical procedure to optimize the reliability of the system by administering every related engineering requirement throughout the life of the system, from design to decommissioning. (authors)

  4. Development of Evaluation Code for MUF Uncertainty

    International Nuclear Information System (INIS)

    Won, Byung Hee; Han, Bo Young; Shin, Hee Sung; Ahn, Seong-Kyu; Park, Geun-Il; Park, Se Hwan

    2015-01-01

    Material Unaccounted For (MUF) is the material balance evaluated from the measured nuclear material in a Material Balance Area (MBA). Assuming perfect measurements and no diversion from a facility, one can expect a zero MUF. However, a non-zero MUF always occurs because of measurement uncertainty, even when the facility is under normal operating conditions. Furthermore, there are many measurements using different equipment at various Key Measurement Points (KMPs), and the MUF uncertainty is affected by the errors of those measurements. Evaluating the MUF uncertainty is essential for developing the safeguards system, including the nuclear measurement system, for pyroprocessing, which is being developed at the Korea Atomic Energy Research Institute (KAERI) to reduce radioactive waste from spent fuel. An evaluation code for analyzing MUF uncertainty has been developed and verified using a sample problem from an IAEA reference. The MUF uncertainty can be calculated simply and quickly with this evaluation code, which is built on a graphical user interface to be user friendly. It is also expected that the code will make sensitivity analyses of the MUF uncertainty for various safeguards systems easier and more systematic. It is suitable for users who want to evaluate a conventional safeguards system as well as to develop a new system for developing facilities
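
    The quantity itself and the first-order propagation of measurement errors are standard: MUF is the book ending inventory minus the measured ending inventory, and, for independent errors, its variance is the sum of the variances of the balance terms. The Python sketch below shows that bookkeeping with hypothetical numbers; it is not the KAERI code, which additionally treats the many correlated measurements across KMPs.

    ```python
    import math

    # Material balance and first-order error propagation (hypothetical numbers,
    # independent errors assumed; NOT the KAERI evaluation code).
    # MUF = (beginning inventory + receipts) - (ending inventory + shipments)
    terms = {                       # kg of material: (value, 1-sigma measurement error)
        "beginning_inventory": (100.0, 0.5),
        "receipts":            ( 40.0, 0.3),
        "ending_inventory":    (105.0, 0.5),
        "shipments":           ( 34.8, 0.3),
    }

    muf = (terms["beginning_inventory"][0] + terms["receipts"][0]
           - terms["ending_inventory"][0] - terms["shipments"][0])
    sigma_muf = math.sqrt(sum(err ** 2 for _, err in terms.values()))
    print(f"MUF = {muf:.2f} kg, sigma_MUF = {sigma_muf:.2f} kg")
    ```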

  5. Development of Evaluation Code for MUF Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Won, Byung Hee; Han, Bo Young; Shin, Hee Sung; Ahn, Seong-Kyu; Park, Geun-Il; Park, Se Hwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    Material Unaccounted For (MUF) is the material balance evaluated from the measured nuclear material in a Material Balance Area (MBA). Assuming perfect measurements and no diversion from a facility, one can expect a zero MUF. However, a non-zero MUF always occurs because of measurement uncertainty, even when the facility is under normal operating conditions. Furthermore, there are many measurements using different equipment at various Key Measurement Points (KMPs), and the MUF uncertainty is affected by the errors of those measurements. Evaluating the MUF uncertainty is essential for developing the safeguards system, including the nuclear measurement system, for pyroprocessing, which is being developed at the Korea Atomic Energy Research Institute (KAERI) to reduce radioactive waste from spent fuel. An evaluation code for analyzing MUF uncertainty has been developed and verified using a sample problem from an IAEA reference. The MUF uncertainty can be calculated simply and quickly with this evaluation code, which is built on a graphical user interface to be user friendly. It is also expected that the code will make sensitivity analyses of the MUF uncertainty for various safeguards systems easier and more systematic. It is suitable for users who want to evaluate a conventional safeguards system as well as to develop a new system for developing facilities.

  6. SRAC95; general purpose neutronics code system

    International Nuclear Information System (INIS)

    Okumura, Keisuke; Tsuchihashi, Keichiro; Kaneko, Kunio.

    1996-03-01

    SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made for nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2) and five modular codes integrated into SRAC95: a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN), diffusion calculation modules (TUD, CITATION) and two optional codes for fuel assembly and core burn-up calculations (the newly developed ASMBURN and the revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 users manual, which contains a general description, the contents of revisions, input data requirements, detailed information on usage, sample input data and a list of available libraries. (author)

  7. SRAC95; general purpose neutronics code system

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke; Tsuchihashi, Keichiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Kunio

    1996-03-01

    SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made to the nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2), five modular codes integrated into SRAC95, namely a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN) and diffusion calculation modules (TUD, CITATION), and two optional codes for fuel assembly and core burn-up calculations (the newly developed ASMBURN and the revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 user's manual, which contains a general description, contents of revisions, input data requirements, detailed information on usage, sample input data and a list of available libraries. (author).

  8. The RETRAN-03 computer code

    International Nuclear Information System (INIS)

    Paulsen, M.P.; McFadden, J.H.; Peterson, C.E.; McClure, J.A.; Gose, G.C.; Jensen, P.J.

    1991-01-01

    The RETRAN-03 code development effort is designed to overcome the major theoretical and practical limitations associated with the RETRAN-02 computer code. The major objectives of the development program are to extend the range of analyses that can be performed with RETRAN, to make the code more dependable and faster running, and to make the code more transportable. The first two objectives are accomplished by developing new models and adding other models to the RETRAN-02 base code. The major model additions for RETRAN-03 are as follows: implicit solution methods for the steady-state and transient forms of the field equations; additional options for the velocity difference equation; a new steady-state initialization option for computing low-power steam generator initial conditions; models for nonequilibrium thermodynamic conditions; and several special-purpose models. The source code and the environmental library for RETRAN-03 are written in standard FORTRAN 77, which allows the last objective to be fulfilled. Some models in RETRAN-02 have been deleted in RETRAN-03. In this paper the changes between RETRAN-02 and RETRAN-03 are reviewed.

  9. Fuel performance analysis code 'FAIR'

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1994-01-01

    For modelling nuclear reactor fuel rod behaviour of water cooled reactors under severe power maneuvering and high burnups, a mechanistic fuel performance analysis code FAIR has been developed. The code incorporates a finite element based thermomechanical module, a physically based fission gas release module and relevant models for fuel-related phenomena, such as pellet cracking, densification and swelling, radial flux redistribution across the pellet due to the build-up of plutonium near the pellet surface, and pellet clad mechanical interaction/stress corrosion cracking (PCMI/SCC) failure of the sheath. The code follows the established principles of fuel rod analysis programmes, such as coupling of the thermal and mechanical solutions along with the fission gas release calculations, analysing different axial segments of the fuel rod simultaneously, and providing means for performing local analyses such as clad ridging analysis. The modular nature of the code offers flexibility in making modifications easily, for example for modelling MOX fuels and thorium based fuels. For performing analysis of fuel rods subjected to very long power histories within a reasonable amount of time, the code has been parallelised and commissioned on the ANUPAM parallel processing system developed at Bhabha Atomic Research Centre (BARC). (author). 37 refs

  10. The EISCAT meteor code

    Directory of Open Access Journals (Sweden)

    G. Wannberg

    2008-08-01

    The EISCAT UHF system has the unique capability to determine meteor vector velocities from the head echo Doppler shifts measured at the three sites. Since even meteors spending a very short time in the common volume produce analysable events, the technique lends itself ideally to mapping the orbits of meteors arriving from arbitrary directions over most of the upper hemisphere.

    A radar mode optimised for this application was developed in 2001/2002. A specially selected low-sidelobe 32-bit pseudo-random binary sequence is used to binary phase shift key (BPSK) the transmitted carrier. The baud-length is 2.4 μs and the receiver bandwidth is 1.6 MHz to accommodate both the resulting modulation bandwidth and the target Doppler shift. Sampling is at 0.6 μs, corresponding to 90-m range resolution. Target range and Doppler velocity are extracted from the raw data in a multi-step matched-filter procedure. For strong (SNR>5) events the Doppler velocity standard deviation is 100–150 m/s. The effective range resolution is about 30 m, allowing very accurate time-of-flight velocity estimates. On average, Doppler and time-of-flight (TOF) velocities agree to within about one part in 10³. Two or more targets simultaneously present in the beam can be resolved down to a range separation <300 m as long as their Doppler shifts differ by more than a few km/s.
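
    The matched-filter step lends itself to a toy demonstration. The sketch below correlates a noisy received trace against a 32-chip pseudo-random BPSK waveform to recover the delay; the signal model, noise level and delay are made up and only loosely follow the parameters quoted above (0.6 μs samples, 90 m range gates). It is not the EISCAT analysis chain.

```python
# Toy matched-filter range estimate with a 32-chip pseudo-random BPSK code.
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=32)          # pseudo-random +/-1 chip sequence
oversample = 4                                    # 2.4 us baud / 0.6 us sample
tx = np.repeat(code, oversample)                  # transmitted baseband waveform

true_delay = 137                                  # delay in samples (hypothetical)
rx = np.zeros(512)
rx[true_delay:true_delay + tx.size] += tx         # target echo
rx += 0.5 * rng.standard_normal(rx.size)          # receiver noise

corr = np.correlate(rx, tx, mode="valid")         # matched filter over all lags
est_delay = int(np.argmax(np.abs(corr)))
range_km = est_delay * 90.0 / 1000.0              # 90 m per 0.6 us sample (c*tau/2)
print(f"estimated delay: {est_delay} samples (~{range_km:.1f} km)")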

  11. On the automated assessment of nuclear reactor systems code accuracy

    International Nuclear Information System (INIS)

    Kunz, Robert F.; Kasmala, Gerald F.; Mahaffy, John H.; Murray, Christopher J.

    2002-01-01

    of the relevant issues and techniques considered are addressed. Several of the methods have been coded and/or applied to relevant NRS code-data comparisons and these demonstration calculations are included. Next, an overview of the basic design, structure and operational mechanics of ACAP is provided. Then, a summary of the data pre-processing, data analysis and FOM assembly processing elements of the software is included. Lastly, a number of NRS sample applications are presented which illustrate the functionality of the code and its ability to provide objective accuracy measures
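
    The excerpt above does not list ACAP's actual figures of merit. As a placeholder for the kind of code-to-data measure being assembled, a minimal sketch might compute a bias and an RMS error on a common time base; the numbers below are invented.

```python
# Minimal accuracy figure of merit between a code prediction and experimental data.
# ACAP's actual FOM suite is richer than this; values are hypothetical.
import numpy as np

def simple_fom(code_values, data_values):
    """Return (mean error, RMS error) of code vs. data on the same time base."""
    err = np.asarray(code_values, dtype=float) - np.asarray(data_values, dtype=float)
    return err.mean(), np.sqrt(np.mean(err ** 2))

bias, rmse = simple_fom([1.02, 0.97, 1.10], [1.00, 1.00, 1.00])
print(f"bias = {bias:.3f}, RMSE = {rmse:.3f}")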

  12. Tribal Green Building Administrative Code Example

    Science.gov (United States)

    This Tribal Green Building Administrative Code Example can be used as a template for technical code selection (i.e., building, electrical, plumbing, etc.) to be adopted as a comprehensive building code.

  13. NOAA Weather Radio - EAS Event Codes

    Science.gov (United States)

  14. Geographic data: Zip Codes (Shape File)

    Data.gov (United States)

    Montgomery County of Maryland — This dataset contains all zip codes in Montgomery County. Zip codes are the postal delivery areas defined by USPS. Zip codes with mailboxes only are not included. As...

  15. KWIC Index of nuclear codes (1975 edition)

    International Nuclear Information System (INIS)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-01-01

    It is a KWIC Index for 254 nuclear codes in the Nuclear Code Abstracts (1975 edition). The classification of nuclear codes and the form of index are the same as those in the Computer Programme Library at Ispra, Italy. (auth.)

  16. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present

  17. CITOPP, CITMOD, CITWI, Processing codes for CITATION Code

    International Nuclear Information System (INIS)

    Albarhoum, M.

    2008-01-01

    Description of program or function: CITOPP processes the output file of the CITATION 3-D diffusion code. The program can plot axial, radial and circumferential flux distributions (in cylindrical geometry) in addition to the multiplication factor convergence. The flux distributions can be drawn for each group specified by the program and visualized on the screen. CITMOD processes both the output and the input files of the CITATION 3-D diffusion code. CITMOD can visualize both axial and radial-angular models of the reactor described by the CITATION input/output files. CITWI processes the input file (CIT.INP) of the CITATION 3-D diffusion code. CIT.INP is processed to deduce the dimensions of the cell whose cross sections can be representative of the corresponding reactor component in section 008 of CIT.INP.

  18. Coding in Stroke and Other Cerebrovascular Diseases.

    Science.gov (United States)

    Korb, Pearce J; Jones, William

    2017-02-01

    Accurate coding is critical for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of coding principles for patients with strokes and other cerebrovascular diseases and includes an illustrative case as a review of coding principles in a patient with acute stroke.

  19. Quasi-cyclic unit memory convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Paaske, Erik; Ballan, Mark

    1990-01-01

    Unit memory convolutional codes with generator matrices, which are composed of circulant submatrices, are introduced. This structure facilitates the analysis of efficient search for good codes. Equivalences among such codes and some of the basic structural properties are discussed. In particular......, catastrophic encoders and minimal encoders are characterized and dual codes treated. Further, various distance measures are discussed, and a number of good codes, some of which result from efficient computer search and some of which result from known block codes, are presented...
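
    As a small illustration of the circulant structure mentioned above (not a code from the paper), each submatrix of the generator matrix is fully determined by its first row, with every subsequent row a cyclic shift of it. The rows used below are arbitrary.

```python
# Sketch of a generator matrix block built from binary circulant submatrices.
import numpy as np

def circulant_gf2(first_row):
    """Binary circulant matrix whose i-th row is first_row cyclically shifted by i."""
    n = len(first_row)
    return np.array([np.roll(first_row, i) for i in range(n)], dtype=np.uint8)

C0 = circulant_gf2([1, 0, 1, 0, 0, 0, 1])   # arbitrary example rows
C1 = circulant_gf2([1, 1, 0, 0, 1, 0, 0])
G = np.concatenate([C0, C1], axis=1) % 2    # one block row of a generator matrix
print(G)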

  20. System Design Description for the TMAD Code

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    1995-01-01

    This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System

  1. Methods and computer codes for probabilistic sensitivity and uncertainty analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1985-01-01

    This paper describes the methods and applications experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of most important input variables of a code that has many (tens, hundreds) input variables with uncertainties, and to do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other, e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has first been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables.
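
    The response-surface idea is easy to sketch: fit a cheap surrogate to a handful of "code" runs, then propagate input uncertainty by sampling the surrogate. The model function, input distributions and sample sizes below are stand-ins, not SCREEN or PROSA-2.

```python
# Sketch of response-surface uncertainty propagation (surrogate = quadratic fit).
import numpy as np

rng = np.random.default_rng(1)

def expensive_code(x1, x2):
    """Placeholder for a safety-analysis code run (hypothetical response)."""
    return 3.0 + 2.0 * x1 - 1.5 * x2 + 0.8 * x1 * x2

# design points: random samples of the two uncertain inputs
X = rng.normal(loc=[1.0, 2.0], scale=[0.1, 0.3], size=(30, 2))
y = expensive_code(X[:, 0], X[:, 1])

def basis(X):
    """Quadratic response surface terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# propagate input uncertainty through the cheap surrogate
Xmc = rng.normal(loc=[1.0, 2.0], scale=[0.1, 0.3], size=(100_000, 2))
ymc = basis(Xmc) @ coef
print(f"output mean ~ {ymc.mean():.3f}, std ~ {ymc.std():.3f}")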

  2. Development of probabilistic fracture mechanics code PASCAL and user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Shibata, Katsuyuki; Onizawa, Kunio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Li, Yinsheng; Kato, Daisuke [Fuji Research Institute Corporation, Tokyo (Japan)

    2001-03-01

    As a part of the aging and structural integrity research for LWR components, a new PFM (Probabilistic Fracture Mechanics) code PASCAL (PFM Analysis of Structural Components in Aging LWR) has been developed since FY1996. This code evaluates the failure probability of an aged reactor pressure vessel subjected to transient loading such as PTS (Pressurized Thermal Shock). The development of the code has aimed to improve the accuracy and reliability of the analysis by introducing new analysis methodologies and algorithms that reflect recent developments in fracture mechanics methodologies and computer performance. The code provides several new functions, including optimized sampling and cell-dividing procedures in stratified Monte Carlo simulation, the elastic-plastic fracture criterion of the R6 method, extension analysis models for semi-elliptical cracks, and evaluation of the effect of thermal annealing. In addition, an input data generator for temperature and stress distribution time histories was also prepared in the code. The functions and performance of the code have been confirmed by verification analyses and some case studies on the influence parameters. The present phase of the development will be completed in FY2000. Thus this report provides the user's manual and theoretical background of the code. (author)
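
    The basic quantity such a code estimates, a failure probability from sampled loads and resistances, can be sketched with plain Monte Carlo. The limit state (K_I exceeding K_Ic) and the distributions below are hypothetical; PASCAL's stratified sampling and PTS transient models are not reproduced here.

```python
# Minimal Monte Carlo failure probability for a toy fracture criterion K_I > K_Ic.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

K_I  = rng.lognormal(mean=np.log(40.0), sigma=0.25, size=n)   # applied SIF, MPa*sqrt(m)
K_Ic = rng.normal(loc=80.0, scale=12.0, size=n)               # fracture toughness

p_f = float(np.mean(K_I > K_Ic))
se = np.sqrt(p_f * (1.0 - p_f) / n)     # standard error of a simple MC proportion
print(f"P_f ~ {p_f:.2e} +/- {se:.1e}")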

  3. User's manual for the NEFTRAN II computer code

    International Nuclear Information System (INIS)

    Olague, N.E.; Campbell, J.E.; Leigh, C.D.; Longsine, D.E.

    1991-02-01

    This document describes the NEFTRAN II (NEtwork Flow and TRANsport in Time-Dependent Velocity Fields) computer code and is intended to provide the reader with sufficient information to use the code. NEFTRAN II was developed as part of a performance assessment methodology for storage of high-level nuclear waste in unsaturated, welded tuff. NEFTRAN II is a successor to the NEFTRAN and NWFT/DVM computer codes and contains several new capabilities. These capabilities include: (1) the ability to input pore velocities directly to the transport model and bypass the network fluid flow model, (2) the ability to transport radionuclides in time-dependent velocity fields, (3) the ability to account for the effect of time-dependent saturation changes on the retardation factor, and (4) the ability to account for time-dependent flow rates through the source regime. In addition to these changes, the input to NEFTRAN II has been modified to be more convenient for the user. This document is divided into four main sections consisting of (1) a description of all the models contained in the code, (2) a description of the program and subprograms in the code, (3) a data input guide and (4) verification and sample problems. Although NEFTRAN II is the fourth generation code, this document is a complete description of the code and reference to past user's manuals should not be necessary. 19 refs., 33 figs., 25 tabs
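
    The retardation factor mentioned above has a simple back-of-the-envelope use: a sorbing nuclide travels at v/R, which sets its arrival time and decay along a leg. The sketch below uses the standard linear-sorption expression with hypothetical parameter values; it is not a NEFTRAN II calculation.

```python
# Retarded travel time and decay for a single transport leg (illustrative only).
import math

def retardation_factor(bulk_density, kd, porosity):
    """R = 1 + rho_b * Kd / theta (linear, reversible sorption)."""
    return 1.0 + bulk_density * kd / porosity

def arrival(length_m, velocity_m_per_yr, R, half_life_yr):
    """Travel time of the retarded front and the surviving fraction after decay."""
    t = length_m / (velocity_m_per_yr / R)
    lam = math.log(2.0) / half_life_yr
    return t, math.exp(-lam * t)

R = retardation_factor(bulk_density=1700.0, kd=0.005, porosity=0.3)   # kg/m3, m3/kg, -
t, frac = arrival(length_m=500.0, velocity_m_per_yr=2.0, R=R, half_life_yr=3.0e5)
print(f"R = {R:.1f}, travel time = {t:.0f} yr, surviving fraction = {frac:.3f}")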

  4. On Code Parameters and Coding Vector Representation for Practical RLNC

    DEFF Research Database (Denmark)

    Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank

    2011-01-01

    RLNC provides a theoretically efficient method for coding. The drawbacks associated with it are the complexity of the decoding and the overhead resulting from the encoding vector. Increasing the field size and generation size presents a fundamental trade-off between packet-based throughput...... to higher energy consumption. Therefore, the optimal trade-off is system and topology dependent, as it depends on the cost in energy of performing coding operations versus transmitting data. We show that moderate field sizes are the correct choice when trade-offs are considered. The results show that sparse...
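
    A minimal working example of the encoding-vector overhead and the decoding cost discussed above can be written over GF(2): coded packets are random XOR combinations of the source packets, and the receiver decodes once the collected coding vectors reach full rank. Real deployments typically use larger fields such as GF(2^8); the generation size, packet length and loop below are illustrative only.

```python
# Random linear network coding over GF(2): encode, collect, and decode by elimination.
import numpy as np

rng = np.random.default_rng(3)
gen_size, pkt_len = 4, 8
source = rng.integers(0, 2, size=(gen_size, pkt_len), dtype=np.uint8)

def gf2_eliminate(A, B):
    """Gauss-Jordan over GF(2); returns (reduced A, reduced B, rank)."""
    A, B = A.copy(), B.copy()
    rank = 0
    for col in range(A.shape[1]):
        pivots = [r for r in range(rank, A.shape[0]) if A[r, col]]
        if not pivots:
            continue
        A[[rank, pivots[0]]] = A[[pivots[0], rank]]
        B[[rank, pivots[0]]] = B[[pivots[0], rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]
                B[r] ^= B[rank]
        rank += 1
    return A, B, rank

# receiver keeps collecting coded packets (coding vector + payload)
# until the coding vectors reach full rank, i.e. the generation is decodable
vectors = np.zeros((0, gen_size), dtype=np.uint8)
payloads = np.zeros((0, pkt_len), dtype=np.uint8)
received = 0
while True:
    v = rng.integers(0, 2, size=(1, gen_size), dtype=np.uint8)   # coding vector (overhead)
    p = (v @ source) % 2                                         # coded payload (XOR mix)
    vectors, payloads = np.vstack([vectors, v]), np.vstack([payloads, p])
    received += 1
    _, reduced, rank = gf2_eliminate(vectors, payloads)
    if rank == gen_size:
        break

decoded = reduced[:gen_size]
assert np.array_equal(decoded, source)
print(f"decoded after {received} received packets (generation size {gen_size})")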

  5. International assessment of PCA codes

    International Nuclear Information System (INIS)

    Neymotin, L.; Lui, C.; Glynn, J.; Archarya, S.

    1993-11-01

    Over the past three years (1991-1993), an extensive international exercise for intercomparison of a group of six Probabilistic Consequence Assessment (PCA) codes was undertaken. The exercise was jointly sponsored by the Commission of European Communities (CEC) and OECD Nuclear Energy Agency. This exercise was a logical continuation of a similar effort undertaken by OECD/NEA/CSNI in 1979-1981. The PCA codes are currently used by different countries for predicting radiological health and economic consequences of severe accidents at nuclear power plants (and certain types of non-reactor nuclear facilities) resulting in releases of radioactive materials into the atmosphere. The codes participating in the exercise were: ARANO (Finland), CONDOR (UK), COSYMA (CEC), LENA (Sweden), MACCS (USA), and OSCAAR (Japan). In parallel with this inter-code comparison effort, two separate groups performed a similar set of calculations using two of the participating codes, MACCS and COSYMA. Results of the intercode and inter-MACCS comparisons are presented in this paper. The MACCS group included four participants: GREECE: Institute of Nuclear Technology and Radiation Protection, NCSR Demokritos; ITALY: ENEL, ENEA/DISP, and ENEA/NUC-RIN; SPAIN: Universidad Politecnica de Madrid (UPM) and Consejo de Seguridad Nuclear; USA: Brookhaven National Laboratory, US NRC and DOE

  6. The WIMS family of codes

    International Nuclear Information System (INIS)

    Askew, J.

    1981-01-01

    WIMS-D4 is the latest version of the original form of the Winfrith Improved Multigroup Scheme, developed in 1963-5 for lattice calculations on all types of thermal reactor, whether moderated by graphite, heavy or light water. The code, in earlier versions, has been available from the NEA code centre for a number of years in both IBM and CDC dialects of FORTRAN. An important feature of this code was its rapid, accurate deterministic system for treating resonance capture in heavy nuclides, capable of dealing with both regular pin lattices and with cluster geometries typical of pressure tube and gas cooled reactors. WIMS-E is a compatible code scheme in which each calculation step is bounded by standard interfaces on disc or tape. The interfaces contain files of information in a standard form, restricted to numbers representing physically meaningful quantities such as cross-sections and fluxes. Restriction of code intercommunication to this channel limits the possible propagation of errors. A module is capable of transforming WIMS-D output into the standard interface form and hence the two schemes can be linked if required. LWR-WIMS was developed in 1970 as a method of calculating LWR reloads for the fuel fabricators BNFL/GUNF. It uses the WIMS-E library and a number of the same modules.

  7. Translation of ARAC computer codes

    International Nuclear Information System (INIS)

    Takahashi, Kunio; Chino, Masamichi; Honma, Toshimitsu; Ishikawa, Hirohiko; Kai, Michiaki; Imai, Kazuhiko; Asai, Kiyoshi

    1982-05-01

    In 1981 we translated the well-known MATHEW and ADPIC codes and their auxiliary computer codes from the CDC 7600 version to the FACOM M-200. The codes form part of the Atmospheric Release Advisory Capability (ARAC) system of Lawrence Livermore National Laboratory (LLNL). MATHEW is a code for three-dimensional wind field analysis. Using observed data, it calculates the mass-consistent wind field of grid cells by a variational method. ADPIC is a code for three-dimensional concentration prediction of gases and particulates released to the atmosphere. It calculates concentrations in grid cells by the particle-in-cell method. They are written in LLLTRAN, i.e., the LLNL Fortran language, and are implemented on the CDC 7600 computers of LLNL. In this report, i) the computational methods of MATHEW/ADPIC and their auxiliary codes, ii) comparisons of the calculated results with our JAERI particle-in-cell and Gaussian plume models, and iii) the translation procedures from the CDC version to the FACOM M-200, are described. With the permission of LLNL G-Division, this report is published to keep track of the translation procedures and to serve our JAERI researchers for comparisons and references in their work. (author)
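
    The Gaussian plume model used for comparison above has a compact closed form for ground-level concentration with ground reflection. The sketch below writes it out with placeholder dispersion parameters; it is the generic textbook formula, not the JAERI model or the MATHEW/ADPIC treatment.

```python
# Classic Gaussian plume concentration (ground reflection included); values hypothetical.
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Concentration (g/m3) at crosswind offset y and height z for release rate Q (g/s)."""
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - H) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + H) / sigma_z) ** 2))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

chi = gaussian_plume(Q=1.0, u=3.0, y=0.0, z=0.0, H=50.0, sigma_y=80.0, sigma_z=40.0)
print(f"ground-level centreline concentration ~ {chi:.2e} g/m3")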

  8. Comparison of sodium aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.; Fermandjian, J.; Bunz, H.; L'homme, A.; Lhiaubet, G.; Himeno, Y.; Kirby, C.R.; Mitsutsuka, N.

    1984-01-01

    Although hypothetical fast reactor accidents leading to severe core damage are very low probability events, their consequences are to be assessed. During such accidents, one can envisage the ejection of sodium, mixed with fuel and fission products, from the primary circuit into the secondary containment. Aerosols can be formed either by mechanical dispersion of the molten material or as a result of combustion of the sodium in the mixture. Therefore considerable effort has been devoted to studying the different sodium aerosol phenomena. To ensure that the problems of describing the physical behaviour of sodium aerosols were adequately understood, a comparison of the codes being developed to describe their behaviour was undertaken. The comparison consists of two parts. The first is a comparative study of the computer codes used to predict aerosol behaviour during a hypothetical accident. It is a critical review of the available documentation. The second part is an exercise in which code users have run their own codes with a pre-arranged input. For the critical comparative review of the computer models, documentation has been made available on the following codes: AEROSIM (UK), MAEROS (USA), HAARM-3 (USA), AEROSOLS/A2 (France), AEROSOLS/B1 (France), and PARDISEKO-IIIb (FRG).

  9. Distance sampling methods and applications

    CERN Document Server

    Buckland, S T; Marques, T A; Oedekoven, C S

    2015-01-01

    In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book was published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book’s accompanying website.  Some of the case studies use the software Distance, while others use R code. The book is in three parts.  The first part addresses basic methods, the ...
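
    A minimal line-transect estimate in the spirit of these methods uses a half-normal detection function: the perpendicular distances give an estimate of the effective strip half-width, which converts the count into a density. The book's own examples use the Distance software or R; the sketch below is a stripped-down version with made-up distances.

```python
# Line-transect density estimate with an untruncated half-normal detection function.
import math

def halfnormal_density(perp_distances_m, total_line_length_m):
    """D_hat = n / (2 * L * mu_hat), mu_hat = effective strip half-width."""
    n = len(perp_distances_m)
    sigma2 = sum(x * x for x in perp_distances_m) / n   # MLE of sigma^2 for half-normal
    mu = math.sqrt(math.pi * sigma2 / 2.0)              # effective strip half-width (m)
    return n / (2.0 * total_line_length_m * mu)         # animals per m^2

dists = [3.1, 7.4, 0.8, 12.0, 5.5, 9.2, 2.3, 6.7]       # metres from the line (hypothetical)
D = halfnormal_density(dists, total_line_length_m=4000.0)
print(f"density ~ {D * 1e6:.1f} animals per km^2")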

  10. Lost opportunities: Modeling commercial building energy code adoption in the United States

    International Nuclear Information System (INIS)

    Nelson, Hal T.

    2012-01-01

    This paper models the adoption of commercial building energy codes in the US between 1977 and 2006. Energy code adoption typically results in an increase in aggregate social welfare by cost effectively reducing energy expenditures. Using a Cox proportional hazards model, I test if relative state funding, a new, objective, multivariate regression-derived measure of government capacity, as well as a vector of control variables commonly used in comparative state research, predict commercial building energy code adoption. The research shows little political influence over historical commercial building energy code adoption in the sample. Colder climates and higher electricity prices also do not predict more frequent code adoptions. I do find evidence of high government capacity states being 60 percent more likely than low capacity states to adopt commercial building energy codes in the following year. Wealthier states are also more likely to adopt commercial codes. Policy recommendations to increase building code adoption include increasing access to low cost capital for the private sector and providing noncompetitive block grants to the states from the federal government. - Highlights: ► Model the adoption of commercial building energy codes from 1977–2006 in the US. ► Little political influence over historical building energy code adoption. ► High capacity states are over 60 percent more likely than low capacity states to adopt codes. ► Wealthier states are more likely to adopt commercial codes. ► Access to capital and technical assistance is critical to increase code adoption.
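
    A Cox proportional hazards fit of the kind described can be sketched with the lifelines package. The DataFrame, column names and covariates below are hypothetical stand-ins for the paper's state-level data, and the toy sample is far too small for real inference.

```python
# Hedged sketch of a Cox proportional hazards fit (requires pandas and lifelines).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_to_adoption": [3, 12, 7, 25, 9, 15, 4, 20],   # duration until adoption or censoring
    "adopted":           [1, 1, 0, 1, 0, 1, 1, 0],       # 1 = code adopted, 0 = censored
    "govt_capacity":     [0.9, 0.4, 0.7, 0.2, 0.8, 0.3, 0.6, 0.5],
    "income_per_capita": [55, 40, 48, 35, 52, 38, 60, 33],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_adoption", event_col="adopted")
cph.print_summary()   # hazard ratios show how covariates shift the adoption hazard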

  11. Schroedinger’s Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics

    Science.gov (United States)

    Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley

    2018-05-01

    We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal’s 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best, 90% of them were available over our testing period.
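
    A link-persistence check of the kind described reduces to issuing an HTTP request per URL and recording the outcome. The sketch below uses a HEAD request with a GET fallback; the URLs are examples only and the study's actual testing procedure may differ.

```python
# Simple URL accessibility check (requires the requests package).
import requests

def check_url(url, timeout=10):
    """Return (url, HTTP status code or exception name)."""
    try:
        r = requests.head(url, allow_redirects=True, timeout=timeout)
        if r.status_code >= 400:   # some servers reject HEAD; retry with GET
            r = requests.get(url, allow_redirects=True, timeout=timeout, stream=True)
        return url, r.status_code
    except requests.RequestException as exc:
        return url, type(exc).__name__

for url in ["https://ascl.net", "https://example.org/broken-link"]:
    print(check_url(url))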

  12. Applications guide to the MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1985-08-01

    A practical guide for the implementation of the MORSE-CG Monte Carlo radiation transport computer code system is presented. The various versions of the MORSE code are compared and contrasted, and the many references dealing explicitly with the MORSE-CG code are reviewed. The treatment of angular scattering is discussed, and procedures for obtaining increased differentiality of results in terms of reaction types and nuclides from a multigroup Monte Carlo code are explained in terms of cross-section and geometry data manipulation. Examples of standard cross-section data input and output are shown. Many other features of the code system are also reviewed, including (1) the concept of primary and secondary particles, (2) fission neutron generation, (3) albedo data capability, (4) DOMINO coupling, (5) history file use for post-processing of results, (6) adjoint mode operation, (7) variance reduction, and (8) input/output. In addition, examples of the combinatorial geometry are given, and the new array of arrays geometry feature (MARS) and its three-dimensional plotting code (JUNEBUG) are presented. Realistic examples of user routines for source, estimation, path-length stretching, and cross-section data manipulation are given. A detailed explanation of the coupling between the random walk and estimation procedure is given in terms of both code parameters and physical analogies. The operation of the code in the adjoint mode is covered extensively. The basic concepts of adjoint theory and dimensionality are discussed and examples of adjoint source and estimator user routines are given for all common situations. Adjoint source normalization is explained, a few sample problems are given, and the concept of obtaining forward differential results from adjoint calculations is covered. Finally, the documentation of the standard MORSE-CG sample problem package is reviewed and ongoing and future work is discussed.

  13. Repetition code of 15 qubits

    Science.gov (United States)

    Wootton, James R.; Loss, Daniel

    2018-05-01

    The repetition code is an important primitive for the techniques of quantum error correction. Here we implement repetition codes of at most 15 qubits on the 16 qubit ibmqx3 device. Each experiment is run for a single round of syndrome measurements, achieved using the standard quantum technique of using ancilla qubits and controlled operations. The size of the final syndrome is small enough to allow for lookup table decoding using experimentally obtained data. The results show strong evidence that the logical error rate decays exponentially with code distance, as is expected and required for the development of fault-tolerant quantum computers. The results also give insight into the nature of noise in the device.
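
    The behaviour reported above, a logical error rate that decays roughly exponentially with code distance, is easy to reproduce in a purely classical toy model: copy one logical bit across d physical bits, flip each independently with probability p, and decode by majority vote. This sketch is a simulation aid only, not the ibmqx3 experiment or its syndrome lookup tables.

```python
# Classical majority-vote model of a distance-d repetition code under bit-flip noise.
import numpy as np

rng = np.random.default_rng(4)

def logical_error_rate(d, p, shots=200_000):
    flips = rng.random((shots, d)) < p        # independent physical bit flips
    decoded_wrong = flips.sum(axis=1) > d // 2   # majority of the d bits flipped
    return decoded_wrong.mean()

for d in (3, 5, 7, 9, 11, 13, 15):
    print(d, f"{logical_error_rate(d, p=0.1):.2e}")   # decays roughly exponentially in d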

  14. Halftone Coding with JBIG2

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    2000-01-01

    The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated to lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray-scale, encoding of the gray-scale image, and construction of a halftone pattern dictionary. The decoder first decodes the gray-scale image. Then, for each gray-scale pixel, it looks up the corresponding halftone pattern in the dictionary and places it in the reconstruction bitmap at the position corresponding to the gray-scale pixel. The coding method is inherently lossy and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method to halftones created by periodic ordered dithering, by clustered dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion...
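
    The decoder's second stage, as described, amounts to tiling dictionary patterns according to the decoded gray-scale indices. The sketch below shows only that tiling step with a made-up 2x2 pattern dictionary; the actual JBIG2 bitstream handling is not shown.

```python
# Tile a reconstruction bitmap from a gray-scale index image and a halftone pattern dictionary.
import numpy as np

def reconstruct(gray_index_image, pattern_dict):
    """gray_index_image: (H, W) ints; pattern_dict: list of equally sized binary arrays."""
    ph, pw = pattern_dict[0].shape
    H, W = gray_index_image.shape
    bitmap = np.zeros((H * ph, W * pw), dtype=np.uint8)
    for i in range(H):
        for j in range(W):
            bitmap[i*ph:(i+1)*ph, j*pw:(j+1)*pw] = pattern_dict[gray_index_image[i, j]]
    return bitmap

patterns = [np.array(p, dtype=np.uint8) for p in (
    [[0, 0], [0, 0]], [[1, 0], [0, 0]], [[1, 0], [0, 1]],
    [[1, 1], [0, 1]], [[1, 1], [1, 1]])]              # 5 gray levels, 2x2 halftone cells
gray = np.array([[0, 2, 4], [1, 3, 2]])
print(reconstruct(gray, patterns))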

  15. List Decoding of Algebraic Codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde

    We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods to a problem solvable by module...... Hermitian codes using Guruswami–Sudan or Power decoding faster than previously known, and we show how to Wu list decode binary Goppa codes....... to solve such using module minimisation, or using our new Demand–Driven algorithm which is also based on module minimisation. The decoding paradigms are all derived and analysed in a self-contained manner, often in new ways or examined in greater depth than previously. Among a number of new results, we...

  16. SASSYS LMFBR systems analysis code

    International Nuclear Information System (INIS)

    Dunn, F.E.; Prohammer, F.G.

    1982-01-01

    The SASSYS code provides detailed steady-state and transient thermal-hydraulic analyses of the reactor core, inlet and outlet coolant plenums, primary and intermediate heat-removal systems, steam generators, and emergency shut-down heat removal systems in liquid-metal-cooled fast-breeder reactors (LMFBRs). The main purpose of the code is to analyze the consequences of failures in the shut-down heat-removal system and to determine whether this system can perform its mission adequately even with some of its components inoperable. The code is not plant-specific. It is intended for use with any LMFBR, using either a loop or a pool design, a once-through steam generator or an evaporator-superheater combination, and either a homogeneous core or a heterogeneous core with internal-blanket assemblies

  17. ASME Code Efforts Supporting HTGRs

    Energy Technology Data Exchange (ETDEWEB)

    D.K. Morton

    2012-09-01

    In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  18. Allele coding in genomic evaluation

    DEFF Research Database (Denmark)

    Strandén, Ismo; Christensen, Ole Fredslund

    2011-01-01

    Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first and then, genomic breeding values are obtained by summing marker...... effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous...... genotype of the first allele, one for the heterozygote, and two for the homozygous genotype for the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of regression coefficients is zero within each marker. We call...
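
    The two codings described above are one line of arithmetic each, and the centred coding feeds directly into a genomic relationship matrix. The sketch below uses a tiny made-up genotype matrix and the common VanRaden-style scaling; it is an illustration of the coding, not the paper's analysis.

```python
# 0/1/2 allele coding, centred coding, and a genomic relationship matrix.
import numpy as np

M = np.array([[0, 1, 2, 1],      # individuals x markers: copies of the second allele
              [2, 1, 0, 0],
              [1, 2, 1, 1]], dtype=float)

p = M.mean(axis=0) / 2.0          # allele frequency per marker
Z_012 = M                         # "0/1/2" coding
Z_centered = M - 2.0 * p          # centred coding: each marker column has zero mean

# genomic relationship matrix from the centred coding (VanRaden-style scaling)
G = Z_centered @ Z_centered.T / (2.0 * np.sum(p * (1.0 - p)))
print(np.round(G, 3))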

  19. Development of the DTNTES code

    International Nuclear Information System (INIS)

    Ortega Prieto, P.; Morales Dorado, M.D.; Alonso Santos, A.

    1987-01-01

    The DTNTES code has been developed in the Department of Nuclear Technology of the Polytechnical University in Madrid as a part of the Research Program on Quantitative Risk Analysis. The DTNTES code calculates several time-dependent probabilistic characteristics of basic events, minimal cut sets and the top event of a fault tree. The code assumes that basic events are statistically independent and have failure and repair distributions. It computes the minimal cut set upper bound approximation for the top event unavailability, and the time-dependent unreliability of the top event by means of different methods selected by the user. These methods are: expected number of system failures, failure rate, Barlow-Proschan bound, steady-state upper bound, and the T* method. (author)
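
    The minimal cut set upper bound mentioned above has a compact form for independent basic events: the top event unavailability is bounded by 1 minus the product over cut sets of (1 minus the product of the basic-event unavailabilities in that cut set). The cut sets and unavailabilities below are illustrative only, not a DTNTES input.

```python
# Minimal cut set upper bound for top-event unavailability (independent basic events).
from math import prod

def min_cut_upper_bound(cut_sets, q):
    """cut_sets: list of lists of basic-event names; q: dict of unavailabilities."""
    return 1.0 - prod(1.0 - prod(q[e] for e in cs) for cs in cut_sets)

q = {"pump_A": 1e-3, "pump_B": 1.2e-3, "valve": 5e-4, "power": 2e-4}
cuts = [["pump_A", "pump_B"], ["valve"], ["power"]]
print(f"top event unavailability <= {min_cut_upper_bound(cuts, q):.3e}")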

  20. The histone codes for meiosis.

    Science.gov (United States)

    Wang, Lina; Xu, Zhiliang; Khawar, Muhammad Babar; Liu, Chao; Li, Wei

    2017-09-01

    Meiosis is a specialized process that produces haploid gametes from diploid cells by a single round of DNA replication followed by two successive cell divisions. It contains many special events, such as programmed DNA double-strand break (DSB) formation, homologous recombination, crossover formation and resolution. These events are associated with dynamically regulated chromosomal structures, the dynamic transcriptional regulation and chromatin remodeling are mainly modulated by histone modifications, termed 'histone codes'. The purpose of this review is to summarize the histone codes that are required for meiosis during spermatogenesis and oogenesis, involving meiosis resumption, meiotic asymmetric division and other cellular processes. We not only systematically review the functional roles of histone codes in meiosis but also discuss future trends and perspectives in this field. © 2017 Society for Reproduction and Fertility.