WorldWideScience

Sample records for e codes

  1. Monte-Carlo code PARJET to simulate e+e--annihilation events via QCD jets

    International Nuclear Information System (INIS)

    Ritter, S.

    1983-01-01

    The Monte-Carlo code PARJET simulates exclusive hadronic final states produced in e+e- annihilation via a virtual photon in two steps: (i) the fragmentation of the original quark-antiquark pair into further partons using results of perturbative QCD in the leading logarithmic approximation (LLA), and (ii) the transition of these parton jets into hadrons on the basis of a chain decay model. A program summary and code description are given. (author)

  2. MORSE-E. A new version of the MORSE code

    International Nuclear Information System (INIS)

    Ponti, C.; Heusden, R. van.

    1974-12-01

    This report describes a version of the MORSE code which has been written to facilitate the practical use of this programme. MORSE-E is a ready-to-use version that does not require particular programming efforts to adapt the code to the problem to be solved. It treats source volumes of different geometrical shapes. MORSE-E calculates the flux of particles as the sum of the paths travelled within a given volume; the corresponding relative errors are also provided

  3. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 12 2010-01-01 2010-01-01 false Voluntary National Model Building Codes E Exhibit E... National Model Building Codes The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2) of...

  4. HLA-E regulatory and coding region variability and haplotypes in a Brazilian population sample.

    Science.gov (United States)

    Ramalho, Jaqueline; Veiga-Castelli, Luciana C; Donadi, Eduardo A; Mendes-Junior, Celso T; Castelli, Erick C

    2017-11-01

    The HLA-E gene is characterized by low but wide expression in different tissues. HLA-E is considered a conserved gene, being one of the least polymorphic class I HLA genes. The HLA-E molecule interacts with Natural Killer cell receptors and T lymphocyte receptors, and may activate or inhibit immune responses depending on the peptide associated with HLA-E and on which receptors HLA-E interacts with. Variable sites within the HLA-E regulatory and coding segments may influence gene function by modifying its expression pattern or its encoded molecule, thus influencing its interaction with receptors and the peptide. Here we propose an approach to evaluate the gene structure, haplotype pattern and the complete HLA-E variability, including regulatory (promoter and 3'UTR) and coding segments (with introns), using massively parallel sequencing. We investigated the variability of 420 samples from a very admixed population, Brazilians, using this approach. Considering a segment of about 7 kb, 63 variable sites were detected, arranged into 75 extended haplotypes. We detected 37 different promoter sequences (but few frequent ones), 27 different coding sequences (15 representing new HLA-E alleles) and 12 haplotypes at the 3'UTR segment, two of them presenting a summed frequency of 90%. Despite the number of coding alleles, they encode mainly two different full-length molecules, known as E*01:01 and E*01:03, which together correspond to about 90% of the total. In addition, differently from what has previously been observed for other non-classical HLA genes, the relationship among the HLA-E promoter, coding and 3'UTR haplotypes is not straightforward, because the same promoter and 3'UTR haplotypes were often associated with different HLA-E coding haplotypes. These data reinforce the presence of only two main full-length HLA-E molecules encoded by the many HLA-E alleles detected in our population sample. In addition, these data do indicate that the distal HLA-E promoter is by

  5. Electrical safety code manual: a plain language guide to the National Electrical Code, OSHA and NFPA 70E

    CERN Document Server

    Keller, Kimberley

    2010-01-01

    Safety in any workplace is extremely important. In the case of the electrical industry, safety is critical and the codes and regulations which determine safe practices are both diverse and complicated. Employers, electricians, electrical system designers, inspectors, engineers and architects must comply with safety standards listed in the National Electrical Code, OSHA and NFPA 70E. Unfortunately, the publications which list these safety requirements are written in very technically advanced terms and the average person has an extremely difficult time understanding exactly what they need to

  6. Multi-Touch Tablets, E-Books, and an Emerging Multi-Coding/Multi-Sensory Theory for Reading Science E-Textbooks: Considering the Struggling Reader

    Science.gov (United States)

    Rupley, William H.; Paige, David D.; Rasinski, Timothy V.; Slough, Scott W.

    2015-01-01

    Paivio's Dual-Coding Theory (1991) and Mayer's Multimedia Principle (2000) form the foundation for proposing a multi-coding theory centered around Multi-Touch Tablets and the newest generation of e-textbooks to scaffold struggling readers in reading and learning from science textbooks. Using E. O. Wilson's "Life on Earth: An Introduction"…

  7. Analysis of the SPERT III E-core experiment using the EUREKA-2 code

    International Nuclear Information System (INIS)

    Harami, Taikan; Uemura, Mutsumi; Ohnishi, Nobuaki

    1986-09-01

    EUREKA-2, a coupled nuclear thermal hydrodynamic kinetic code, was adapted for the testing of models and methods. Code evaluations were made with the reactivity addition experiments of the SPERT III E-Core, a slightly enriched oxide core. The code was tested for non-damaging power excursions covering a wide range of initial operating conditions, such as cold-startup, hot-startup, hot-standby and operating-power initial conditions. Comparisons showed good agreement, within the experimental errors, between calculated and experimental power, energy, reactivity and clad surface temperature. (author)

  8. Coincidence: Fortran code for calculation of (e, e'x) differential cross-sections, nuclear structure functions and polarization asymmetry in self-consistent random phase approximation with Skyrme interaction

    Energy Technology Data Exchange (ETDEWEB)

    Cavinato, M.; Marangoni, M.; Saruis, A.M.

    1990-10-01

    This report describes the COINCIDENCE code, written for the IBM 3090/300E computer in Fortran 77. The output data of this code are the (e, e'x) threefold differential cross-sections, the nuclear structure functions, the polarization asymmetry and the angular correlation coefficients. In the real photon limit, the output data are the angular distributions for plane-polarized incident photons. The code reads from tape the transition matrix elements previously calculated in continuum self-consistent RPA (random phase approximation) theory with Skyrme interactions. This code has been used to perform a numerical analysis of coincidence (e, e'x) reactions with polarized electrons on the ¹⁶O nucleus.

  9. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  10. 24 CFR 200.926e - Supplemental information for use with the CABO One and Two Family Dwelling Code.

    Science.gov (United States)

    2010-04-01

    ... with the CABO One and Two Family Dwelling Code. 200.926e Section 200.926e Housing and Urban Development... PROGRAMS Minimum Property Standards § 200.926e Supplemental information for use with the CABO One and Two... Criteria of the CABO One and Two Family Dwelling Code. (a) Roof live loads. Roof slope 3 in 12 or less: 20...

  11. Numerical simulation of L.E.L. in Compton regime. Part II, GONDOLE, a three-dimensional code

    International Nuclear Information System (INIS)

    Deck, D.

    1992-07-01

    In the first part of this report, the BIWI two-dimensional numerical simulation code for free-electron lasers (L.E.L.) in the Compton regime was described; its purpose was to simulate L.E.L. experiments in 'optical mode', that is, for wavelengths of the order of one micron. The axisymmetric cylindrical geometry (r,z) of the BIWI code is well suited to these experiments. However, the increasingly frequent use of L.E.L. in the microwave regime requires the presence of a waveguide inside the undulator, which breaks the cylindrical symmetry and forces the adoption of another geometry. Moreover, the desire to take into account undulator fields having a gradient in the direction transverse to the beam propagation, and thus allowing various focusing schemes (quadrupole, parabolic, etc.), leads to working in Cartesian geometry. For these reasons (and for others that will appear later), the GONDOLE code has been written and is described in this note. The GONDOLE code is three-dimensional (x, y, z) and makes it possible to simulate a large variety of L.E.L. experiments. All the undulator fields that the GONDOLE code takes into account are then introduced. These fields are responsible for the existence of a current J (vector) perpendicular to the propagation axis z, which is the source of the radiation. The dynamics of the electrons, which derives directly from these fields, is then deduced, and it is shown to which propagation equations of the laser wave each different J couples. [fr]

  12. Extensions of the 3-dimensional plasma transport code E3D

    International Nuclear Information System (INIS)

    Runov, A.; Schneider, R.; Kasilov, S.; Reiter, D.

    2004-01-01

    One important aspect of modern fusion research is plasma edge physics. Fluid transport codes extending beyond the standard 2-D code packages like B2-Eirene or UEDGE are under development. A 3-dimensional plasma fluid code, E3D, based upon the Multiple Coordinate System Approach and a Monte Carlo integration procedure has been developed for general magnetic configurations including ergodic regions. These local magnetic coordinates lead to a full metric tensor which accurately accounts for all transport terms in the equations. Here, we discuss new computational aspects of the realization of the algorithm. The main limitation to the Monte Carlo code efficiency comes from the restriction on the parallel jump of advancing test particles, which must be small compared to the gradient length of the diffusion coefficient. In our problems, the parallel diffusion coefficient depends on both plasma and magnetic field parameters. Usually, the second dependence is much more critical. In order to allow long parallel jumps, this dependence can be eliminated in two steps: first, the longitudinal coordinate x₃ of the local magnetic coordinates is modified in such a way that in the new coordinate system the metric determinant and contravariant components of the magnetic field scale along the magnetic field with powers of the magnetic field modulus (as in Boozer flux coordinates). Second, specific weights of the test particles are introduced. As a result of the increased parallel jump length, the efficiency of the code is about two orders of magnitude better. (copyright 2004 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  13. The correspondence between projective codes and 2-weight codes

    NARCIS (Netherlands)

    Brouwer, A.E.; Eupen, van M.J.M.; Tilborg, van H.C.A.; Willems, F.M.J.

    1994-01-01

    The hyperplanes intersecting a 2-weight code in the same number of points obviously form the point set of a projective code. On the other hand, if we have a projective code C, then we can make a 2-weight code by taking the multiset of points ∈ PG with multiplicity γ(w), where w is the weight of

  14. Calculation of Sodium Fire Test-I (Run-E6) using sodium combustion analysis code ASSCOPS version 2.0

    Energy Technology Data Exchange (ETDEWEB)

    Nakagiri, Toshio; Ohno, Shuji; Miyake, Osamu [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1997-11-01

    The calculation of Sodium Fire Test-I (Run-E6) was performed using the ASSCOPS (Analysis of Simultaneous Sodium Combustions in Pool and Spray) code version 2.0 in order to determine the parameters used in the code for calculations of the sodium combustion behavior of small- or medium-scale sodium leaks, and to validate the applicability of the code. The parameters used in the code were determined, and the applicability of the code was confirmed, since the calculated temperatures, oxygen concentrations and other calculated values largely agreed with the test results. (author)

  15. Futur d'une infrastructure de correction automatisée : CodeGradX

    OpenAIRE

    Queinnec , Christian; Pons , Olivier

    2016-01-01

    This article discusses some experiences and new ideas around an automated program-grading infrastructure. CodeGradX is an automated grading infrastructure for programs. It has been in service since 2008 (initially under the name FW4EX) and has since graded more than 150,000 student submissions to exercises mainly in Scheme, Unix utilities and JavaScript, but also in C, Octave, O'Caml and Python, not to mention the annual competitions of the Journées ...

  16. Phonological coding during reading.

    Science.gov (United States)

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  17. Synaptic E-I Balance Underlies Efficient Neural Coding.

    Science.gov (United States)

    Zhou, Shanglin; Yu, Yuguo

    2018-01-01

    Both theoretical and experimental evidence indicate that synaptic excitation and inhibition in the cerebral cortex are well-balanced during the resting state and sensory processing. Here, we briefly summarize the evidence for how neural circuits are adjusted to achieve this balance. Then, we discuss how such excitatory and inhibitory balance shapes stimulus representation and information propagation, two basic functions of neural coding. We also point out the benefit of adopting such a balance during neural coding. We conclude that excitatory and inhibitory balance may be a fundamental mechanism underlying efficient coding.

  18. RCC-E a Design Code for I and C and Electrical Systems

    International Nuclear Information System (INIS)

    Haure, J.M.

    2015-01-01

    The paper deals with the stakes and strengths of the RCC-E code, which is applicable to electrical and instrumentation and control (I&C) systems and components performing safety-class functions. The document interlaces specifications from owners, safety authorities, designers and suppliers with IAEA safety guides and IEC standards. The code is periodically updated and published by the French Society for Design and Construction Rules for Nuclear Island Components (AFCEN). The code is compliant with third-generation PWR nuclear islands and aims to accommodate national regulations as needed in a companion document. The feedback experience from Fukushima and the licensing of the UK EPR in the framework of the Generic Design Assessment are lessons learnt that should be considered in upgrading the code. The code gathers a set of requirements and relevant good practices from several PWR design and construction projects related to electrical and I&C systems and components, and electrical engineering documents dealing with systems, equipment and layout designs. A comprehensive statement, including some recent developments, is provided on: offsite and onsite source requirements, including sources dealing with the total loss of offsite and of the main onsite sources; highlights of a relevant protection level against high-frequency disturbances emitted by lightning strokes; interface data used by any supplier or designer, such as site data, room temperatures, equipment maximum design temperatures, alternating-current and direct-current electrical network voltage and frequency variation ranges, and environmental-condition decoupling data; and the Environmental Qualification process, including normal, mild (earthquake-resistant), harsh and severe-accident ambient conditions. A tailored approach based on families, which are defined as a combination of mission time, duration and abnormal conditions (pressure, temperature, radiation), enables better coping with Environmental Qualifications

  19. Synthesizing Certified Code

    OpenAIRE

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...

  20. Erasmus MC at CLEF eHealth 2016: Concept recognition and coding in French texts

    NARCIS (Netherlands)

    E.M. Van Mulligen (Erik M.); Z. Afzal (Zubair); S.A. Akhondi (Saber); D. Vo (Dang); J.A. Kors (Jan)

    2016-01-01

    We participated in task 2 of the CLEF eHealth 2016 challenge. Two subtasks were addressed: entity recognition and normalization in a corpus of French drug labels and Medline titles, and ICD-10 coding of French death certificates. For both subtasks we used a dictionary-based approach.

  1. Synthesizing Certified Code

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  2. Embedded Systems Hardware Integration and Code Development for Maraia Capsule and E-MIST

    Science.gov (United States)

    Carretero, Emmanuel S.

    2015-01-01

    The cost of sending large spacecraft to orbit makes them undesirable for carrying out smaller scientific missions. Small spacecraft are more economical and can be tailored for missions where specific tasks need to be carried out; the Maraia capsule is such a spacecraft. Maraia will allow samples of experiments conducted on the International Space Station to be returned to Earth. The use of balloons to conduct experiments at the edge of space is a practical approach to reducing the large expense of using rockets. E-MIST is a payload designed to fly on a high-altitude balloon. It can maintain science experiments in a controlled manner at the edge of space. The work covered here entails the integration of hardware onto each of the mentioned systems and the code associated with that work. In particular, the resistance temperature detector, pressure transducers, cameras, and thrusters for Maraia are discussed. The integration of the resistance temperature detectors and motor controllers into E-MIST is described. Several issues associated with sensor accuracy, code lock-up, and in-flight resets are described. The solutions and proposed solutions to these issues are explained.

  3. Coding in Muscle Disease.

    Science.gov (United States)

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  4. Coding Instructions, Worksheets, and Keypunch Sheets for M.E.T.R.O.-APEX Simulation.

    Science.gov (United States)

    Michigan Univ., Ann Arbor. Environmental Simulation Lab.

    Compiled in this resource are coding instructions, worksheets, and keypunch sheets for use in the M.E.T.R.O.-APEX simulation, described in detail in documents ED 064 530 through ED 064 550. Air Pollution Exercise (APEX) is a computerized college and professional level "real world" simulation of a community with urban and rural problems, industrial…

  5. Codes maintained by the LAACG [Los Alamos Accelerator Code Group] at the NMFECC

    International Nuclear Information System (INIS)

    Wallace, R.; Barts, T.

    1990-01-01

    The Los Alamos Accelerator Code Group (LAACG) maintains two groups of design codes at the National Magnetic Fusion Energy Computing Center (NMFECC). These codes, principally electromagnetic field solvers, are used for the analysis and design of electromagnetic components for accelerators, e.g., magnets, rf structures, pickups, etc. In this paper, the status and future of the installed codes will be discussed with emphasis on an experimental version of one set of codes, POISSON/SUPERFISH

  6. R-matrix analysis code (RAC)

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Qi Huiquan

    1990-01-01

    A comprehensive R-matrix analysis code has been developed. It is based on multichannel, multilevel R-matrix theory and runs on a VAX computer in FORTRAN-77. With this code, many kinds of experimental data for one nuclear system can be fitted simultaneously. Comparisons between the RAC code and the EDA code of LANL were made. The data show that both codes produce the same calculated results when one set of R-matrix parameters is used. The differential cross section of ¹⁰B(n,α)⁷Li for En = 0.4 MeV and the polarization of ¹⁶O(n,n)¹⁶O for En = 2.56 MeV are presented.

  7. Code-Switching: L1-Coded Mediation in a Kindergarten Foreign Language Classroom

    Science.gov (United States)

    Lin, Zheng

    2012-01-01

    This paper is based on a qualitative inquiry that investigated the role of teachers' mediation in three different modes of coding in a kindergarten foreign language classroom in China (i.e. L2-coded intralinguistic mediation, L1-coded cross-lingual mediation, and L2-and-L1-mixed mediation). Through an exploratory examination of the varying effects…

  8. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
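
    As a hedged illustration of the graph construction cited above (following Fimmel et al. 2016, with an illustrative trinucleotide set rather than the maximal code X from the paper): for every trinucleotide b1b2b3 in a code X, the directed graph G(X) receives the arcs b1 → b2b3 and b1b2 → b3, and X is circular exactly when G(X) has no directed cycle.

```python
# Sketch of the G(X) construction and acyclicity test; the example set below is
# illustrative and is NOT the maximal self-complementary circular code of the paper.
from collections import defaultdict

def build_graph(code):
    g = defaultdict(set)
    for t in code:                  # t = b1 b2 b3
        g[t[0]].add(t[1:])          # arc b1 -> b2b3
        g[t[:2]].add(t[2])          # arc b1b2 -> b3
    return g

def is_acyclic(g):
    VISITING, DONE = 0, 1
    state = {}
    def dfs(v):
        state[v] = VISITING
        for w in g.get(v, ()):
            if state.get(w) == VISITING:
                return False        # back edge => directed cycle
            if w not in state and not dfs(w):
                return False
        state[v] = DONE
        return True
    return all(dfs(v) for v in list(g) if v not in state)

X = {"AAC", "AAT", "ACC", "ATC", "ATT", "CAG", "GTC"}   # illustrative trinucleotide set
print("circular" if is_acyclic(build_graph(X)) else "not circular")
```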

  9. Optimal Codes for the Burst Erasure Channel

    Science.gov (United States)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
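
    The block-interleaving idea described above can be made concrete with a small sketch. The parameters below (interleaving depth, codeword length, single-parity-check component code) are illustrative assumptions, not the design of any particular mission: d codewords are written as rows and transmitted column by column, so a burst erasure of length up to d hits each codeword at most once, which even a single-parity-check (SPC) code can repair.

```python
# Illustrative sketch (assumed parameters): block-interleave d SPC codewords and
# recover a burst erasure of length d, i.e. one erased symbol per codeword.
import random

def spc_encode(data):                       # append an XOR parity symbol
    parity = 0
    for x in data:
        parity ^= x
    return data + [parity]

def interleave(rows):                       # transmit column by column
    return [rows[r][c] for c in range(len(rows[0])) for r in range(len(rows))]

def deinterleave(stream, d, n):
    rows = [[None] * n for _ in range(d)]
    for i, sym in enumerate(stream):
        rows[i % d][i // d] = sym
    return rows

def spc_restore(row):                       # fill at most one erasure from parity
    missing = [i for i, x in enumerate(row) if x is None]
    assert len(missing) <= 1, "SPC corrects at most one erasure per codeword"
    if missing:
        row[missing[0]] = 0
        acc = 0
        for x in row:
            acc ^= x
        row[missing[0]] = acc               # equals XOR of all other symbols
    return row

d, k = 8, 10                                # interleaving depth, data symbols per codeword
rows = [spc_encode([random.randrange(256) for _ in range(k)]) for _ in range(d)]
stream = interleave(rows)

start = random.randrange(len(stream) - d)   # erase a burst of d consecutive symbols
erased = stream[:start] + [None] * d + stream[start + d:]

recovered = [spc_restore(r) for r in deinterleave(erased, d, k + 1)]
assert recovered == rows                    # each codeword saw at most one erasure
```

    With an MDS component code (e.g., Reed-Solomon) correcting t erasures per codeword, the same construction guarantees bursts of up to d·t symbols, which is the kind of guarantee the near-optimality claim above refers to.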

  10. On Network Coded Filesystem Shim

    DEFF Research Database (Denmark)

    Sørensen, Chres Wiant; Roetter, Daniel Enrique Lucani; Médard, Muriel

    2017-01-01

    Although network coding has shown the potential to revolutionize networking and storage, its deployment has faced a number of challenges. Usual proposals involve two approaches. First, deploying a new protocol (e.g., Multipath Coded TCP), or retrofitting another one (e.g., TCP/NC) to deliver bene...

  11. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1996-01-01

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2

  12. Tribal Green Building Administrative Code Example

    Science.gov (United States)

    This Tribal Green Building Administrative Code Example can be used as a template for technical code selection (i.e., building, electrical, plumbing, etc.) to be adopted as a comprehensive building code.

  13. Ultrasound strain imaging using Barker code

    Science.gov (United States)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft-tissue elasticity in order to help clinicians detect lesions or cancers in tissues. In this paper, a Barker code is applied to strain imaging to improve its quality. A Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched filter output is -22 dB, which is unacceptable for ultrasound strain imaging, because a high sidelobe level causes high decorrelation noise. Instead of using the conventional matched filter, we use a Wiener filter to decode the Barker-coded echo signal and suppress the range sidelobes. We also compare the performance of the Barker code and a conventional short pulse by simulation. The simulation results demonstrate that the performance of the Wiener filter is much better than that of the matched filter, and that the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse in low-eSNR or great-depth conditions due to the increased eSNR it provides.
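
    The quoted -22 dB sidelobe figure can be reproduced with a few lines of NumPy (a generic sketch, not the authors' imaging simulation): the autocorrelation of the length-13 Barker sequence peaks at 13 with sidelobes of magnitude 1, i.e. a peak sidelobe level of 20·log10(1/13) ≈ -22.3 dB.

```python
# Matched-filter (autocorrelation) output of the length-13 Barker code and its
# peak sidelobe level; illustrative only, not the paper's strain-imaging setup.
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

out = np.correlate(barker13, barker13, mode="full")        # matched-filter output
peak = out.max()                                           # 13 at zero lag
sidelobe = np.abs(np.delete(out, np.argmax(out))).max()    # 1 everywhere else

print(f"peak sidelobe level: {20 * np.log10(sidelobe / peak):.1f} dB")   # about -22.3 dB
```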

  14. Mapping the Plasticity of the E. coli Genetic Code with Orthogonal Pair Directed Sense Codon Reassignment.

    Science.gov (United States)

    Schmitt, Margaret A; Biddle, Wil; Fisk, John Domenic

    2018-04-18

    The relative quantitative importance of the factors that determine the fidelity of translation is largely unknown, which makes predicting the extent to which the degeneracy of the genetic code can be broken challenging. Our strategy of using orthogonal tRNA/aminoacyl tRNA synthetase pairs to precisely direct the incorporation of a single amino acid in response to individual sense and nonsense codons provides a suite of related data with which to examine the plasticity of the code. Each directed sense codon reassignment measurement is an in vivo competition experiment between the introduced orthogonal translation machinery and the natural machinery in E. coli. This report discusses 20 new, related genetic codes, in which a targeted E. coli wobble codon is reassigned to tyrosine utilizing the orthogonal tyrosine tRNA/aminoacyl tRNA synthetase pair from Methanocaldococcus jannaschii. One at a time, reassignment of each targeted sense codon to tyrosine is quantified in cells by measuring the fluorescence of GFP variants in which the essential tyrosine residue is encoded by a non-tyrosine codon. Significantly, every wobble codon analyzed may be partially reassigned with efficiencies ranging from 0.8% to 41%. The accumulation of the suite of data enables a qualitative dissection of the relative importance of the factors affecting the fidelity of translation. While some correlation was observed between sense codon reassignment and either competing endogenous tRNA abundance or changes in aminoacylation efficiency of the altered orthogonal system, no single factor appears to predominately drive translational fidelity. Evaluation of relative cellular fitness in each of the 20 quantitatively-characterized proteome-wide tyrosine substitution systems suggests that at a systems level, E. coli is robust to missense mutations.
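
    As a rough illustration of how such a percentage could be obtained from the fluorescence measurements described above (the normalization below is an assumption for illustration, not the paper's exact protocol), reassignment efficiency can be expressed as the GFP signal of the variant carrying a non-tyrosine codon at the essential site relative to a wild-type tyrosine control:

```python
# Hypothetical fluorescence readings (arbitrary units); the normalization scheme is
# an assumption for illustration, not taken from the paper.
variant_fluorescence = 8200.0     # GFP variant with the targeted non-Tyr wobble codon
wildtype_fluorescence = 20000.0   # control with the natural Tyr codon

efficiency = 100.0 * variant_fluorescence / wildtype_fluorescence
print(f"sense codon reassignment: {efficiency:.1f}%")   # 41.0%, the top of the quoted range
```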

  15. Advanced thermohydraulic simulation code for transients in LMFBRs (SSC-L code)

    Energy Technology Data Exchange (ETDEWEB)

    Agrawal, A.K.

    1978-02-01

    Physical models for various processes that are encountered in preaccident and transient simulation of thermohydraulic transients in the entire liquid metal fast breeder reactor (LMFBR) plant are described in this report. A computer code, SSC-L, was written as a part of the Super System Code (SSC) development project for the ''loop''-type designs of LMFBRs. This code has the self-starting capability, i.e., preaccident or steady-state calculations are performed internally. These results then serve as the starting point for the transient simulation.

  16. Advanced thermohydraulic simulation code for transients in LMFBRs (SSC-L code)

    International Nuclear Information System (INIS)

    Agrawal, A.K.

    1978-02-01

    Physical models for various processes that are encountered in preaccident and transient simulation of thermohydraulic transients in the entire liquid metal fast breeder reactor (LMFBR) plant are described in this report. A computer code, SSC-L, was written as a part of the Super System Code (SSC) development project for the ''loop''-type designs of LMFBRs. This code has the self-starting capability, i.e., preaccident or steady-state calculations are performed internally. These results then serve as the starting point for the transient simulation

  17. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    Science.gov (United States)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coding and modulation technology with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances performance. Simulations are carried out for an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduces the number of decoders by 72% and realizes 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.

  18. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to incorrect estimation of the consequences of accidents, and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. "agreement is within 10%". A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
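
    The abstract does not spell out the quantitative method it describes, so the sketch below is only a generic illustration of the idea of ranking codes by a numerical figure of merit (here, root-mean-square relative deviation from the measurements); the data and code names are hypothetical.

```python
# Generic ranking of competing codes by RMS relative deviation from experiment;
# illustrative data, not the method or results of the cited paper.
import math

def rms_relative_deviation(pred, meas):
    return math.sqrt(sum(((p - m) / m) ** 2 for p, m in zip(pred, meas)) / len(meas))

experiment = [310.0, 325.0, 340.0, 355.0]             # hypothetical measured values
predictions = {
    "code_A": [312.0, 323.0, 344.0, 352.0],           # hypothetical code outputs
    "code_B": [305.0, 331.0, 333.0, 362.0],
}

ranking = sorted(predictions, key=lambda c: rms_relative_deviation(predictions[c], experiment))
print(ranking)    # codes listed in order of merit (smallest deviation first)
```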

  19. The DIT nuclear fuel assembly physics design code

    International Nuclear Information System (INIS)

    Jonsson, A.

    1987-01-01

    DIT is the Combustion Engineering, Inc. (C-E) nuclear fuel assembly design code. It belongs to a class of codes, all similar in structure and strategy, which may be characterized by the spectrum and spatial calculations being performed in 2D and in a single job step for the entire assembly. The forerunner of this class of codes is the U.K.A.E.A. WIMS code, the first version of which was completed 25 years ago. The structure and strategy of assembly spectrum codes have remained remarkably similar to the original concept thus proving its usefulness. As other organizations, including C-E, have developed their own versions of the concept, many important variations have been added which significantly influence the accuracy and performance of the resulting computational tool. This paper describes and discusses those features which are unique to the DIT code and which might be of interest to the community of fuel assembly physics design code users and developers

  20. Interrelations of codes in human semiotic systems.

    OpenAIRE

    Somov, Georgij

    2016-01-01

    Codes can be viewed as mechanisms that enable relations of signs and their components, i.e., through which semiosis is actualized. The combinations of these relations produce new relations as new codes are built over other codes. Structures appear in the mechanisms of codes. Hence, codes can be described as transformations of structures from some material systems into others. Structures belong to different carriers, but exist in codes in their "pure" form. The building of codes over other codes fosters t...

  1. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present

  2. Multiplexed coding in the human basal ganglia

    Science.gov (United States)

    Andres, D. S.; Cerquetti, D.; Merello, M.

    2016-04-01

    A classic controversy in neuroscience is whether information carried by spike trains is encoded by a time-averaged measure (e.g. a rate code) or by complex time patterns (i.e. a time code). Here we apply a tool to quantitatively analyze the neural code. We make use of an algorithm based on the calculation of the temporal structure function, which makes it possible to distinguish which scales of a signal are dominated by a complex temporal organization and which by a randomly generated process. In terms of the neural code, this kind of analysis makes it possible to detect temporal scales at which a time-pattern coding scheme or, alternatively, a rate code is present. Additionally, by finding the temporal scale at which the correlation between interspike intervals fades, the length of the basic information unit of the code can be established, and hence the word length of the code can be found. We apply this algorithm to neuronal recordings obtained from the Globus Pallidus pars interna of a human patient with Parkinson's disease, and show that a time-pattern coding scheme and a rate coding scheme co-exist at different temporal scales, offering a new example of multiplexed neuronal coding.
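
    A minimal sketch of a temporal structure function of the kind referred to above (the exact estimator and parameters used by the authors are not given in the abstract and are assumed here): for a series x of interspike intervals, S(τ) = ⟨|x[i+τ] − x[i]|⟩; scales where S(τ) varies with τ indicate temporal structure, while a flat S(τ) is compatible with a memoryless, rate-code-like regime.

```python
# Temporal structure function of an interspike-interval series; the surrogate data
# and the first-order estimator are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
isi = rng.exponential(scale=10.0, size=5000)     # surrogate ISIs (ms), memoryless

def structure_function(x, taus):
    return np.array([np.mean(np.abs(x[tau:] - x[:-tau])) for tau in taus])

taus = np.arange(1, 100)
S = structure_function(isi, taus)
print(np.round(S[:5], 2))                        # roughly flat for this renewal surrogate
```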

  3. User Effect on Code Application and Qualification Needs

    International Nuclear Information System (INIS)

    D'Auria, F.; Salah, A.B.

    2008-01-01

    Experience with some code assessment case studies, and also with additional ISPs, has shown the dominant effect of the code user on the predicted system behavior. The general findings of the user effect investigations on some of the case studies indicate, specifically, that in addition to user effects there are other factors which affect the results of the calculations and are hidden under the general title of user effects: the specific characteristics of experimental facilities, i.e. limitations as far as code assessment is concerned; limitations of the thermal-hydraulic codes used to simulate certain system behavior or phenomena; and limitations due to interpretation of experimental data by the code user, i.e. interpretation of the experimental database. On the basis of the discussions in this paper, the following conclusions and recommendations can be made. More dialogue appears to be necessary with the experimenters in the planning of code assessment calculations, e.g. ISPs. User guidelines are not complete for the codes, and a lack of sufficient and detailed user guidelines is observed in some of the case studies. More extensive user instruction and training, improved user guidelines, or quality assurance procedures may partially reduce some of the subjective user influence on the calculated results. The discrepancies between experimental data and code predictions are due both to intrinsic code limits and to the so-called user effects; there is a worthwhile need to quantify the percentage of disagreement due to poor utilization of the code and that due to the code itself. This need arises especially for uncertainty evaluation studies (e.g. [18]) which do not take the mentioned user effects into account. A much more focused investigation, based on the results of comparison calculations, e.g. ISPs, analyzing the experimental data and the results of the specific code in order to evaluate the user effects and the related experimental aspects, should be an integral part of the

  4. On video formats and coding efficiency

    NARCIS (Netherlands)

    Bellers, E.B.; Haan, de G.

    2001-01-01

    This paper examines the efficiency of MPEG-2 coding for interlaced and progressive video, and compares de-interlacing and picture rate up-conversion before and after coding. We found receiver side de-interlacing and picture rate up-conversion (i.e. after coding) to give better image quality at a

  5. Mirage, a food chain transfer and dosimetric impact code in relation with atmospheric and liquid dispersion codes

    International Nuclear Information System (INIS)

    Van Dorpe, F.; Jourdain, F.

    2006-01-01

    Full text: The numerical code M.I.R.A.G.E. (Module of Radiological impact calculations on the Environment due to accidental or chronic nuclear releases through Aqueous and Gas media) has been developed to simulate radionuclide transfer in the biosphere and food chains, as well as the dosimetric impact on man, after accidental or chronic releases into the environment by nuclear installations. The originality of M.I.R.A.G.E. is that it offers a single tool chained downstream of various atmospheric and liquid dispersion codes. The code M.I.R.A.G.E. is a series of modules which makes it possible to carry out evaluations of transfers in food chains and of the human dose impact. Currently, M.I.R.A.G.E. is chained with a Gaussian atmospheric dispersion code, H.A.R.M.A.T.T.A.N. (CEA), a 3D atmospheric dispersion code with a Lagrangian model, M.I.N.E.R.V.E.-S.P.R.A.Y. (Aria Technology), and a 3D groundwater transfer code, M.A.R.T.H.E. (B.R.G.M.). M.I.R.A.G.E. uses concentration or activity result files as initial input data for its calculations. The application first calculates the concentrations in the various compartments of the environment (soils, plants, animals). The results are given in the form of concentration and dose maps, and also at a particular place called a reference group for the dosimetric impact (such as a village or a specific population group located around a nuclear installation). The input and output data of M.I.R.A.G.E. can carry geographic coordinates and are thus readable by a G.I.S. M.I.R.A.G.E. is an open system to which it is easy to chain dispersion codes other than those currently used. Calculations decoupled from the dispersion calculations are also possible by manual entry of the dispersion data (contamination of a groundwater table, a particular value at a point, etc.). M.I.R.A.G.E. takes into account soil deposits and the resuspension phenomenon, transfers in plants and animals (choice of agricultural parameters, types of plants and animals, etc

  6. Office of Codes and Standards resource book. Section 1, Building energy codes and standards

    Energy Technology Data Exchange (ETDEWEB)

    Hattrup, M.P.

    1995-01-01

    The US Department of Energy's (DOE's) Office of Codes and Standards has developed this Resource Book to provide: a discussion of DOE involvement in building codes and standards; a current and accurate set of descriptions of residential, commercial, and Federal building codes and standards; information on State contacts, State code status, State building construction unit volume, and State needs; and a list of stakeholders in the building energy codes and standards arena. The Resource Book is considered an evolving document and will be updated occasionally. Users are requested to submit additional data (e.g., more current, widely accepted, and/or documented data) and suggested changes to the address listed below. Please provide sources for all data provided.

  7. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes (''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Cask,'' R.E. Glass, Sandia National Laboratories, 1985; ''Sample Problem Manual for Benchmarking of Cask Analysis Codes,'' R.E. Glass, Sandia National Laboratories, 1988; ''Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages, R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in ''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  8. Computer and compiler effects on code results: status report

    International Nuclear Information System (INIS)

    1996-01-01

    Within the framework of the international effort on the assessment of computer codes, which are designed to describe the overall reactor coolant system (RCS) thermalhydraulic response, core damage progression, and fission product release and transport during severe accidents, there has been a continuous debate as to whether the code results are influenced by different code users or by different computers or compilers. The first aspect, the 'Code User Effect', has already been investigated. In this paper the other aspects are discussed and proposals are given for how to make large system codes insensitive to different computers and compilers. Hardware errors and memory problems are not considered in this report. The codes investigated herein are integrated code systems (e.g. ESTER, MELCOR) and thermalhydraulic system codes with extensions for severe accident simulation (e.g. SCDAP/RELAP, ICARE/CATHARE, ATHLET-CD), and codes to simulate fission product transport (e.g. TRAPMELT, SOPHAEROS). Since all of these codes are programmed in Fortran 77, the discussion herein is based on this programming language, although some remarks are made about Fortran 90. Some observations about different code results obtained with different computers are reported and possible reasons for this unexpected behaviour are listed. Then methods are discussed for how to avoid portability problems

  9. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  10. QR CODE IN LIBRARY PRACTICE SOME EXAMPLES

    OpenAIRE

    Ajay Shanker Mishra*, Sachin Kumar Umre, Pavan Kumar Gupta

    2017-01-01

    Quick Response (QR) code is one such technology that can cater to users' demand for access to resources through mobile devices. The main objective of this article is to review the concept of the Quick Response code (QR code) and describe the practice of reading and generating QR codes. The paper covers the basic concept, structure, and technological pros and cons of the QR code. The literature is filled with potential uses for Quick Response (QR) codes in library practice, like e-resour...
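
    As a small practical illustration of generating QR codes as described above (not an example taken from the article; the URL and the use of the open-source Python qrcode package are assumptions), a library could print a code that sends a patron's phone to a catalogue record:

```python
# Generate a QR code image for a (hypothetical) catalogue record URL.
# Requires the open-source package: pip install qrcode[pil]
import qrcode

url = "https://library.example.org/catalogue/record/12345"   # hypothetical URL
img = qrcode.make(url)            # returns a PIL image containing the QR code
img.save("record-12345.png")      # print on a shelf label, poster or e-resource guide
```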

  11. E-COMMERCE GOODY BAG SPUNBOND MENGGUNAKAN QR CODE BERBASIS WEB RESPONSIF

    Directory of Open Access Journals (Sweden)

    Rizkysari Meimaharani

    2014-11-01

    Full Text Available Abstract: Marketing is one of the most important concerns for a business. Without good marketing, it is difficult to move a business forward. This is also the case for one of the businesses in Kudus, the Vantacy Shop. Vantacy Shop is a business that sells goody bags and also designs the bags itself. Its marketing so far has been door to door, that is, conventional marketing. Given current technological developments, this study attempts to promote Vantacy Shop by establishing online marketing. The online sales site is complemented by the dissemination of information about the goody bags using QR Codes. The QR Code is a technology that has already been widely applied on the Android operating system. With the QR Code, consumers are expected to obtain information about what Vantacy Shop offers more easily. In addition, the online sales website is built as a responsive website, so that it can be accessed with a good layout on any device the consumer owns. Keywords: responsive web, QR code, marketing

  12. pix2code: Generating Code from a Graphical User Interface Screenshot

    OpenAIRE

    Beltramelli, Tony

    2017-01-01

    Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms (i.e. iOS, Android and web-based technologies).

  13. Under-coding of secondary conditions in coded hospital health data: Impact of co-existing conditions, death status and number of codes in a record.

    Science.gov (United States)

    Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude

    2017-12-01

    This study examined the coding validity of hypertension, diabetes, obesity and depression in relation to the presence of their co-existing conditions, death status and the number of diagnosis codes in a hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. Coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death status on coding validity was minimal. The coding validity of a condition is closely related to its clinical importance and to the complexity of the patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.
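
    The validity measures named above are the standard ones used when administrative codes are checked against chart review: sensitivity = TP/(TP+FN) and positive predictive value = TP/(TP+FP). A small helper with purely hypothetical counts makes the definitions concrete.

        def sensitivity_and_ppv(tp: int, fp: int, fn: int) -> tuple[float, float]:
            """Coding validity of a condition versus chart review:
            sensitivity = TP/(TP+FN), positive predictive value = TP/(TP+FP)."""
            return tp / (tp + fn), tp / (tp + fp)

        # Hypothetical counts, e.g. for diabetes codes in a reviewed sample.
        sens, ppv = sensitivity_and_ppv(tp=320, fp=25, fn=110)
        print(f"sensitivity = {sens:.2f}, PPV = {ppv:.2f}")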

  14. The DIT nuclear fuel assembly physics design code

    International Nuclear Information System (INIS)

    Jonsson, A.

    1988-01-01

    The DIT code is the Combustion Engineering, Inc. (C-E) nuclear fuel assembly design code. It belongs to a class of codes, all similar in structure and strategy, that may be characterized by the spectrum and spatial calculations being performed in two dimensions and in a single job step for the entire assembly. The forerunner of this class of codes is the United Kingdom Atomic Energy Authority WIMS code, the first version of which was completed 25 yr ago. The structure and strategy of assembly spectrum codes have remained remarkably similar to the original concept thus proving its usefulness. As other organizations, including C-E, have developed their own versions of the concept, many important variations have been added that significantly influence the accuracy and performance of the resulting computational tool. Those features, which are unique to the DIT code and which might be of interest to the community of fuel assembly physics design code users and developers, are described and discussed

  15. On fuzzy semantic similarity measure for DNA coding.

    Science.gov (United States)

    Ahmad, Muneer; Jung, Low Tang; Bhuiyan, Md Al-Amin

    2016-02-01

    A coding measure scheme numerically translates the DNA sequence to a time domain signal for protein coding regions identification. A number of coding measure schemes based on numerology, geometry, fixed mapping, statistical characteristics and chemical attributes of nucleotides have been proposed in recent decades. Such coding measure schemes lack the biologically meaningful aspects of nucleotide data and hence do not significantly discriminate coding regions from non-coding regions. This paper presents a novel fuzzy semantic similarity measure (FSSM) coding scheme centering on FSSM codons' clustering and genetic code context of nucleotides. Certain natural characteristics of nucleotides, i.e., appearance as a unique combination of triplets, preserving special structure and occurrence, and ability to own and share density distributions in codons, have been exploited in FSSM. The nucleotides' fuzzy behaviors, semantic similarities and defuzzification based on the center of gravity of nucleotides revealed a strong correlation between nucleotides in codons. The proposed FSSM coding scheme attains a significant enhancement in coding regions identification, i.e., 36-133%, as compared to other existing coding measure schemes tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. Copyright © 2015 Elsevier Ltd. All rights reserved.
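
    For readers unfamiliar with coding measure schemes, the sketch below shows the simplest fixed-mapping scheme that the abstract contrasts with FSSM: the binary indicator (Voss) representation, whose summed power spectrum exhibits the period-3 component commonly used to flag protein coding regions. It is generic background, not the FSSM algorithm itself.

        import numpy as np

        def indicator_signals(seq: str) -> dict:
            """Map a DNA string to four binary indicator signals, one per nucleotide
            (the classic fixed-mapping coding measure)."""
            seq = seq.upper()
            return {nt: np.array([1 if s == nt else 0 for s in seq]) for nt in "ACGT"}

        signals = indicator_signals("ATGGCGTACGATGGCTTACGA")
        # Coding regions tend to show a period-3 component, visible as a peak near
        # k = N/3 in the summed power spectra of the four indicator signals.
        spectrum = sum(np.abs(np.fft.fft(x)) ** 2 for x in signals.values())
        print(spectrum.round(1))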

  16. Using Peephole Optimization on Intermediate Code

    NARCIS (Netherlands)

    Tanenbaum, A.S.; van Staveren, H.; Stevenson, J.W.

    1982-01-01

    Many portable compilers generate an intermediate code that is subsequently translated into the target machine's assembly language. In this paper a stack-machine-based intermediate code suitable for algebraic languages (e.g., PASCAL, C, FORTRAN) and most byte-addressed mini- and microcomputers is
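
    A peephole optimizer of this kind slides a small window over consecutive intermediate-code instructions and rewrites local patterns. The toy pass below uses a made-up stack-machine instruction set and an illustrative pattern table; it mimics the idea, not the optimization table of the paper.

        # Each rule maps a short instruction pattern to its replacement.
        PATTERNS = [
            (("PUSH 0", "ADD"), ()),            # adding zero is a no-op
            (("PUSH 1", "MUL"), ()),            # multiplying by one is a no-op
            (("NEG", "NEG"), ()),               # double negation cancels
            (("PUSH 2", "MUL"), ("SHL 1",)),    # strength reduction: *2 -> shift
        ]

        def peephole(code):
            """Repeatedly apply the pattern table until no window matches."""
            changed = True
            while changed:
                changed = False
                for i in range(len(code)):
                    for pat, repl in PATTERNS:
                        if tuple(code[i:i + len(pat)]) == pat:
                            code[i:i + len(pat)] = list(repl)
                            changed = True
                            break
                    if changed:
                        break
            return code

        print(peephole(["LOAD x", "PUSH 2", "MUL", "PUSH 0", "ADD", "STORE y"]))
        # -> ['LOAD x', 'SHL 1', 'STORE y']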

  17. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  18. Visual Coding of Human Bodies: Perceptual Aftereffects Reveal Norm-Based, Opponent Coding of Body Identity

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.

    2013-01-01

    Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…

  19. Remote-Handled Transuranic Content Codes

    Energy Technology Data Exchange (ETDEWEB)

    Washington TRU Solutions

    2006-12-01

    The Remote-Handled Transuranic (RH-TRU) Content Codes (RH-TRUCON) document describes the inventory of RH-TRU waste within the transportation parameters specified by the Remote-Handled Transuranic Waste Authorized Methods for Payload Control (RH-TRAMPAC).1 The RH-TRAMPAC defines the allowable payload for the RH-TRU 72-B. This document is a catalog of RH-TRU 72-B authorized contents by site. A content code is defined by the following components: • A two-letter site abbreviation that designates the physical location of the generated/stored waste (e.g., ID for Idaho National Laboratory [INL]). The site-specific letter designations for each of the sites are provided in Table 1. • A three-digit code that designates the physical and chemical form of the waste (e.g., content code 317 denotes TRU Metal Waste). For RH-TRU waste to be transported in the RH-TRU 72-B, the first number of this three-digit code is “3.” The second and third numbers of the three-digit code describe the physical and chemical form of the waste. Table 2 provides a brief description of each generic code. Content codes are further defined as subcodes by an alpha trailer after the three-digit code to allow segregation of wastes that differ in one or more parameter(s). For example, the alpha trailers of the subcodes ID 322A and ID 322B may be used to differentiate between waste packaging configurations. As detailed in the RH-TRAMPAC, compliance with flammable gas limits may be demonstrated through the evaluation of compliance with either a decay heat limit or flammable gas generation rate (FGGR) limit per container specified in approved content codes. As applicable, if a container meets the watt*year criteria specified by the RH-TRAMPAC, the decay heat limits based on the dose-dependent G value may be used as specified in an approved content code. If a site implements the administrative controls outlined in the RH-TRAMPAC and Appendix 2.4 of the RH-TRU Payload Appendices, the decay heat or FGGR
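
    The content-code format described above (a two-letter site abbreviation, a three-digit physical/chemical form code, and an optional alpha subcode trailer) can be checked mechanically. The following parser is a hypothetical illustration of that format only; it is not part of the RH-TRUCON document or its controls.

        import re

        CODE_RE = re.compile(r"^(?P<site>[A-Z]{2})\s*(?P<form>\d{3})(?P<trailer>[A-Z]?)$")

        def parse_content_code(code: str) -> dict:
            """Split a content code such as 'ID 322A' into its components."""
            m = CODE_RE.match(code.strip().upper())
            if not m:
                raise ValueError(f"not a recognizable content code: {code!r}")
            return {
                "site": m["site"],                    # e.g. "ID" for Idaho National Laboratory
                "form": m["form"],                    # e.g. "317" denotes TRU Metal Waste
                "subcode": m["trailer"] or None,      # e.g. "A"/"B" packaging variants
                "rh_72b": m["form"].startswith("3"),  # RH-TRU 72-B codes start with "3"
            }

        print(parse_content_code("ID 322A"))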

  20. Abstraction carrying code and resource-awareness

    OpenAIRE

    Hermenegildo, Manuel V.; Albert Albiol, Elvira; López García, Pedro; Puebla Sánchez, Alvaro Germán

    2005-01-01

    Proof-Carrying Code (PCC) is a general approach to mobile code safety in which the code supplier augments the program with a certificate (or proof). The intended benefit is that the program consumer can locally validate the certificate w.r.t. the "untrusted" program by means of a certificate checker, a process which should be much simpler, more efficient, and more automatic than generating the original proof. Abstraction Carrying Code (ACC) is an enabling technology for PCC in which an abstract mod...

  1. Ultrafast all-optical code-division multiple-access networks

    Science.gov (United States)

    Kwong, Wing C.; Prucnal, Paul R.; Liu, Yanming

    1992-12-01

    In optical code-division multiple access (CDMA), the architecture of optical encoders/decoders is another important factor that needs to be considered, besides the correlation properties of the already extensively studied optical codes. The architecture of optical encoders/decoders affects, for example, the amount of power loss and the length of optical delays associated with code sequence generation and correlation, which, in turn, affect the power budget, size, and cost of an optical CDMA system. Various CDMA coding architectures are studied in the paper. In contrast to the encoders/decoders used in prime networks (i.e., prime encoders/decoders), which generate, select, and correlate code sequences by a parallel combination of fiber-optic delay-lines, and in 2^n networks (i.e., 2^n encoders/decoders), which generate and correlate code sequences by a serial combination of 2 × 2 passive couplers and fiber delays with sequence selection performed in a parallel fashion, the modified 2^n encoders/decoders generate, select, and correlate code sequences by a serial combination of directional couplers and delays. The power and delay-length requirements of the modified 2^n encoders/decoders are compared to those of the prime and 2^n encoders/decoders. A 100 Mbit/s optical CDMA experiment in free space demonstrating the feasibility of the all-serial coding architecture using a serial combination of 50/50 beam splitters and retroreflectors at 10 Tchip/s (i.e., 100,000 chip/bit) with 100 fs laser pulses is reported.
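
    For background, the prime encoders/decoders mentioned above operate on prime code sequences. In the classical construction, for a prime p the i-th codeword has length p² and weight p, with one pulse per block of length p placed at position i·j mod p in block j. The sketch below generates these sequences; it illustrates the codes only, not the encoder/decoder hardware architectures compared in the paper.

        def prime_codes(p: int) -> list:
            """Return the p binary prime code sequences of length p*p for a prime p."""
            codes = []
            for i in range(p):
                word = [0] * (p * p)
                for j in range(p):                  # one pulse per block of length p
                    word[j * p + (i * j) % p] = 1
                codes.append(word)
            return codes

        for w in prime_codes(5):                    # 5 codewords, weight 5, length 25
            print("".join(map(str, w)))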

  2. A Robust Cross Coding Scheme for OFDM Systems

    NARCIS (Netherlands)

    Shao, X.; Slump, Cornelis H.

    2010-01-01

    In wireless OFDM-based systems, coding jointly over all the sub-carriers simultaneously performs better than coding separately per sub-carrier. However, the joint coding is not always optimal because its achievable channel capacity (i.e. the maximum data rate) is inversely proportional to the

  3. RH-TRU Waste Content Codes

    Energy Technology Data Exchange (ETDEWEB)

    Washington TRU Solutions

    2007-07-01

    The Remote-Handled Transuranic (RH-TRU) Content Codes (RH-TRUCON) document describes the inventory of RH-TRU waste within the transportation parameters specified by the Remote-Handled Transuranic Waste Authorized Methods for Payload Control (RH-TRAMPAC).1 The RH-TRAMPAC defines the allowable payload for the RH-TRU 72-B. This document is a catalog of RH-TRU 72-B authorized contents by site. A content code is defined by the following components: • A two-letter site abbreviation that designates the physical location of the generated/stored waste (e.g., ID for Idaho National Laboratory [INL]). The site-specific letter designations for each of the sites are provided in Table 1. • A three-digit code that designates the physical and chemical form of the waste (e.g., content code 317 denotes TRU Metal Waste). For RH-TRU waste to be transported in the RH-TRU 72-B, the first number of this three-digit code is “3.” The second and third numbers of the three-digit code describe the physical and chemical form of the waste. Table 2 provides a brief description of each generic code. Content codes are further defined as subcodes by an alpha trailer after the three-digit code to allow segregation of wastes that differ in one or more parameter(s). For example, the alpha trailers of the subcodes ID 322A and ID 322B may be used to differentiate between waste packaging configurations. As detailed in the RH-TRAMPAC, compliance with flammable gas limits may be demonstrated through the evaluation of compliance with either a decay heat limit or flammable gas generation rate (FGGR) limit per container specified in approved content codes. As applicable, if a container meets the watt*year criteria specified by the RH-TRAMPAC, the decay heat limits based on the dose-dependent G value may be used as specified in an approved content code. If a site implements the administrative controls outlined in the RH-TRAMPAC and Appendix 2.4 of the RH-TRU Payload Appendices, the decay heat or FGGR

  4. Cell-assembly coding in several memory processes.

    Science.gov (United States)

    Sakurai, Y

    1998-01-01

    The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are the overlap of neurons among different assemblies and the connection dynamics within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information. Individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information. Correlations of activity among some of the recorded neurons appear to change in multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of the cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.

  5. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, offering to shift processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  6. Identification of coding and non-coding mutational hotspots in cancer genomes.

    Science.gov (United States)

    Piraino, Scott W; Furney, Simon J

    2017-01-05

    The identification of mutations that play a causal role in tumour development, so-called "driver" mutations, is of critical importance for understanding how cancers form and how they might be treated. Several large cancer sequencing projects have identified genes that are recurrently mutated in cancer patients, suggesting a role in tumourigenesis. While the landscape of coding drivers has been extensively studied and many of the most prominent driver genes are well characterised, comparatively less is known about the role of mutations in the non-coding regions of the genome in cancer development. The continuing fall in genome sequencing costs has resulted in a concomitant increase in the number of cancer whole genome sequences being produced, facilitating systematic interrogation of both the coding and non-coding regions of cancer genomes. To examine the mutational landscapes of tumour genomes we have developed a novel method to identify mutational hotspots in tumour genomes using both mutational data and information on evolutionary conservation. We have applied our methodology to over 1300 whole cancer genomes and show that it identifies prominent coding and non-coding regions that are known or highly suspected to play a role in cancer. Importantly, we applied our method to the entire genome, rather than relying on predefined annotations (e.g. promoter regions) and we highlight recurrently mutated regions that may have resulted from increased exposure to mutational processes rather than selection, some of which have been identified previously as targets of selection. Finally, we implicate several pan-cancer and cancer-specific candidate non-coding regions, which could be involved in tumourigenesis. We have developed a framework to identify mutational hotspots in cancer genomes, which is applicable to the entire genome. This framework identifies known and novel coding and non-coding mutational hotspots and can be used to differentiate candidate driver regions from

  7. Cloning and over expression of non-coding RNA rprA in E.coli and its resistance to Kanamycin without osmotic shock.

    Science.gov (United States)

    Sahni, Azita; Hajjari, Mohammadreza; Raheb, Jamshid; Foroughmand, Ali Mohammad; Asgari, Morteza

    2017-01-01

    Recent reports have indicated that small RNAs have key roles in the response of E. coli to stress and also in the regulation of virulence factors. It seems that some small non-coding RNAs are involved in multidrug resistance. Previous studies have indicated that rprA can increase tolerance to kanamycin in RcsB-deficient Escherichia coli K-12 following osmotic shock. The current study aims to clone and over-express the non-coding RNA rprA in E. coli and investigate its effect on bacterial resistance to kanamycin without any osmotic shock. For this purpose, the rprA gene was amplified by PCR and then cloned into the pET-28a(+) vector. The recombinant plasmid was transformed into wild-type E. coli BL21 (DE3). Over-expression was induced by IPTG and confirmed by qRT-PCR. Resistance to kanamycin was then measured at different times by spectrophotometry. The statistical analysis showed that rprA can increase resistance to kanamycin in E. coli K-12. The interaction between rprA and rpoS was reviewed and analyzed by in silico methods. The results showed that bacteria over-expressing rprA were more resistant to kanamycin. The present study is an important step towards proving the role of the non-coding RNA rprA in bacterial resistance. The data can serve as a basis for future work and can also help to develop and deliver next-generation antibiotics.

  8. The WIMS family of codes

    International Nuclear Information System (INIS)

    Askew, J.

    1981-01-01

    WIMS-D4 is the latest version of the original form of the Winfrith Improved Multigroup Scheme, developed in 1963-5 for lattice calculations on all types of thermal reactor, whether moderated by graphite, heavy or light water. The code, in earlier versions, has been available from the NEA code centre for a number of years in both IBM and CDC dialects of FORTRAN. An important feature of this code was its rapid, accurate deterministic system for treating resonance capture in heavy nuclides, capable of dealing with both regular pin lattices and with cluster geometries typical of pressure-tube and gas-cooled reactors. WIMS-E is a compatible code scheme in which each calculation step is bounded by standard interfaces on disc or tape. The interfaces contain files of information in a standard form, restricted to numbers representing physically meaningful quantities such as cross-sections and fluxes. Restriction of code intercommunication to this channel limits the possible propagation of errors. A module is capable of transforming WIMS-D output into the standard interface form and hence the two schemes can be linked if required. LWR-WIMS was developed in 1970 as a method of calculating LWR reloads for the fuel fabricators BNFL/GUNF. It uses the WIMS-E library and a number of the same module

  9. WAM-E user's manual

    International Nuclear Information System (INIS)

    Rayes, L.G.; Riley, J.E.

    1986-07-01

    The WAM-E series of mainframe computer codes has been developed to efficiently analyze the large binary models (e.g., fault trees) used to represent the logic relationships within and between the systems of a nuclear power plant or other large, multisystem entity. These codes have found wide application in reliability and safety studies of nuclear power plant systems. There are now nine codes in the WAM-E series, with six (WAMBAM/WAMTAP, WAMCUT, WAMCUT-II, WAMFM, WAMMRG, and SPASM) classified as Type A Production codes and the other three (WAMFTP, WAMTOP, and WAMCONV) classified as Research codes. This document serves as a combined User's Guide, Programmer's Manual, and Theory Reference for the codes, with emphasis on the Production codes. To that end, the manual is divided into four parts: Part I, Introduction; Part II, Theory and Numerics; Part III, WAM-E User's Guide; and Part IV, WAMMRG Programmer's Manual

  10. Mobile Code: The Future of the Internet

    Science.gov (United States)

    1999-01-01

    code (mobile agents) to multiple proxies or servers; "customization" (e.g., re-formatting, filtering, metasearch); information overload; diversified... Mobile code is necessary, rather than client-side code, since many customization features (such as information monitoring) do not work if the... economic foundation for Web sites, many Web sites earn money solely from advertisements. If these sites allow mobile agents to easily access the content

  11. SCDAP/RELAP5 code development and assessment

    International Nuclear Information System (INIS)

    Allison, C.M.; Hohorst, J.K.

    1996-01-01

    The SCDAP/RELAP5 computer code is designed to describe the overall reactor coolant system thermal-hydraulic response, core damage progression, and fission product release during severe accidents. The code is being developed at the Idaho National Engineering Laboratory under the primary sponsorship of the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission. The current version of the code is SCDAP/RELAP5/MOD3.1e. Although MOD3.1e contains a number of significant improvements since the initial version of MOD3.1 was released, new models to treat the behavior of the fuel and cladding during reflood have had the most dramatic impact on the code's calculations. This paper provides a brief description of the new reflood models, presents highlights of the assessment of the current version of MOD3.1, and discusses future SCDAP/RELAP5/MOD3.2 model development activities

  12. Coding visual features extracted from video sequences.

    Science.gov (United States)

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.

  13. Benchmark of the HDR E11.2 containment hydrogen mixing experiment using the MAAP4 code

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Paik, Chan Y.; Henry, R.E.

    1997-01-01

    The MAAP4 code was benchmarked against the hydrogen mixing experiment in a full-size nuclear reactor containment. This particular experiment, designated as E11.2, simulated a small loss-of-coolant-accident steam blowdown into the containment followed by the release of a hydrogen-helium gas mixture. It also incorporated external spray cooling of the steel dome near the end of the transient. Specifically, the objective of this benchmark was to demonstrate that MAAP4, using subnodal physics, can predict an observed gas stratification in the containment

  14. SETI-EC: SETI Encryption Code

    Science.gov (United States)

    Heller, René

    2018-03-01

    The SETI Encryption code, written in Python, creates a message for use in testing the decryptability of a simulated incoming interstellar message. The code uses images in a portable bit map (PBM) format, then writes the corresponding bits into the message, and finally returns both a PBM image and a text (TXT) file of the entire message. The natural constants (c, G, h) and the wavelength of the message are defined in the first few lines of the code, followed by the reading of the input files and their conversion into 757 strings of 359 bits to give one page. Each header of a page, i.e. the little-endian binary code translation of the tempo-spatial yardstick, is calculated and written on-the-fly for each page.
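
    The message assembly step described above, reading a portable bit map and writing its bits into fixed-length lines, can be illustrated for the plain ASCII "P1" PBM variant with the short sketch below; this is a simplified stand-in, not the published SETI-EC source, and the file name is a placeholder.

        def pbm_to_bits(path: str) -> str:
            """Flatten a plain (ASCII, 'P1') PBM bitmap into a string of '0'/'1' bits."""
            tokens = []
            with open(path) as fh:
                for line in fh:
                    line = line.split("#", 1)[0]    # strip PBM comments
                    tokens.extend(line.split())
            assert tokens[0] == "P1", "only the plain PBM format is handled here"
            width, height = int(tokens[1]), int(tokens[2])
            bits = "".join(tokens[3:])
            assert len(bits) == width * height
            return bits

        # e.g. pbm_to_bits("page1.pbm") returns "010110..." ready to be cut into
        # the fixed-length lines that make up one page of the message.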

  15. Adaptive decoding of convolutional codes

    Science.gov (United States)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
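
    For reference, the sketch below is a minimal hard-decision Viterbi decoder for the textbook rate-1/2, constraint-length-3 convolutional code with octal generators (7,5). It shows the maximum-likelihood baseline that the abstract starts from; it is not the syndrome-based decoder proposed in the paper.

        import itertools

        G = (0b111, 0b101)      # generator polynomials (7, 5) octal, constraint length 3
        N_STATES = 4            # state = the two previous input bits

        def conv_encode(bits):
            """Encode a bit list with the rate-1/2 (7,5) convolutional code."""
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state                  # current bit plus two previous bits
                out += [bin(reg & g).count("1") % 2 for g in G]
                state = reg >> 1
            return out

        def viterbi_decode(received):
            """Hard-decision Viterbi decoding (maximum likelihood for a BSC)."""
            INF = float("inf")
            metrics = [0] + [INF] * (N_STATES - 1)      # start in the all-zero state
            paths = [[] for _ in range(N_STATES)]
            for i in range(0, len(received), 2):
                r = received[i:i + 2]
                new_metrics = [INF] * N_STATES
                new_paths = [None] * N_STATES
                for state, b in itertools.product(range(N_STATES), (0, 1)):
                    if metrics[state] == INF:
                        continue
                    reg = (b << 2) | state
                    expected = [bin(reg & g).count("1") % 2 for g in G]
                    branch = sum(e != x for e, x in zip(expected, r))   # Hamming distance
                    nxt = reg >> 1
                    if metrics[state] + branch < new_metrics[nxt]:
                        new_metrics[nxt] = metrics[state] + branch
                        new_paths[nxt] = paths[state] + [b]
                metrics, paths = new_metrics, new_paths
            best = min(range(N_STATES), key=lambda s: metrics[s])
            return paths[best]

        msg = [1, 0, 1, 1, 0, 0, 1]
        code = conv_encode(msg)
        code[3] ^= 1                                    # flip one channel bit
        assert viterbi_decode(code) == msg              # the single error is corrected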

  16. Non-binary unitary error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.

    1996-06-01

    Error operator bases for systems of any dimension are defined, and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over Z_n for any n. The generalization of the punctured code construction leads to many codes which permit transversal (i.e. fault-tolerant) implementations of certain operations compatible with the error basis.
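
    The natural generalization of the qubit bit-flip/sign-change pair is the shift ("X") and clock ("Z") operator pair over Z_n; the n² products X^a Z^b form a unitary error basis, orthogonal in the trace inner product. The small numerical sketch below, assuming only standard NumPy, builds the basis and checks orthogonality for n = 3.

        import numpy as np

        def error_basis(n: int) -> dict:
            """Generalized Pauli (shift/clock) unitary error basis for a dimension-n system."""
            omega = np.exp(2j * np.pi / n)
            X = np.roll(np.eye(n), 1, axis=0)           # shift: |j> -> |j+1 mod n>
            Z = np.diag(omega ** np.arange(n))          # clock: |j> -> omega**j |j>
            return {(a, b): np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
                    for a in range(n) for b in range(n)}

        E = error_basis(3)
        # Distinct basis elements are trace-orthogonal: Tr(E1^dagger E2) = 0.
        assert abs(np.trace(E[(1, 2)].conj().T @ E[(2, 1)])) < 1e-12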

  17. A study on the nuclear computer codes installation and management system

    International Nuclear Information System (INIS)

    Kim, Yeon Seung; Huh, Young Hwan; Kim, Hee Kyung; Kang, Byung Heon; Kim, Ko Ryeo; Suh, Soong Hyok; Choi, Young Gil; Lee, Jong Bok

    1990-12-01

    From 1987, a number of technical transfers related to nuclear power plants were performed by C-E for the YGN 3 and 4 construction project. Among them, installation and management of the computer codes for the YGN 3 and 4 fuel and nuclear steam supply system was one of the most important projects. The main objectives of this project are to establish the nuclear computer code management system, to develop QA procedures for nuclear codes, to secure nuclear code reliability and to extend technical applicability, including user-oriented utility programs for the nuclear codes. The work performed in this year was to produce 215 transmittal packages of nuclear code installations, including backup magnetic tapes and microfiche for software quality assurance. Lastly, for easy reference to the nuclear code information, we present a list of code names and information on the codes which were introduced from C-E. (Author)

  18. Authentication codes from ε-ASU hash functions with partially secret keys

    NARCIS (Netherlands)

    Liu, S.L.; Tilborg, van H.C.A.; Weng, J.; Chen, Kefei

    2014-01-01

    An authentication code can be constructed with a family of ε-almost strongly universal (ε-ASU) hash functions, with the index of the hash functions as the authentication key. This paper considers the performance of authentication codes from ε-ASU hash functions when the authentication key is only partially secret. We

  19. SU-E-T-323: The FLUKA Monte Carlo Code in Ion Beam Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Rinaldi, I [Heidelberg University Hospital (Germany); Ludwig-Maximilian University Munich (Germany)

    2014-06-01

    Purpose: Monte Carlo (MC) codes are increasingly used in the ion beam therapy community due to their detailed description of radiation transport and interaction with matter. The suitability of an MC code demands accurate and reliable physical models for the transport and the interaction of all components of the mixed radiation field. This contribution will present an overview of the recent developments in the FLUKA code oriented to its application in ion beam therapy. Methods: FLUKA is a general purpose MC code which allows the calculation of particle transport and interactions with matter, covering an extended range of applications. The user can manage the code through a graphic interface (FLAIR) developed using the Python programming language. Results: This contribution will present recent refinements in the description of the ionization processes and comparisons between FLUKA results and experimental data of ion beam therapy facilities. Moreover, several validations of the largely improved FLUKA nuclear models for imaging application to treatment monitoring will be shown. The complex calculation of prompt gamma ray emission compares favorably with experimental data and can be considered adequate for the intended applications. New features in the modeling of proton induced nuclear interactions also provide reliable cross section predictions for the production of radionuclides. Of great interest for the community are the developments introduced in FLAIR. The most recent efforts concern the capability of importing computed-tomography images in order to automatically build patient geometries and the implementation of different types of existing positron-emission-tomography scanner devices for imaging applications. Conclusion: The FLUKA code has already been chosen as a reference MC code in many ion beam therapy centers, and is being continuously improved in order to match the needs of ion beam therapy applications. Parts of this work have been supported by the European

  20. Challenge: Code of environmental law; Herausforderung Umweltgesetzbuch

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-07-15

    At the meeting "Challenge: Code of environmental law", held on 16 February 2007 in Berlin (Federal Republic of Germany) and organized by the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (Berlin, Federal Republic of Germany), the following lectures were held: (a) The new code of environmental law as a contribution to more modernness and efficiency in environmental politics (Sigmar Gabriel); (b) The code of environmental law from the view of the economy (Martin Wansleben); (c) Significance of the code of environmental law from the view of jurisprudence (Michael Kloepfer); (d) Targets, content and utility of the code of environmental law: summary of the panel discussion (Tanja Goenner, Klaus Mittelbach, Juergen Resch, Hans-Joachim Koch, Alfred Wirtz, Andreas Troge (moderator)); (e) Considerations on the codification of water law in the code of environmental law (Helge Wendenburg); (f) Considerations on the codification of water law: summary of the discussion; (g) Considerations on the codification of nature conservation law (Jochen Flasbarth); (h) Considerations on the codification of nature conservation law: summary of the discussion.

  1. Simulation and verification studies of reactivity initiated accident by comparative approach of NK/TH coupling codes and RELAP5 code

    Energy Technology Data Exchange (ETDEWEB)

    Ud-Din Khan, Salah [Chinese Academy of Sciences, Hefei (China). Inst. of Plasma Physics; King Saud Univ., Riyadh (Saudi Arabia). Sustainable Energy Technologies Center; Peng, Minjun [Harbin Engineering Univ. (China). College of Nuclear Science and Technology; Yuntao, Song; Ud-Din Khan, Shahab [Chinese Academy of Sciences, Hefei (China). Inst. of Plasma Physics; Haider, Sajjad [King Saud Univ., Riyadh (Saudi Arabia). Sustainable Energy Technologies Center

    2017-02-15

    The objective is to analyze the safety of small modular nuclear reactors of 220 MWe power. Reactivity initiated accidents (RIA) were investigated by neutron kinetic/thermal hydraulic (NK/TH) coupling approach and thermal hydraulic code i.e., RELAP5. The results obtained by these approaches were compared for validation and accuracy of simulation. In the NK/TH coupling technique, three codes (HELIOS, REMARK, THEATRe) were used. These codes calculate different parameters of the reactor core (fission power, reactivity, fuel temperature and inlet/outlet temperatures). The data exchanges between the codes were assessed by running the codes simultaneously. The results obtained from both (NK/TH coupling) and RELAP5 code analyses complement each other, hence confirming the accuracy of simulation.

  2. Network Coding Protocols for Data Gathering Applications

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    Tunable sparse network coding (TSNC) with various sparsity levels of the coded packets and different feedback mechanisms is analysed in the context of data gathering applications in multi-hop networks. The goal is to minimize the completion time, i.e., the total time required to collect all data ...

  3. Verification of SACI-2 computer code comparing with experimental results of BIBLIS-A and LOOP-7 computer code

    International Nuclear Information System (INIS)

    Soares, P.A.; Sirimarco, L.F.

    1984-01-01

    SACI-2 is a computer code created to study the dynamic behaviour of a PWR nuclear power plant. To evaluate the quality of its results, SACI-2 was used to recalculate commissioning tests done at the BIBLIS-A nuclear power plant and to calculate postulated transients for the Angra-2 reactor. The results of the SACI-2 computer code for BIBLIS-A showed agreement as good as those calculated with the KWU Loop 7 computer code for Angra-2. (E.G.) [pt

  4. Automatic code generation in practice

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-specific language to specify those requirements and to allow for generating a safety-enforcing layer of code, which is deployed to the robot. The paper at hand reports experiences in practically applying code generation to mobile robots. For two cases, we discuss how we addressed challenges, e.g., regarding weaving code generation into proprietary development environments and testing of manually written code. We find that a DSL based on the same conceptual model can be used across different kinds of hardware modules, but a significant adaptation effort is required in practical scenarios involving different kinds...

  5. Adaptive decoding of convolutional codes

    Directory of Open Access Journals (Sweden)

    K. Hueske

    2007-06-01

    Full Text Available Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  6. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  7. Linear and nonlinear verification of gyrokinetic microstability codes

    Science.gov (United States)

    Bravenec, R. V.; Candy, J.; Barnes, M.; Holland, C.

    2011-12-01

    Verification of nonlinear microstability codes is a necessary step before comparisons or predictions of turbulent transport in toroidal devices can be justified. By verification we mean demonstrating that a code correctly solves the mathematical model upon which it is based. Some degree of verification can be accomplished indirectly from analytical instability threshold conditions, nonlinear saturation estimates, etc., for relatively simple plasmas. However, verification for experimentally relevant plasma conditions and physics is beyond the realm of analytical treatment and must rely on code-to-code comparisons, i.e., benchmarking. The premise is that the codes are verified for a given problem or set of parameters if they all agree within a specified tolerance. True verification requires comparisons for a number of plasma conditions, e.g., different devices, discharges, times, and radii. Running the codes and keeping track of linear and nonlinear inputs and results for all conditions could be prohibitive unless there was some degree of automation. We have written software to do just this and have formulated a metric for assessing agreement of nonlinear simulations. We present comparisons, both linear and nonlinear, between the gyrokinetic codes GYRO [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and GS2 [W. Dorland, F. Jenko, M. Kotschenreuther, and B. N. Rogers, Phys. Rev. Lett. 85, 5579 (2000)]. We do so at the mid-radius for the same discharge as in earlier work [C. Holland, A. E. White, G. R. McKee, M. W. Shafer, J. Candy, R. E. Waltz, L. Schmitz, and G. R. Tynan, Phys. Plasmas 16, 052301 (2009)]. The comparisons include electromagnetic fluctuations, passing and trapped electrons, plasma shaping, one kinetic impurity, and finite Debye-length effects. Results neglecting and including electron collisions (Lorentz model) are presented. We find that the linear frequencies with or without collisions agree well between codes, as do the time averages of

  8. Codon Distribution in Error-Detecting Circular Codes

    Directory of Open Access Journals (Sweden)

    Elena Fimmel

    2016-03-01

    Full Text Available In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick’s hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C3 and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C3 codes to maximal self-complementary circular codes.

  9. Codon Distribution in Error-Detecting Circular Codes.

    Science.gov (United States)

    Fimmel, Elena; Strüngmann, Lutz

    2016-03-15

    In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick's hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C³ and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C(3) codes to maximal self-complementary circular codes.
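
    The comma-free property referred to above can be tested mechanically: no codon of the set may occur at a frame-shifted position (offset 1 or 2) inside the concatenation of any two codons of the set. A short check follows, with toy codon sets chosen purely for illustration; testing full circularity requires the graph-based criterion and is not attempted here.

        from itertools import product

        def is_comma_free(codons) -> bool:
            """True if no codon occurs at offset 1 or 2 in any two-codon concatenation."""
            codons = set(codons)
            for c1, c2 in product(codons, repeat=2):
                w = c1 + c2
                if w[1:4] in codons or w[2:5] in codons:
                    return False
            return True

        print(is_comma_free({"ACG", "GAT"}))            # True for this toy set
        print(is_comma_free({"ACG", "GAT", "CGG"}))     # False: "ACG"+"GAT" contains "CGG" at offset 1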

  10. Electronic Code of Federal Regulations

    Data.gov (United States)

    National Archives and Records Administration — The Electronic Code of Federal Regulations (e-CFR) is the codification of the general and permanent rules published in the Federal Register by the executive...

  11. Development of ADINA-J-integral code

    International Nuclear Information System (INIS)

    Kurihara, Ryoichi

    1988-07-01

    A general purpose finite element program, ADINA (Automatic Dynamic Incremental Nonlinear Analysis), which was developed by Bathe et al., was revised to be able to calculate the J-integral. This report introduces the numerical method used to add this capability to the code and evaluates the revised ADINA-J code using a few examples of the J estimation model, i.e. a compact tension specimen, a center-cracked panel subjected to dynamic load, and a thick shell cylinder with an inner axial crack subjected to thermal load. The evaluation verified the function of the revised code. (author)

  12. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane (CSS) construction of stabilizer codes from linear codes containing their dual codes.

  13. Construction of Capacity Achieving Lattice Gaussian Codes

    KAUST Repository

    Alghamdi, Wael

    2016-04-01

    We propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3].

  14. MIFT: GIFT Combinatorial Geometry Input to VCS Code

    Science.gov (United States)

    1977-03-01

    BRL Report No. 1967: MIFT: GIFT Combinatorial Geometry Input to VCS Code (Albert E. ...). ... Vehicle Code System (VCS) called MORSE was modified to accept the GIFT combinatorial geometry package. GIFT, as opposed to the geometry package

  15. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into other data processing programs was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, this program can be used for automation of routine work in the department of radiology

  16. SU-E-T-180: Fano Cavity Test of Proton Transport in Monte Carlo Codes Running On GPU and Xeon Phi

    International Nuclear Information System (INIS)

    Sterpin, E; Sorriaux, J; Souris, K; Lee, J; Vynckier, S; Schuemann, J; Paganetti, H; Jia, X; Jiang, S

    2014-01-01

    Purpose: In proton dose calculation, clinically compatible speeds are now achieved with Monte Carlo codes (MC) that combine 1) adequate simplifications in the physics of transport and 2) the use of hardware architectures enabling massive parallel computing (like GPUs). However, the uncertainties related to the transport algorithms used in these codes must be kept minimal. Such algorithms can be checked with the so-called “Fano cavity test”. We implemented the test in two codes that run on specific hardware: gPMC on an nVidia GPU and MCsquare on an Intel Xeon Phi (60 cores). Methods: gPMC and MCsquare are designed for transporting protons in CT geometries. Both codes use the method of fictitious interaction to sample the step-length for each transport step. The considered geometry is a water cavity (2×2×0.2 cm³, 0.001 g/cm³) in a 10×10×50 cm³ water phantom (1 g/cm³). CPE in the cavity is established by generating protons over the phantom volume with a uniform momentum (energy E) and a uniform intensity per unit mass I. Assuming no nuclear reactions and no generation of other secondaries, the computed cavity dose should equal IE, according to Fano's theorem. Both codes were tested for initial proton energies of 50, 100, and 200 MeV. Results: For all energies, gPMC and MCsquare are within 0.3% and 0.2% of the theoretical value IE, respectively (0.1% standard deviation). Single-precision computations (instead of double) increased the error by about 0.1% in MCsquare. Conclusion: Despite the simplifications in the physics of transport, both gPMC and MCsquare successfully pass the Fano test. This ensures optimal accuracy of the codes for clinical applications within the uncertainties on the underlying physical models. It also opens the path to other applications of these codes, like the simulation of ion chamber response

  17. SU-E-T-180: Fano Cavity Test of Proton Transport in Monte Carlo Codes Running On GPU and Xeon Phi

    Energy Technology Data Exchange (ETDEWEB)

    Sterpin, E; Sorriaux, J; Souris, K; Lee, J; Vynckier, S [Universite catholique de Louvain, Brussels, Brussels (Belgium); Schuemann, J; Paganetti, H [Massachusetts General Hospital, Boston, MA (United States); Jia, X; Jiang, S [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2014-06-01

    Purpose: In proton dose calculation, clinically compatible speeds are now achieved with Monte Carlo codes (MC) that combine 1) adequate simplifications in the physics of transport and 2) the use of hardware architectures enabling massive parallel computing (like GPUs). However, the uncertainties related to the transport algorithms used in these codes must be kept minimal. Such algorithms can be checked with the so-called “Fano cavity test”. We implemented the test in two codes that run on specific hardware: gPMC on an nVidia GPU and MCsquare on an Intel Xeon Phi (60 cores). Methods: gPMC and MCsquare are designed for transporting protons in CT geometries. Both codes use the method of fictitious interaction to sample the step-length for each transport step. The considered geometry is a water cavity (2×2×0.2 cm³, 0.001 g/cm³) in a 10×10×50 cm³ water phantom (1 g/cm³). CPE in the cavity is established by generating protons over the phantom volume with a uniform momentum (energy E) and a uniform intensity per unit mass I. Assuming no nuclear reactions and no generation of other secondaries, the computed cavity dose should equal IE, according to Fano's theorem. Both codes were tested for initial proton energies of 50, 100, and 200 MeV. Results: For all energies, gPMC and MCsquare are within 0.3% and 0.2% of the theoretical value IE, respectively (0.1% standard deviation). Single-precision computations (instead of double) increased the error by about 0.1% in MCsquare. Conclusion: Despite the simplifications in the physics of transport, both gPMC and MCsquare successfully pass the Fano test. This ensures optimal accuracy of the codes for clinical applications within the uncertainties on the underlying physical models. It also opens the path to other applications of these codes, like the simulation of ion chamber response.

  18. Code-code comparisons of DIVIMP's 'onion-skin model' and the EDGE2D fluid code

    International Nuclear Information System (INIS)

    Stangeby, P.C.; Elder, J.D.; Horton, L.D.; Simonini, R.; Taroni, A.; Matthews, O.F.; Monk, R.D.

    1997-01-01

    In onion-skin modelling, O-SM, of the edge plasma, the cross-field power and particle flows are treated very simply, e.g., as spatially uniform. The validity of O-S modelling requires demonstration that such approximations can still result in reasonable solutions for the edge plasma. This is demonstrated here by comparison of O-SM with full 2D fluid edge solutions generated by the EDGE2D code. The target boundary conditions for the O-SM are taken from the EDGE2D output and the complete O-SM solutions are then compared with the EDGE2D ones. Agreement is generally within 20% for n_e, T_e, T_i and the parallel particle flux density Γ for the medium and high recycling JET cases examined and somewhat less good for a strongly detached CMOD example. (orig.)

  19. Cracking the code of oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Philippe G Schyns

    2011-05-01

    Full Text Available Neural oscillations are ubiquitous measurements of cognitive processes and dynamic routing and gating of information. The fundamental and so far unresolved problem for neuroscience remains to understand how oscillatory activity in the brain codes information for human cognition. In a biologically relevant cognitive task, we instructed six human observers to categorize facial expressions of emotion while we measured the observers' EEG. We combined state-of-the-art stimulus control with statistical information theory analysis to quantify how the three parameters of oscillations (i.e., power, phase, and frequency) code the visual information relevant for behavior in a cognitive task. We make three points: First, we demonstrate that phase codes considerably more information (2.4 times) relating to the cognitive task than power. Second, we show that the conjunction of power and phase coding reflects detailed visual features relevant for behavioral response, that is, features of facial expressions predicted by behavior. Third, we demonstrate, in analogy to communication technology, that oscillatory frequencies in the brain multiplex the coding of visual features, increasing coding capacity. Together, our findings about the fundamental coding properties of neural oscillations will redirect the research agenda in neuroscience by establishing the differential role of frequency, phase, and amplitude in coding behaviorally relevant information in the brain.

  20. RH-TRU Waste Content Codes (RH TRUCON)

    International Nuclear Information System (INIS)

    2007-01-01

    The Remote-Handled Transuranic (RH-TRU) Content Codes (RH-TRUCON) document describes the inventory of RH-TRU waste within the transportation parameters specified by the Remote-Handled Transuranic Waste Authorized Methods for Payload Control (RH-TRAMPAC).1 The RH-TRAMPAC defines the allowable payload for the RH-TRU 72-B. This document is a catalog of RH-TRU 72-B authorized contents by site. A content code is defined by the following components: (1) A two-letter site abbreviation that designates the physical location of the generated/stored waste (e.g., ID for Idaho National Laboratory [INL]). The site-specific letter designations for each of the sites are provided in Table 1. (2) A three-digit code that designates the physical and chemical form of the waste (e.g., content code 317 denotes TRU Metal Waste). For RH-TRU waste to be transported in the RH-TRU 72-B, the first number of this three-digit code is "3". The second and third numbers of the three-digit code describe the physical and chemical form of the waste. Table 2 provides a brief description of each generic code. Content codes are further defined as subcodes by an alpha trailer after the three-digit code to allow segregation of wastes that differ in one or more parameter(s). For example, the alpha trailers of the subcodes ID 322A and ID 322B may be used to differentiate between waste packaging configurations. As detailed in the RH-TRAMPAC, compliance with flammable gas limits may be demonstrated through the evaluation of compliance with either a decay heat limit or flammable gas generation rate (FGGR) limit per container specified in approved content codes. As applicable, if a container meets the watt*year criteria specified by the RH-TRAMPAC, the decay heat limits based on the dose-dependent G value may be used as specified in an approved content code. If a site implements the administrative controls outlined in the RH-TRAMPAC and Appendix 2.4 of the RH-TRU Payload Appendices, the decay heat or FGGR

  1. RH-TRU Waste Content Codes (RH TRUCON)

    Energy Technology Data Exchange (ETDEWEB)

    Washington TRU Solutions

    2007-05-01

    The Remote-Handled Transuranic (RH-TRU) Content Codes (RH-TRUCON) document describes the inventory of RH-TRU waste within the transportation parameters specified by the Remote-Handled Transuranic Waste Authorized Methods for Payload Control (RH-TRAMPAC).1 The RH-TRAMPAC defines the allowable payload for the RH-TRU 72-B. This document is a catalog of RH-TRU 72-B authorized contents by site. A content code is defined by the following components: • A two-letter site abbreviation that designates the physical location of the generated/stored waste (e.g., ID for Idaho National Laboratory [INL]). The site-specific letter designations for each of the sites are provided in Table 1. • A three-digit code that designates the physical and chemical form of the waste (e.g., content code 317 denotes TRU Metal Waste). For RH-TRU waste to be transported in the RH-TRU 72-B, the first number of this three-digit code is “3.” The second and third numbers of the three-digit code describe the physical and chemical form of the waste. Table 2 provides a brief description of each generic code. Content codes are further defined as subcodes by an alpha trailer after the three-digit code to allow segregation of wastes that differ in one or more parameter(s). For example, the alpha trailers of the subcodes ID 322A and ID 322B may be used to differentiate between waste packaging configurations. As detailed in the RH-TRAMPAC, compliance with flammable gas limits may be demonstrated through the evaluation of compliance with either a decay heat limit or flammable gas generation rate (FGGR) limit per container specified in approved content codes. As applicable, if a container meets the watt*year criteria specified by the RH-TRAMPAC, the decay heat limits based on the dose-dependent G value may be used as specified in an approved content code. If a site implements the administrative controls outlined in the RH-TRAMPAC and Appendix 2.4 of the RH-TRU Payload Appendices, the decay heat or FGGR

  2. Network coding at different layers in wireless networks

    CERN Document Server

    2016-01-01

    This book focuses on how to apply network coding at different layers in wireless networks – including MAC, routing, and TCP – with special focus on cognitive radio networks. It discusses how to select parameters in network coding (e.g., coding field, number of packets involved, and redundant information ration) in order to be suitable for the varying wireless environments. The book explores how to deploy network coding in MAC to improve network performance and examines joint network coding with opportunistic routing to improve the successful rate of routing. In regards to TCP and network coding, the text considers transport layer protocol working with network coding to overcome the transmission error rate, particularly with how to use the ACK feedback of TCP to enhance the efficiency of network coding. The book pertains to researchers and postgraduate students, especially whose interests are in opportunistic routing and TCP in cognitive radio networks.
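
    The layered network-coding schemes surveyed in the book build on a simple primitive: combining packets (in the simplest case by XOR) so that a single coded transmission serves several receivers, each of which removes the packet it already knows. The sketch below illustrates only that generic primitive; it is not material from the book, and the packet contents are invented.

```python
# Minimal illustration of the XOR network-coding primitive: a relay combines
# two packets into one transmission; each receiver already holds one native
# packet and recovers the other by XOR-ing it back out.
def xor_packets(a: bytes, b: bytes) -> bytes:
    length = max(len(a), len(b))
    a, b = a.ljust(length, b"\x00"), b.ljust(length, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

packet_from_alice = b"hello-bob"
packet_from_bob = b"hi-alice"

coded = xor_packets(packet_from_alice, packet_from_bob)  # one relay broadcast

# Bob knows his own packet, so he can recover Alice's (and vice versa).
recovered_at_bob = xor_packets(coded, packet_from_bob)
assert recovered_at_bob.rstrip(b"\x00") == packet_from_alice
```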

  3. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in a temporal discrimination task, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, multiple-coding and single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  4. GreCo : Green code of ethics

    NARCIS (Netherlands)

    Moraga, Ma Ángeles; García-Rodríguez de Guzmán, Ignacio; Calero, Coral; Johann, Timo; Me, Gianantonio; Münzel, Harald; Kindelsberger, Julia

    2017-01-01

    Background: Codes of ethics (CoE) are widely adopted in several professional areas, including that of Software Engineering. However, contemporary CoE do not pay sufficient attention to one of the most important trends to have appeared in recent years: environmental issues. Aim: The aim of this

  5. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flowrate distribution in parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure-drop or flowrate conditions that may vary with time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, containing a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors) [French] Ce code permet de traiter les problemes ci-dessous: 1. Depouillement d'essais thermiques sur boucle a eau, haute ou basse pression, en regime permanent ou transitoire; 2. Etudes thermiques et hydrauliques de reacteurs a eau, a plaques, a haute ou basse pression, ebullition permise: - repartition entre canaux paralleles, couples ou non par conduction a travers plaques, pour des conditions de debit ou de pertes de charge imposees, variables ou non dans le temps; - la puissance peut etre couplee a la neutronique et une representation schematique des actions de securite est prevue. Ce code (Cactus) a une dimension d'espace et plusieurs canaux, a pour complement Flid qui traite l'etude d'un seul canal a deux dimensions. (auteurs)

  6. Extraction of state machines of legacy C code with Cpp2XMI

    NARCIS (Netherlands)

    Brand, van den M.G.J.; Serebrenik, A.; Zeeland, van D.; Serebrenik, A.

    2008-01-01

    Analysis of legacy code is often focussed on extracting either metrics or relations, e.g. call relations or structure relations. For object-oriented programs, e.g. Java or C++ code, such relations are commonly represented as UML diagrams: e.g., such tools as Columbus [1] and Cpp2XMI [2] are capable

  7. First analysis of AGS0, LT2 and E9 CABRI tests with the new SFR safety code ASTEC-Na

    International Nuclear Information System (INIS)

    Perez-Martin, Sara; Bandini, Giacomino; Matuzas, Vaidas; Buck, Michael; Girault, Nathalie

    2015-01-01

    Within the framework of the European JASMIN project, the ASTEC-Na code is being developed for the safety analysis of severe accidents in SFRs. In the first phase of validation of the ASTEC-Na fuel thermo-mechanical models, three in-pile tests conducted in the CABRI experimental reactor have been selected for analysis. We present here the preliminary results of the simulation of two Transient Over Power tests and one power ramp test (AGS0, LT2 and E9, respectively), in which no pin failure occurred during the transient. ASTEC-Na results are compared against experimental data and against the results of other safety codes, both for the initial steady-state conditions prior to the transient onset and for the fuel pin behaviour during the transients. (author)

  8. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    40 CFR Protection of Environment, General Requirements, § 194.23 Models and computer codes (2010-07-01 edition). (a) Any compliance application shall include: (1) ... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  9. Computer codes in particle transport physics

    International Nuclear Information System (INIS)

    Pesic, M.

    2004-01-01

    Simulation of the transport and interaction of various particles in complex media over a wide energy range (from 1 MeV up to 1 TeV) is a very complicated problem that requires a valid model of the real process in nature and an appropriate solving tool - a computer code and data library. A brief overview of computer codes based on Monte Carlo techniques for the simulation of transport and interaction of hadrons and ions over a wide energy range in three-dimensional (3D) geometry is given. First, attention is paid to the approach to the solution of the problem - a process in nature - by selection of the appropriate 3D model and the corresponding tools - computer codes and cross-section data libraries. The process of collecting and evaluating data from experimental measurements and theoretical approaches to establish reliable libraries of evaluated cross-section data is a long, difficult and not straightforward activity. For this reason, world reference data centers and specialized ones are acknowledged, together with the currently available, state-of-the-art evaluated nuclear data libraries, such as ENDF/B-VI, JEF, JENDL, CENDL, BROND, etc. Codes for experimental and theoretical data evaluation (e.g., SAMMY and GNASH) together with codes for data processing (e.g., NJOY, PREPRO and GRUCON) are briefly described. Examples of data evaluation and data processing to generate computer-usable data libraries are shown. Among the numerous and various computer codes developed in particle transport physics, only the most general ones are described: MCNPX, FLUKA and SHIELD. A short overview of the basic applications of these codes, the physical models implemented with their limitations, and the energy ranges of particles and types of interactions is given. General information about the codes also covers programming language, operating system, calculation speed and code availability. An example of increasing the computation speed of the MCNPX code by running it on an MPI cluster, compared to the sequential option of the code

  10. RELAP-7 Code Assessment Plan and Requirement Traceability Matrix

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Junsoo; Choi, Yong-joon; Smith, Curtis L.

    2016-10-01

    RELAP-7, a safety analysis code for nuclear reactor systems, is under development at Idaho National Laboratory (INL). Overall, the code development is directed towards leveraging the advancements in computer science technology, numerical solution methods and physical models over the last decades. Recently, INL has also been making an effort to establish the code assessment plan, which aims to ensure an improved final product quality through the RELAP-7 development process. The ultimate goal of this plan is to propose a suitable way to systematically assess the wide range of software requirements for RELAP-7, including the software design, user interface, and technical requirements. To this end, we first survey the literature (i.e., international/domestic reports and research articles) addressing the desirable features generally required for advanced nuclear system safety analysis codes. In addition, the V&V (verification and validation) efforts as well as the legacy issues of several recently developed codes (e.g., RELAP5-3D, TRACE V5.0) are investigated. Lastly, this paper outlines the Requirement Traceability Matrix (RTM) for RELAP-7, which can be used to systematically evaluate and identify the code development process and its present capability.

  11. RH-TRU Waste Content Codes (RH-Trucon)

    International Nuclear Information System (INIS)

    2007-01-01

    The Remote-Handled Transuranic (RH-TRU) Content Codes (RH-TRUCON) document describes the inventory of RH-TRU waste within the transportation parameters specified by the Remote-Handled Transuranic Waste Authorized Methods for Payload Control (RH-TRAMPAC).1 The RH-TRAMPAC defines the allowable payload for the RH-TRU 72-B. This document is a catalog of RH-TRU 72-B authorized contents by site. A content code is defined by the following components: A two-letter site abbreviation that designates the physical location of the generated/stored waste (e.g., ID for Idaho National Laboratory [INL]). The site-specific letter designations for each of the sites are provided in Table 1. A three-digit code that designates the physical and chemical form of the waste (e.g., content code 317 denotes TRU Metal Waste). For RH-TRU waste to be transported in the RH-TRU 72-B, the first number of this three-digit code is '3.' The second and third numbers of the three-digit code describe the physical and chemical form of the waste. Table 2 provides a brief description of each generic code. Content codes are further defined as subcodes by an alpha trailer after the three-digit code to allow segregation of wastes that differ in one or more parameter(s). For example, the alpha trailers of the subcodes ID 322A and ID 322B may be used to differentiate between waste packaging configurations. As detailed in the RH-TRAMPAC, compliance with flammable gas limits may be demonstrated through the evaluation of compliance with either a decay heat limit or flammable gas generation rate (FGGR) limit per container specified in approved content codes. As applicable, if a container meets the watt*year criteria specified by the RH-TRAMPAC, the decay heat limits based on the dose-dependent G value may be used as specified in an approved content code. If a site implements the administrative controls outlined in the RH-TRAMPAC and Appendix 2.4 of the RH-TRU Payload Appendices, the decay heat or FGGR limits based

  12. RH-TRU Waste Content Codes (RH-TRUCON)

    International Nuclear Information System (INIS)

    2007-01-01

    The Remote-Handled Transuranic (RH-TRU) Content Codes (RH-TRUCON) document describes the inventory of RH-TRU waste within the transportation parameters specified by the Remote-Handled Transuranic Waste Authorized Methods for Payload Control (RH-TRAMPAC).1 The RH-TRAMPAC defines the allowable payload for the RH-TRU 72-B. This document is a catalog of RH-TRU 72-B authorized contents by site. A content code is defined by the following components: A two-letter site abbreviation that designates the physical location of the generated/stored waste (e.g., ID for Idaho National Laboratory [INL]). The site-specific letter designations for each of the sites are provided in Table 1. A three-digit code that designates the physical and chemical form of the waste (e.g., content code 317 denotes TRU Metal Waste). For RH-TRU waste to be transported in the RH-TRU 72-B, the first number of this three-digit code is '3.' The second and third numbers of the three-digit code describe the physical and chemical form of the waste. Table 2 provides a brief description of each generic code. Content codes are further defined as subcodes by an alpha trailer after the three-digit code to allow segregation of wastes that differ in one or more parameter(s). For example, the alpha trailers of the subcodes ID 322A and ID 322B may be used to differentiate between waste packaging configurations. As detailed in the RH-TRAMPAC, compliance with flammable gas limits may be demonstrated through the evaluation of compliance with either a decay heat limit or flammable gas generation rate (FGGR) limit per container specified in approved content codes. As applicable, if a container meets the watt*year criteria specified by the RH-TRAMPAC, the decay heat limits based on the dose-dependent G value may be used as specified in an approved content code. If a site implements the administrative controls outlined in the RH-TRAMPAC and Appendix 2.4 of the RH-TRU Payload Appendices, the decay heat or FGGR limits based

  13. RH-TRU Waste Content Codes (RH-TRUCON)

    Energy Technology Data Exchange (ETDEWEB)

    Washington TRU Solutions LLC

    2007-08-01

    The Remote-Handled Transuranic (RH-TRU) Content Codes (RH-TRUCON) document describes the inventory of RH-TRU waste within the transportation parameters specified by the Remote-Handled Transuranic Waste Authorized Methods for Payload Control (RH-TRAMPAC).1 The RH-TRAMPAC defines the allowable payload for the RH-TRU 72-B. This document is a catalog of RH-TRU 72-B authorized contents by site. A content code is defined by the following components: • A two-letter site abbreviation that designates the physical location of the generated/stored waste (e.g., ID for Idaho National Laboratory [INL]). The site-specific letter designations for each of the sites are provided in Table 1. • A three-digit code that designates the physical and chemical form of the waste (e.g., content code 317 denotes TRU Metal Waste). For RH-TRU waste to be transported in the RH-TRU 72-B, the first number of this three-digit code is “3.” The second and third numbers of the three-digit code describe the physical and chemical form of the waste. Table 2 provides a brief description of each generic code. Content codes are further defined as subcodes by an alpha trailer after the three-digit code to allow segregation of wastes that differ in one or more parameter(s). For example, the alpha trailers of the subcodes ID 322A and ID 322B may be used to differentiate between waste packaging configurations. As detailed in the RH-TRAMPAC, compliance with flammable gas limits may be demonstrated through the evaluation of compliance with either a decay heat limit or flammable gas generation rate (FGGR) limit per container specified in approved content codes. As applicable, if a container meets the watt*year criteria specified by the RH-TRAMPAC, the decay heat limits based on the dose-dependent G value may be used as specified in an approved content code. If a site implements the administrative controls outlined in the RH-TRAMPAC and Appendix 2.4 of the RH-TRU Payload Appendices, the decay heat or FGGR

  14. RH-TRU Waste Content Codes (RH-TRUCON)

    Energy Technology Data Exchange (ETDEWEB)

    Washington TRU Solutions

    2007-05-30

    The Remote-Handled Transuranic (RH-TRU) Content Codes (RH-TRUCON) document describes the inventory of RH-TRU waste within the transportation parameters specified by the Remote-Handled Transuranic Waste Authorized Methods for Payload Control (RH-TRAMPAC).1 The RH-TRAMPAC defines the allowable payload for the RH-TRU 72-B. This document is a catalog of RH-TRU 72-B authorized contents by site. A content code is defined by the following components: • A two-letter site abbreviation that designates the physical location of the generated/stored waste (e.g., ID for Idaho National Laboratory [INL]). The site-specific letter designations for each of the sites are provided in Table 1. • A three-digit code that designates the physical and chemical form of the waste (e.g., content code 317 denotes TRU Metal Waste). For RH-TRU waste to be transported in the RH-TRU 72-B, the first number of this three-digit code is “3.” The second and third numbers of the three-digit code describe the physical and chemical form of the waste. Table 2 provides a brief description of each generic code. Content codes are further defined as subcodes by an alpha trailer after the three-digit code to allow segregation of wastes that differ in one or more parameter(s). For example, the alpha trailers of the subcodes ID 322A and ID 322B may be used to differentiate between waste packaging configurations. As detailed in the RH-TRAMPAC, compliance with flammable gas limits may be demonstrated through the evaluation of compliance with either a decay heat limit or flammable gas generation rate (FGGR) limit per container specified in approved content codes. As applicable, if a container meets the watt*year criteria specified by the RH-TRAMPAC, the decay heat limits based on the dose-dependent G value may be used as specified in an approved content code. If a site implements the administrative controls outlined in the RH-TRAMPAC and Appendix 2.4 of the RH-TRU Payload Appendices, the decay heat or FGGR

  15. Evaluation Codes from an Affine Variety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes.

  16. AutoBayes/CC: Combining Program Synthesis with Automatic Code Certification: System Description

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Code certification is a lightweight approach to formally demonstrate software quality. It concentrates on aspects of software quality that can be defined and formalized via properties, e.g., operator safety or memory safety. Its basic idea is to require code producers to provide formal proofs that their code satisfies these quality properties. The proofs serve as certificates which can be checked independently, by the code consumer or by certification authorities, e.g., the FAA. It is the idea underlying such approaches as proof-carrying code [6]. Code certification can be viewed as a more practical version of traditional Hoare-style program verification. The properties to be verified are fairly simple and regular so that it is often possible to use an automated theorem prover to automatically discharge all emerging proof obligations. Usually, however, the programmer must still splice auxiliary annotations (e.g., loop invariants) into the program to facilitate the proofs. For complex properties or larger programs this quickly becomes the limiting factor for the applicability of current certification approaches.
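
    As the abstract notes, the programmer typically has to supply auxiliary annotations such as loop invariants so that an automated prover can discharge the proof obligations. The sketch below illustrates the idea with runtime-checked assertions standing in for formal annotations; the function and its invariant are invented for illustration and do not use the AutoBayes/CC notation.

```python
def sum_of_squares(values: list[float]) -> float:
    """Sum the squares of `values`, with the loop invariant written out.

    The assertions play the role of the auxiliary annotations a certification
    tool would need: they state what holds before every iteration so that a
    prover could discharge the correctness obligations mechanically.
    """
    total = 0.0
    i = 0
    while i < len(values):
        # Loop invariant: 0 <= i <= len(values) and
        # total equals the sum of values[j]**2 for all j < i.
        assert 0 <= i <= len(values)
        assert abs(total - sum(v * v for v in values[:i])) < 1e-9
        total += values[i] * values[i]
        i += 1
    return total

print(sum_of_squares([1.0, 2.0, 3.0]))   # 14.0
```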

  17. Irreducible normalizer operators and thresholds for degenerate quantum codes with sublinear distances

    Science.gov (United States)

    Pryadko, Leonid P.; Dumer, Ilya; Kovalev, Alexey A.

    2015-03-01

    We construct a lower (existence) bound for the threshold of scalable quantum computation which is applicable to all stabilizer codes, including degenerate quantum codes with sublinear distance scaling. The threshold is based on enumerating irreducible operators in the normalizer of the code, i.e., those that cannot be decomposed into a product of two such operators with non-overlapping support. For quantum LDPC codes with logarithmic or power-law distances, we get threshold values which are parametrically better than the existing analytical bound based on percolation. The new bound also gives a finite threshold when applied to other families of degenerate quantum codes, e.g., the concatenated codes. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-11-1-0027.

  18. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of a network coding approach focused on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can be easily utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  19. The Classification of Complementary Information Set Codes of Lengths 14 and 16

    OpenAIRE

    Freibert, Finley

    2012-01-01

    In the paper "A new class of codes for Boolean masking of cryptographic computations," Carlet, Gaborit, Kim, and Solé defined a new class of rate one-half binary codes called complementary information set (or CIS) codes. The authors then classified all CIS codes of length less than or equal to 12. CIS codes have relations to classical Coding Theory as they are a generalization of self-dual codes. As stated in the paper, CIS codes also have important practical applications as they m...

  20. Dynamic code block size for JPEG 2000

    Science.gov (United States)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

    Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.

  1. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  2. Halftone Coding with JBIG2

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    2000-01-01

    The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated for lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray-scale, encoding of the gray-scale image, and construction of a halftone pattern dictionary. The decoder first decodes the gray-scale image. Then, for each gray-scale pixel, it looks up the corresponding halftone pattern in the dictionary and places it in the reconstruction bitmap at the position corresponding to the gray-scale pixel. The coding method is inherently lossy and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method for halftones created by periodic ordered dithering, by clustered dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion...
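
    The dictionary-based reconstruction step described above (look up the halftone pattern for each decoded gray-scale pixel and place it in the output bitmap) can be sketched in a few lines. The array layout and the toy pattern dictionary below are illustrative assumptions, not the JBIG2 bitstream format.

```python
import numpy as np

def reconstruct_halftone(gray: np.ndarray, patterns: np.ndarray) -> np.ndarray:
    """Rebuild a bi-level halftone from a decoded gray-scale image.

    gray     : (H, W) array of gray-scale indices, one per halftone cell.
    patterns : (L, ph, pw) dictionary of bi-level halftone patterns,
               indexed by gray level.
    """
    h, w = gray.shape
    _, ph, pw = patterns.shape
    bitmap = np.zeros((h * ph, w * pw), dtype=np.uint8)
    for row in range(h):
        for col in range(w):
            # Place the pattern for this gray level at the cell position.
            bitmap[row * ph:(row + 1) * ph,
                   col * pw:(col + 1) * pw] = patterns[gray[row, col]]
    return bitmap

# Toy 4-level dictionary of 2x2 dot patterns (0 = white, 1 = black).
patterns = np.array([[[0, 0], [0, 0]],
                     [[1, 0], [0, 0]],
                     [[1, 0], [0, 1]],
                     [[1, 1], [1, 1]]], dtype=np.uint8)
gray = np.array([[0, 3], [2, 1]])
print(reconstruct_halftone(gray, patterns))
```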

  3. BIRTH: a beam deposition code for non-circular tokamak plasmas

    International Nuclear Information System (INIS)

    Otsuka, Michio; Nagami, Masayuki; Matsuda, Toshiaki

    1982-09-01

    A new beam deposition code has been developed which is capable of calculating fast ion deposition profiles including the orbit correction. The code incorporates any injection geometry and a non-circular cross section plasma with a variable elongation and an outward shift of the magnetic flux surface. Typical cpu time on a DEC-10 computer is 10 - 20 seconds and 5 - 10 seconds with and without the orbit correction, respectively. This is shorter by an order of magnitude than that of other codes, e.g., Monte Carlo codes. The power deposition profile calculated by this code is in good agreement with that calculated by a Monte Carlo code. (author)

  4. Remote-Handled Transuranic Waste Content Codes (RH-Trucon)

    International Nuclear Information System (INIS)

    2006-01-01

    The Remote-Handled Transuranic (RH-TRU) Content Codes (RH-TRUCON) document describes the inventory of RH-TRU waste within the transportation parameters specified by the Remote-Handled Transuranic Waste Authorized Methods for Payload Control (RH-TRAMPAC). The RH-TRAMPAC defines the allowable payload for the RH-TRU 72-B. This document is a catalog of RH-TRU 72-B authorized contents by site. A content code is defined by the following components: (1) A two-letter site abbreviation that designates the physical location of the generated/stored waste (e.g., ID for Idaho National Laboratory [INL]). The site-specific letter designations for each of the sites are provided in Table 1. (2) A three-digit code that designates the physical and chemical form of the waste (e.g., content code 317 denotes TRU Metal Waste). For RH-TRU waste to be transported in the RH-TRU 72-B, the first number of this three-digit code is ''3''. The second and third numbers of the three-digit code describe the physical and chemical form of the waste. Table 2 provides a brief description of each generic code. Content codes are further defined as subcodes by an alpha trailer after the three-digit code to allow segregation of wastes that differ in one or more parameter(s). For example, the alpha trailers of the subcodes ID 322A and ID 322B may be used to differentiate between waste packaging configurations. As detailed in the RH-TRAMPAC, compliance with flammable gas limits may be demonstrated through the evaluation of compliance with either a decay heat limit or flammable gas generation rate (FGGR) limit per container specified in approved content codes. As applicable, if a container meets the watt*year criteria specified by the RH-TRAMPAC, the decay heat limits based on the dose-dependent G value may be used as specified in an approved content code. If a site implements the administrative controls outlined in the RH-TRAMPAC and Appendix 2.4 of the RH-TRU Payload Appendices, the decay heat or FGGR

  5. Translation of ARAC computer codes

    International Nuclear Information System (INIS)

    Takahashi, Kunio; Chino, Masamichi; Honma, Toshimitsu; Ishikawa, Hirohiko; Kai, Michiaki; Imai, Kazuhiko; Asai, Kiyoshi

    1982-05-01

    In 1981 we translated the well-known MATHEW and ADPIC codes, together with their auxiliary computer codes, from the CDC 7600 version to the FACOM M-200. The codes form part of the Atmospheric Release Advisory Capability (ARAC) system of Lawrence Livermore National Laboratory (LLNL). MATHEW is a code for three-dimensional wind field analysis. Using observed data, it calculates the mass-consistent wind field of grid cells by a variational method. ADPIC is a code for three-dimensional concentration prediction of gases and particulates released to the atmosphere. It calculates concentrations in grid cells by the particle-in-cell method. They are written in LLLTRAN, i.e., the LLNL Fortran language, and are implemented on the CDC 7600 computers of LLNL. In this report, i) the computational methods of MATHEW/ADPIC and their auxiliary codes, ii) comparisons of the calculated results with our JAERI particle-in-cell and Gaussian plume models, and iii) the translation procedures from the CDC version to the FACOM M-200 are described. With the permission of LLNL G-Division, this report is published to keep track of the translation procedures and to serve our JAERI researchers for comparisons and references in their work. (author)

  6. Coupled geochemical and solute transport code development

    International Nuclear Information System (INIS)

    Morrey, J.R.; Hostetler, C.J.

    1985-01-01

    A number of coupled geochemical hydrologic codes have been reported in the literature. Some of these codes have directly coupled the source-sink term to the solute transport equation. The current consensus seems to be that directly coupling hydrologic transport and chemical models through a series of interdependent differential equations is not feasible for multicomponent problems with complex geochemical processes (e.g., precipitation/dissolution reactions). A two-step process appears to be the required method of coupling codes for problems where a large suite of chemical reactions must be monitored. Two-step structure requires that the source-sink term in the transport equation is supplied by a geochemical code rather than by an analytical expression. We have developed a one-dimensional two-step coupled model designed to calculate relatively complex geochemical equilibria (CTM1D). Our geochemical module implements a Newton-Raphson algorithm to solve heterogeneous geochemical equilibria, involving up to 40 chemical components and 400 aqueous species. The geochemical module was designed to be efficient and compact. A revised version of the MINTEQ Code is used as a parent geochemical code
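
    The two-step coupling described above (a transport step followed by a chemistry step that supplies the source/sink term) can be sketched as a simple operator-splitting loop. The reaction model below is a deliberately crude stand-in (relaxation toward a solubility limit), not the MINTEQ/CTM1D chemistry, and all names, grid sizes, and rate constants are illustrative assumptions.

```python
import numpy as np

def transport_step(c, velocity, dispersion, dx, dt):
    """Explicit 1-D advection-dispersion update for one time step."""
    adv = -velocity * (c - np.roll(c, 1)) / dx          # upwind advection
    disp = dispersion * (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    c_new = c + dt * (adv + disp)
    c_new[0] = 1.0                                       # fixed inlet boundary
    return c_new

def chemistry_step(c, k_eq, dt, rate=1.0):
    """Stand-in geochemical module: relax concentrations toward a limit k_eq.

    In the two-step scheme this module plays the role of the geochemical
    code that supplies the source/sink term after each transport step.
    """
    return c + dt * rate * np.minimum(k_eq - c, 0.0)     # remove the excess

# Two-step (sequential) coupling loop: transport, then chemistry.
cells, dx, dt = 100, 1.0, 0.25
conc = np.zeros(cells)
for _ in range(400):
    conc = transport_step(conc, velocity=1.0, dispersion=0.5, dx=dx, dt=dt)
    conc = chemistry_step(conc, k_eq=0.8, dt=dt)
print(conc[:5])   # concentration profile shaped jointly by transport and chemistry
```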

  7. Nucleotide sequence of the Escherichia coli pyrE gene and of the DNA in front of the protein-coding region

    DEFF Research Database (Denmark)

    Poulsen, Peter; Jensen, Kaj Frank; Valentin-Hansen, Poul

    1983-01-01

    Orotate phosphoribosyltransferase (EC 2.4.2.10) was purified to electrophoretic homogeneity from a strain of Escherichia coli containing the pyrE gene cloned on a multicopy plasmid. The relative molecular masses (Mr) of the native enzyme and its subunit were estimated by means of gel filtration... leader segment in front of the protein-coding region. This leader contains a structure with features characteristic for a (translated?) rho-independent transcriptional terminator, which is preceded by a cluster of uridylate residues. This indicates that the frequency of pyrE transcription is regulated...

  8. Ethical and educational considerations in coding hand surgeries.

    Science.gov (United States)

    Lifchez, Scott D; Leinberry, Charles F; Rivlin, Michael; Blazar, Philip E

    2014-07-01

    To assess treatment coding knowledge and practices among residents, fellows, and attending hand surgeons. Through the use of 6 hypothetical cases, we developed a coding survey to assess coding knowledge and practices. We e-mailed this survey to residents, fellows, and attending hand surgeons. In addition, we asked 2 professional coders to code these cases. A total of 71 participants completed the survey out of 134 people to whom the survey was sent (response rate = 53%). We observed marked disparity in codes chosen among surgeons and among professional coders. Results of this study indicate that coding knowledge, not just its ethical application, had a major role in coding procedures accurately. Surgical coding is an essential part of a hand surgeon's practice and is not well learned during residency or fellowship. Whereas ethical issues such as deliberate unbundling and upcoding may have a role in inaccurate coding, lack of knowledge among surgeons and coders has a major role as well. Coding has a critical role in every hand surgery practice. Inconsistencies among those polled in this study reveal that an increase in education on coding during training and improvement in the clarity and consistency of the Current Procedural Terminology coding rules themselves are needed. Copyright © 2014 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  9. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  10. Wind power within European grid codes: Evolution, status and outlook

    DEFF Research Database (Denmark)

    Vrana, Til Kristian; Flynn, Damian; Gomez-Lazaro, Emilio

    2018-01-01

    Grid codes are technical specifications that define the requirements for any facility connected to electricity grids. Wind power plants are increasingly facing system stability support requirements similar to conventional power stations, which is to some extent unavoidable, as the share of wind power in the generation mix is growing. The adaptation process of grid codes for wind power plants is not yet complete, and grid codes are expected to evolve further in the future. ENTSO-E is the umbrella organization for European TSOs, seen by many as a leader in terms of requirements sophistication... is largely based on the definitions and provisions set out by ENTSO-E. The main European grid code requirements are outlined here, including also HVDC connections and DC-connected power park modules. The focus is on requirements that are considered particularly relevant for large wind power plants...

  11. Criticality qualification of a new Monte Carlo code for reactor core analysis

    International Nuclear Information System (INIS)

    Catsaros, N.; Gaveau, B.; Jaekel, M.; Maillard, J.; Maurel, G.; Savva, P.; Silva, J.; Varvayanni, M.; Zisis, Th.

    2009-01-01

    In order to accurately simulate Accelerator Driven Systems (ADS), the utilization of at least two computational tools is necessary (the thermal-hydraulic problem is not considered in the frame of this work), namely: (a) a High Energy Physics (HEP) code system dealing with the 'Accelerator part' of the installation, i.e. the computation of the spectrum, intensity and spatial distribution of the neutron source created by (p, n) reactions of a proton beam on a target, and (b) a neutronics code system handling the 'Reactor part' of the installation, i.e. criticality calculations, neutron transport, fuel burn-up and fission product evolution. In the present work, a single computational tool, able to analyze an ADS in its entirety and also to perform core analysis for a conventional fission reactor, is proposed. The code is based on the well-qualified HEP code GEANT (version 3), transformed to perform criticality calculations. The performance of the code is tested against two qualified neutronics code systems, the diffusion/transport SCALE-CITATION code system and the Monte Carlo TRIPOLI code, in the case of a research reactor core analysis. A satisfactory agreement was exhibited by the three codes.

  12. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross-correlation (CC) and practical code length to support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase induced intensity noise (PIIN). In this paper, we propose new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC, based on the Jordan block matrix and constructed in simple algebraic ways. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and could support long spans at high data rates.

  13. Measurement of reactivity coefficients for code validation

    International Nuclear Information System (INIS)

    Nuding, Matthias; Loetsch, Thomas

    2005-01-01

    In the year 2003, measurements in the cold reactor state were performed at the NPP KKI 2 in order to validate the codes that are used for reactor core calculations and especially for the proof of the shutdown margin, which is produced by calculations only. For full-power states, code verification is quite easy because the calculations can be compared with different measured values, e.g. with the activation values determined by the aeroball system. For cold reactor states, however, the data base is smaller, especially for reactor cores that are quite 'inhomogeneous' and have rather high fissile plutonium and 235 U contents. At the same time, the cold reactor state is important regarding the shutdown margin. For these reasons, the measurements mentioned above were performed in order to check the accuracy of the codes that have been used by the operator and by our organization for many years. Basically, boron concentrations and control rod worths for different configurations were measured. The results of the calculations show very good agreement with the measured values. Therefore, it can be stated that both the operator's code system and ours are suitable for routine use, e.g. during licensing procedures. (Authors)

  14. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list decoding algorithm for matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units.

  15. Hide and Seek: Exploiting and Hardening Leakage-Resilient Code Randomization

    Science.gov (United States)

    2016-05-30

    HMACs generated using 128-bit AES encryption. We do not use AES encryption to generate HMACs due to its high overhead; the authors of CCFI report... execute-only permissions on memory accesses, (ii) code pointer hiding (e.g., indirection or encryption), and (iii) decoys (e.g., booby traps). Among... following techniques: they a) enforce execute-only permissions on code pages to mitigate direct information leakage, b) introduce an encryption or

  16. Hamor-2: a computer code for LWR inventory calculation

    International Nuclear Information System (INIS)

    Guimaraes, L.N.F.; Marzo, M.A.S.

    1985-01-01

    A method for the accurate calculation of the inventory of LWR reactors is presented. This method uses the Hamor-2 computer code. Hamor-2 is obtained from the coupling of two other computer codes, Hammer-Techion and Origen-2. For testing Hamor-2, its results were compared to measured actinide concentration values from two PWR reactors: Kernkraftwerk Obrigheim (KWO) and H.B. Robinson (HBR). These actinides are U 235, U 236, U 238, Pu 239, Pu 241 and Pu 242. The computer code Hamor-2 shows better results than the computer code Origen-2 when both are compared with experimental results. (E.G.) [pt

  17. Applications of the Los Alamos High Energy Transport code

    International Nuclear Information System (INIS)

    Waters, L.; Gavron, A.; Prael, R.E.

    1992-01-01

    Simulation codes reliable over a large range of energies are essential to analyze the environment of vehicles and habitats proposed for space exploration. The LAHET Monte Carlo code has recently been expanded to track high-energy hadrons with FLUKA, while retaining the original Los Alamos version of HETC at lower energies. Electrons and photons are transported with EGS4, and an interface to the MCNP Monte Carlo code is provided to analyze neutrons with kinetic energies less than 20 MeV. These codes are benchmarked by comparison of LAHET/MCNP calculations to data from the Brookhaven experiment E814 participant calorimeter

  18. The SWAN coupling code: user's guide

    International Nuclear Information System (INIS)

    Litaudon, X.; Moreau, D.

    1988-11-01

    Coupling of slow waves in a plasma near the lower hybrid frequency is well known, and linear theory with a density step followed by a constant gradient can be used with some confidence. With the aid of the computer code SWAN, which stands for 'Slow Wave Antenna', the following parameters can be numerically calculated: the n-parallel power spectrum, the directivity (weighted by the current drive efficiency), the reflection coefficients (amplitude and phase) both before and after the E-plane junctions, the scattering matrix at the plasma interface, the scattering matrix at the E-plane junctions, the maximum electric field in the secondary waveguides and the location where it occurs, the effect of passive waveguides on each side of the antenna, and the effect of a finite magnetic field in front of the antenna (for a homogeneous plasma). This manual gives the basic information on the main assumptions of the coupling theory and on the use and general structure of the code itself. It answers the following questions: what are the main assumptions of the physical model; how is a job executed; what are the input parameters of the code; and what are the output results and where are they written. (author)

  19. Coding and Billing in Surgical Education: A Systems-Based Practice Education Program.

    Science.gov (United States)

    Ghaderi, Kimeya F; Schmidt, Scott T; Drolet, Brian C

    Despite increased emphasis on systems-based practice through the Accreditation Council for Graduate Medical Education core competencies, few studies have examined what surgical residents know about coding and billing. We sought to create and measure the effectiveness of a multifaceted approach to improving resident knowledge and performance of documenting and coding outpatient encounters. We identified knowledge gaps and barriers to documentation and coding in the outpatient setting. We implemented a series of educational and workflow interventions with a group of 12 residents in a surgical clinic at a tertiary care center. To measure the effect of this program, we compared billing codes for 1 year before intervention (FY2012) to prospectively collected data from the postintervention period (FY2013). All related documentation and coding were verified by study-blinded auditors. Interventions took place at the outpatient surgical clinic at Rhode Island Hospital, a tertiary-care center. A cohort of 12 plastic surgery residents ranging from postgraduate year 2 through postgraduate year 6 participated in the interventional sequence. A total of 1285 patient encounters in the preintervention group were compared with 1170 encounters in the postintervention group. Using evaluation and management (E&M) codes as a measure of documentation and coding, we demonstrated a significant and durable increase in billing with supporting clinical documentation after the intervention. For established patient visits, the monthly average E&M code level increased from 2.14 to 3.05 (p < ...). ... coding and billing of outpatient clinic encounters. Using externally audited coding data, we demonstrate significantly increased rates of higher-complexity E&M coding in a stable patient population, based on improved documentation and billing awareness by the residents. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  20. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
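
    For the unique decipherability (UD) property that coding partitions generalize, the classical Sardinas-Patterson procedure gives a concrete decision test for finite codes. The sketch below implements that standard test; it is an illustration of UD testing, not the paper's canonical-partition algorithm.

```python
def is_uniquely_decipherable(code):
    """Classical Sardinas-Patterson test for unique decipherability (UD)."""
    def dangling_suffixes(a, b):
        # Suffixes left over when a word of one set is a proper prefix of
        # a word of the other set.
        out = set()
        for x in a:
            for y in b:
                if x != y and x.startswith(y):
                    out.add(x[len(y):])
                if x != y and y.startswith(x):
                    out.add(y[len(x):])
        return out

    code = set(code)
    seen = set()
    current = dangling_suffixes(code, code)
    while current:
        if current & code:            # a dangling suffix is itself a codeword,
            return False              # so two different factorizations exist
        key = frozenset(current)
        if key in seen:               # the suffix sets cycle without a clash
            return True
        seen.add(key)
        current = dangling_suffixes(current, code)
    return True

print(is_uniquely_decipherable({"0", "01", "11"}))   # True  (a suffix code)
print(is_uniquely_decipherable({"0", "01", "10"}))   # False ("010" decodes two ways)
```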

  1. Combinatorial neural codes from a mathematical coding theory perspective.

    Science.gov (United States)

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
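
    The error-correction notion examined above reduces, in its simplest form, to mapping a noisy binary response pattern to the nearest codeword in Hamming distance. The sketch below shows that generic nearest-neighbor decoding step; the toy codebook is an invented example, not a receptive field code from the paper.

```python
import numpy as np

def hamming_decode(noisy: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map a noisy binary pattern to the nearest codeword (Hamming distance)."""
    distances = np.count_nonzero(codebook != noisy, axis=1)
    return codebook[np.argmin(distances)]

# Toy binary "neural code": each codeword is the on/off pattern of 8 cells.
codebook = np.array([[1, 1, 1, 0, 0, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1, 0, 0],
                     [0, 0, 0, 0, 0, 1, 1, 1]])
noisy = np.array([1, 0, 1, 0, 0, 0, 1, 0])   # first codeword with two flipped bits
print(hamming_decode(noisy, codebook))        # recovers [1 1 1 0 0 0 0 0]
```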

  2. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
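
    A minimal sketch of systematic LDGM encoding over GF(2) is shown below: the codeword is the message followed by parity bits formed with a sparse (low-density) generator part. The random matrix construction and the parameters are illustrative assumptions, not the concatenated scheme or decoder of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_generator(k: int, m: int, ones_per_column: int = 3) -> np.ndarray:
    """Random sparse k x m parity part P of a systematic generator [I | P]."""
    p = np.zeros((k, m), dtype=np.uint8)
    for col in range(m):
        rows = rng.choice(k, size=ones_per_column, replace=False)
        p[rows, col] = 1
    return p

def ldgm_encode(bits: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Systematic encoding over GF(2): codeword = [bits | bits @ P mod 2]."""
    parity = bits @ p % 2
    return np.concatenate([bits, parity])

k, m = 16, 8                      # 16 information bits, 8 parity bits (rate 2/3)
p = sparse_generator(k, m)
message = rng.integers(0, 2, k, dtype=np.uint8)
codeword = ldgm_encode(message, p)
print(codeword)                   # systematic part first, then sparse parities
```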

  3. Automatic coding of online collaboration protocols

    NARCIS (Netherlands)

    Erkens, Gijsbert; Janssen, J.J.H.M.

    2006-01-01

    An automatic coding procedure is described to determine the communicative functions of messages in chat discussions. Five main communicative functions are distinguished: argumentative (indicating a line of argumentation or reasoning), responsive (e.g., confirmations, denials, and answers),

  4. The MELTSPREAD Code for Modeling of Ex-Vessel Core Debris Spreading Behavior, Code Manual – Version3-beta

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, M. T. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    MELTSPREAD3 is a transient one-dimensional computer code that has been developed to predict the gravity-driven flow and freezing behavior of molten reactor core materials (corium) in containment geometries. Predictions can be made for corium flowing across surfaces under either dry or wet cavity conditions. The spreading surfaces that can be selected are steel, concrete, a user-specified material (e.g., a ceramic), or an arbitrary combination thereof. The corium can have a wide range of compositions of reactor core materials that includes distinct oxide phases (predominantly Zr, and steel oxides) plus metallic phases (predominantly Zr and steel). The code requires input that describes the containment geometry, melt “pour” conditions, and cavity atmospheric conditions (i.e., pressure, temperature, and cavity flooding information). For cases in which the cavity contains a preexisting water layer at the time of RPV failure, melt jet breakup and particle bed formation can be calculated mechanistically given the time-dependent melt pour conditions (input data) as well as the heatup and boiloff of water in the melt impingement zone (calculated). For core debris impacting either the containment floor or previously spread material, the code calculates the transient hydrodynamics and heat transfer which determine the spreading and freezing behavior of the melt. The code predicts conditions at the end of the spreading stage, including melt relocation distance, depth and material composition profiles, substrate ablation profile, and wall heatup. Code output can be used as input to other models such as CORQUENCH that evaluate long term core-concrete interaction behavior following the transient spreading stage. MELTSPREAD3 was originally developed to investigate BWR Mark I liner vulnerability, but has been substantially upgraded and applied to other reactor designs (e.g., the EPR), and more recently to the plant accidents at Fukushima Daiichi. The most recent round of

  5. A multiplex coding imaging spectrometer for X-ray astronomy

    International Nuclear Information System (INIS)

    Rocchia, R.; Deschamps, J.Y.; Koch-Miramond, L.; Tarrius, A.

    1985-06-01

    The paper describes a multiplex coding system associated with a solid state Si(Li) spectrometer designed to be placed at the focus of a grazing incidence telescope. In this instrument the spectrometric and imaging functions are separated. The coding system consists of a movable mask with pseudo-randomly distributed holes, located in the focal plane of the telescope. The pixel size lies in the range 100-200 microns. The close association of the coding system with a Si(Li) detector gives an imaging spectrometer combining the good efficiency (50% between 0.5 and 10 keV) and energy resolution (ΔE approximately 90 to 160 eV) of solid state spectrometers with the spatial resolution of the mask. Simulations and results obtained with a laboratory model are presented

  6. Assessing the INTERTRAN code for application in Asian environs

    International Nuclear Information System (INIS)

    Yoshimura, S.

    1986-10-01

    A Japanese study, which was carried out as part of the IAEA Coordinated Research Programme on Radiation Protection Implications of Transport Accidents Involving Radioactive Materials, provided evaluations of transport conditions of nuclear fuel in Japan. Nuclear fuel is transported in Japan in the form of UO2, UF6, fresh fuel assemblies and spent fuel. Based on these transport conditions, calculations were made using the INTERTRAN code, which was developed as part of the IAEA Coordinated Research Programme on Safe Transport of Radioactive Materials (1980-1985), for assessing doses to workers and to the public due to the transport of nuclear fuel. As a part of the study, a new code was developed for evaluating radiological impacts of the transport of radioactive materials. The code was also used for assessing doses from the transport of nuclear fuel in Japan. The results indicate that doses to workers and to the public due to the incident-free transport of nuclear fuel are low, i.e., of the order of 1-30 man mSv/100 km. The doses calculated by the Japanese code were in general slightly smaller than the doses calculated using the INTERTRAN code. The study concerned normal conditions of transport, i.e., no impact from incidents or accidents was evaluated. The study resulted, in addition, in some suggestions for further developing the INTERTRAN code.

  7. Computing Challenges in Coded Mask Imaging

    Science.gov (United States)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope (i.e., when wide fields of view, energies too high for focusing optics or too low for Compton/tracker techniques, and very good angular resolution are required). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, comparison of the EXIST/HET with the SWIFT/BAT and details of the design of the EXIST/HET.
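
    The correlation-based recovery mentioned in the slides can be illustrated with a short, hedged sketch (not taken from the presentation): the detector shadowgram is modeled as the convolution of the sky with a pseudo-random mask, and a rough sky estimate is recovered by cross-correlating the shadowgram with the mask pattern. The mask, sources and array sizes below are invented placeholders, and real instruments use a balanced decoding array rather than the raw mask.

        # Illustrative only: coded-mask shadowgram formation and correlation decoding.
        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(0)
        mask = rng.integers(0, 2, size=(64, 64)).astype(float)   # pseudo-random open/closed mask

        sky = np.zeros((64, 64))
        sky[20, 30] = 1.0          # bright point source
        sky[40, 10] = 0.5          # weaker point source

        # Each source projects a shifted copy of the mask onto the detector.
        shadowgram = fftconvolve(sky, mask, mode="same")

        # Simple reconstruction: cross-correlate the shadowgram with the mask pattern.
        reconstruction = fftconvolve(shadowgram, mask[::-1, ::-1], mode="same")
        peak = np.unravel_index(np.argmax(reconstruction), reconstruction.shape)
        print("brightest reconstructed pixel (should lie near the bright source):", peak)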

  8. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    Full Text Available An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprised of millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen) that are well-known among research communities to cause potential risks and security concerns, thereby decreasing a system’s robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft). The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands of 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
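
    The record does not include the authors' tooling; as a rough, hypothetical illustration of the kind of static scan described (counting occurrences of flagged C/C++ calls in a source tree), the following Python sketch could be used. The function list and file extensions are illustrative, and a real analysis would work on parsed code rather than regular expressions.

        # Hypothetical sketch: count known-unsafe C/C++ calls (e.g., strcpy, memcpy) in a source tree.
        import re
        from collections import Counter
        from pathlib import Path

        UNSAFE = ("strcpy", "strcat", "sprintf", "gets", "strcmp", "strlen", "memcpy")
        CALL_RE = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

        def scan_tree(root: str) -> Counter:
            counts = Counter()
            for path in Path(root).rglob("*"):
                if path.suffix in {".c", ".cc", ".cpp", ".h", ".hpp"}:
                    counts.update(CALL_RE.findall(path.read_text(errors="ignore")))
            return counts

        if __name__ == "__main__":
            for name, n in scan_tree(".").most_common():
                print(f"{name}: {n}")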

  9. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  10. Bit-coded regular expression parsing

    DEFF Research Database (Denmark)

    Nielsen, Lasse; Henglein, Fritz

    2011-01-01

    the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli’s greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...

  11. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer

  12. Generic programming for deterministic neutron transport codes

    International Nuclear Information System (INIS)

    Plagne, L.; Poncot, A.

    2005-01-01

    This paper discusses the implementation of neutron transport codes via generic programming techniques. Two different Boltzmann equation approximations have been implemented, namely the Sn and SPn methods. This implementation experiment shows that generic programming allows us to improve maintainability and readability of source codes with no performance penalties compared to classical approaches. In the present implementation, matrices and vectors as well as linear algebra algorithms are treated separately from the rest of source code and gathered in a tool library called 'Generic Linear Algebra Solver System' (GLASS). Such a code architecture, based on a linear algebra library, allows us to separate the three different scientific fields involved in transport codes design: numerical analysis, reactor physics and computer science. Our library handles matrices with optional storage policies and thus applies both to Sn code, where the matrix elements are computed on the fly, and to SPn code where stored matrices are used. Thus, using GLASS allows us to share a large fraction of source code between Sn and SPn implementations. Moreover, the GLASS high level of abstraction allows the writing of numerical algorithms in a form which is very close to their textbook descriptions. Hence the GLASS algorithms collection, disconnected from computer science considerations (e.g. storage policy), is very easy to read, to maintain and to extend. (authors)

  13. Summary of ENDF/B pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1981-12-01

    This document contains the summary documentation for the ENDF/B pre-processing codes: LINEAR, RECENT, SIGMA1, GROUPIE, EVALPLOT, MERGER, DICTION, CONVERT. This summary documentation is merely a copy of the comment cards that appear at the beginning of each programme; these comment cards always reflect the latest status of input options, etc. For the latest published documentation on the methods used in these codes see UCRL-50400, Vol.17 parts A-E, Lawrence Livermore Laboratory (1979)

  14. On the construction of capacity-achieving lattice Gaussian codes

    KAUST Repository

    Alghamdi, Wael Mohammed Abdullah

    2016-08-15

    In this paper, we propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3]. © 2016 IEEE.
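
    For readers unfamiliar with the lattices referenced above, the textbook definition of Construction A is sketched below in generic notation (this statement is standard background, not quoted from the paper): a linear code over a prime field is lifted to a lattice by taking all integer vectors that reduce to codewords modulo p.

        % Construction A (textbook form): given a linear code C \subseteq \mathbb{F}_p^n,
        \Lambda_A(C) \;=\; \{\, x \in \mathbb{Z}^n : x \bmod p \in C \,\} \;=\; C + p\,\mathbb{Z}^n .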

  15. Greedy vs. L1 convex optimization in sparse coding

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2015-01-01

    Sparse representation has been applied successfully in many image analysis applications, including abnormal event detection, in which a baseline is to learn a dictionary from the training data and detect anomalies from its sparse codes. During this procedure, sparse codes which can be achieved...... solutions. Considering the property of abnormal event detection, i.e., only normal videos are used as training data due to practical reasons, effective codes in classification application may not perform well in abnormality detection. Therefore, we compare the sparse codes and comprehensively evaluate...... their performance from various aspects to better understand their applicability, including computation time, reconstruction error, sparsity, detection...
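
    The abstract compares greedy and l1 sparse coding without reproducing the experimental details; as a small, hedged illustration of the comparison being made, the sketch below encodes the same random signals with scikit-learn's SparseCoder using a greedy solver (OMP) and an l1 solver (LARS-lasso) and reports reconstruction error and sparsity. The dictionary, data and parameter values are placeholders, not the paper's setup.

        # Sketch: compare greedy (OMP) and l1 (lasso) sparse codes on random data.
        import numpy as np
        from sklearn.decomposition import SparseCoder

        rng = np.random.default_rng(0)
        D = rng.standard_normal((50, 20))              # 50 dictionary atoms, 20-dim signals
        D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms
        X = rng.standard_normal((100, 20))             # 100 signals to encode

        for algo, kwargs in [("omp", {"transform_n_nonzero_coefs": 5}),
                             ("lasso_lars", {"transform_alpha": 0.1})]:
            codes = SparseCoder(dictionary=D, transform_algorithm=algo, **kwargs).transform(X)
            err = np.mean(np.sum((X - codes @ D) ** 2, axis=1))
            nnz = np.mean(np.sum(codes != 0, axis=1))
            print(f"{algo:10s}  mean reconstruction error = {err:.3f}  mean nonzeros = {nnz:.1f}")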

  16. On the construction of capacity-achieving lattice Gaussian codes

    KAUST Repository

    Alghamdi, Wael; Abediseid, Walid; Alouini, Mohamed-Slim

    2016-01-01

    In this paper, we propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3]. © 2016 IEEE.

  17. Neutron spallation source and the Dubna cascade code

    CERN Document Server

    Kumar, V; Goel, U; Barashenkov, V S

    2003-01-01

    The neutron multiplicity per incident proton, n/p, in collisions of a high energy proton beam with voluminous Pb and W targets has been estimated with the Dubna cascade code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and to the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energies E < 1.2 GeV and differ by up to 25% at higher energies. (author)

  18. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
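
    The authors' actual schema is not reproduced in this record; purely as a simplified, hypothetical illustration of representing and querying a hierarchical classification in XML, the following Python sketch builds a tiny ICD-10-like fragment with the standard library (element and attribute names are invented for illustration).

        # Hypothetical, simplified XML representation of a small ICD-10-like excerpt.
        import xml.etree.ElementTree as ET

        DOC = """
        <classification system="ICD-10" version="illustrative">
          <chapter code="X" title="Diseases of the respiratory system">
            <category code="J45" title="Asthma">
              <subcategory code="J45.0" title="Predominantly allergic asthma"/>
              <subcategory code="J45.1" title="Nonallergic asthma"/>
            </category>
          </chapter>
        </classification>
        """

        root = ET.fromstring(DOC)
        # Resolve a code to its title by walking the hierarchy.
        for node in root.iter():
            if node.get("code") == "J45.1":
                print(node.get("code"), "->", node.get("title"))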

  19. Evaluating the benefits of commercial building energy codes and improving federal incentives for code adoption.

    Science.gov (United States)

    Gilbraith, Nathaniel; Azevedo, Inês L; Jaramillo, Paulina

    2014-12-16

    The federal government has the goal of decreasing commercial building energy consumption and pollutant emissions by incentivizing the adoption of commercial building energy codes. Quantitative estimates of code benefits at the state level that can inform the size and allocation of these incentives are not available. We estimate the state-level climate, environmental, and health benefits (i.e., social benefits) and reductions in energy bills (private benefits) of a more stringent code (ASHRAE 90.1-2010) relative to a baseline code (ASHRAE 90.1-2007). We find that reductions in site energy use intensity range from 93 MJ/m(2) of new construction per year (California) to 270 MJ/m(2) of new construction per year (North Dakota). Total annual benefits from more stringent codes total $506 million for all states, where $372 million are from reductions in energy bills, and $134 million are from social benefits. These total benefits range from $0.6 million in Wyoming to $49 million in Texas. Private benefits range from $0.38 per square meter in Washington State to $1.06 per square meter in New Hampshire. Social benefits range from $0.2 per square meter annually in California to $2.5 per square meter in Ohio. Reductions in human/environmental damages and future climate damages account for nearly equal shares of social benefits.

  20. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each different code length. Thus, new families of QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes together with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  1. Entanglement-assisted quantum MDS codes from negacyclic codes

    Science.gov (United States)

    Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang

    2018-03-01

    The entanglement-assisted formalism generalizes the standard stabilizer formalism, which can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q+1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.

  2. Visualizing code and coverage changes for code review

    NARCIS (Netherlands)

    Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.

    2016-01-01

    One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.

  3. Utilization of KENO-IV computer code with HANSEN-ROACH library

    International Nuclear Information System (INIS)

    Lima Barros, M. de; Vellozo, S.O.

    1982-01-01

    Several analyses with the KENO-IV computer code, which is based on the Monte Carlo method, and the HANSEN-ROACH cross section library were carried out, aiming to establish the most convenient way to perform criticality calculations with this computer code and these cross sections. (E.G.) [pt]

  4. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.

  5. Attention in Relation to Coding and Planning in Reading

    Science.gov (United States)

    Mahapatra, Shamita

    2015-01-01

    A group of 50 skilled readers and a group of 50 less-skilled readers of Grade 5 matched for age and intelligence and selected on the basis of their proficiency in reading comprehension were tested for their competence in word reading and the processes of attention, simultaneous coding, successive coding and planning at three levels, i.e.,…

  6. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    Full Text Available A new code structure for spectral amplitude coding optical code division multiple access systems based on double weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is another variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. Much better performance can be obtained by using the EDW code compared to existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes. Both theoretical analysis and simulation show that the EDW code performs much better than the Hadamard and MFH codes.

  7. Mother code specifications (Appendix to CEA report 2472)

    International Nuclear Information System (INIS)

    Pillard, Denise; Soule, Jean-Louis

    1964-12-01

    The Mother code (written in Fortran for the IBM 7094) computes the integral cross section and the first two moments of energy transfer of a thermalizer. Computation organisation and methods are presented in another document. This document presents the code specifications, i.e. input data (for spectrum description, printing options, input record formats, conditions to be met by values), and results (printing formats and options, writing and punching options and formats).

  8. Compilation of the abstracts of nuclear computer codes available at CPD/IPEN

    International Nuclear Information System (INIS)

    Granzotto, A.; Gouveia, A.S. de; Lourencao, E.M.

    1981-06-01

    A compilation of all computer codes available at IPEN in S. Paulo is presented. These computer codes are classified according to the Argonne National Laboratory and Nuclear Energy Agency classification scheme. (E.G.) [pt]

  9. CONTEMPT-DG containment analysis code

    International Nuclear Information System (INIS)

    Deem, R.E.; Rousseau, K.

    1982-01-01

    The assessment of hydrogen burning in a containment building during a degraded core event requires a knowledge of various system responses. These system responses (i.e. heat sinks, fan cooler units, sprays, etc.) can have a marked effect on the overall containment integrity results during a hydrogen burn. In an attempt to properly handle the various system responses and still retain the capability to perform sensitivity analysis on various parameters, the CONTEMPT-DG computer code was developed. This paper will address the historical development of the code, its various features, and the rationale for its development. Comparisons between results from the CONTEMPT-DG analyses and results from similar MARCH analyses will also be given

  10. General Monte Carlo code MONK

    International Nuclear Information System (INIS)

    Moore, J.G.

    1974-01-01

    The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form, i.e., the cross-section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described: geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)

  11. Review and comparison of WWER and LWR Codes and Standards

    International Nuclear Information System (INIS)

    Buckthorpe, D.; Tashkinov, A.; Brynda, J.; Davies, L.M.; Cueto-Felgeueroso, C.; Detroux, P.; Bieniussa, K.; Guinovart, J.

    2003-01-01

    The results of work on a collaborative project on the comparison of Codes and Standards used for safety related components of WWER and LWR type reactors are presented. This work was performed on behalf of the European Commission, Working Group Codes and Standards, and considers areas such as rules, criteria and provisions, failure mechanisms, the derivation and understanding behind the fatigue curves, piping, materials and aging, manufacturing and ISI. WWERs are essentially designed and constructed using the Russian PNAE Code together with special provisions in a few countries (e.g. Czech Republic) from national standards. The LWR Codes have a strong dependence on the ASME Code. Also within Western Europe other codes are used, including RCC-M, KTA and British Standards. A comparison of procedures used in all these codes and standards has been made to investigate the potential for equivalencies between the codes and any grounds for future cooperation between eastern and western experts in this field. (author)

  12. Shadowfax: Moving mesh hydrodynamical integration code

    Science.gov (United States)

    Vandenbroucke, Bert

    2016-05-01

    Shadowfax simulates galaxy evolution. Written in object-oriented modular C++, it evolves a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. For the hydrodynamical integration, it makes use of a (co-) moving Lagrangian mesh. The code has a 2D and 3D version, contains utility programs to generate initial conditions and visualize simulation snapshots, and its input/output is compatible with a number of other simulation codes, e.g. Gadget2 (ascl:0003.001) and GIZMO (ascl:1410.003).

  13. Controlando a atividade policial: uma análise comparada dos códigos de conduta no Brasil e Canadá Monitoring police activity: a comparative analysis of codes of ethics in Brazil and Canada

    Directory of Open Access Journals (Sweden)

    Arthur Trindade

    2011-08-01

    Full Text Available In this paper we discuss police deontology and the codes of ethics used in two police institutions: the Military Police of the Federal District (Brazil) and the Ottawa Police Service (Canada). First, we analyse and compare the contents of these documents. Next, we examine how each of those police institutions coordinates its training and evaluation systems with its own deontological code. Finally, we find that the existence of deontology codes in itself, without administrative regulation, does not guarantee the appropriate control of police activities. Furthermore, we identified a need for assimilation of such codes and administrative regulations by the police training and evaluation systems.

  14. A Case for Dynamic Reverse-code Generation to Debug Non-deterministic Programs

    Directory of Open Access Journals (Sweden)

    Jooyong Yi

    2013-09-01

    Full Text Available Backtracking (i.e., reverse execution) helps the user of a debugger to naturally think backwards along the execution path of a program, and thinking backwards makes it easy to locate the origin of a bug. So far backtracking has been implemented mostly by state saving or by checkpointing. These implementations, however, inherently do not scale. Meanwhile, a more recent backtracking method based on reverse-code generation seems promising because executing reverse code can restore the previous states of a program without state saving. Two methods that generate reverse code can be found in the literature: (a) static reverse-code generation, which pre-generates reverse code through static analysis before starting a debugging session, and (b) dynamic reverse-code generation, which generates reverse code by applying dynamic analysis on the fly during a debugging session. In particular, we espoused the latter in our previous work to accommodate non-determinism of a program caused by, e.g., multi-threading. To demonstrate the usefulness of our dynamic reverse-code generation, this article presents a case study of various backtracking methods, including ours. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation outperforms the existing backtracking methods in terms of memory efficiency.
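
    The cited approach works inside a debugger on real programs; purely as a toy illustration of the underlying idea (emit an inverse operation while executing each forward update, so earlier states can be restored without snapshots), a minimal, hypothetical Python sketch for in-place arithmetic updates might look as follows.

        # Toy sketch: generate reverse "code" on the fly so state can be restored
        # by executing the inverses, rather than by saving snapshots.
        reverse_log = []                     # most recent inverse operation last

        def add(store, key, amount):
            store[key] = store.get(key, 0) + amount
            reverse_log.append((key, -amount))   # the inverse of "+= amount" is "-= amount"

        def backtrack(store, steps=1):
            for _ in range(steps):
                key, inverse_amount = reverse_log.pop()
                store[key] += inverse_amount

        state = {}
        add(state, "x", 5)
        add(state, "x", 3)
        print(state)                         # {'x': 8}
        backtrack(state)                     # undo the last update
        print(state)                         # {'x': 5}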

  15. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

    Siret , Nicolas; Wipliez , Matthieu; Nezan , Jean François; Palumbo , Francesca

    2012-01-01

    High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulties of compiling input imperative languages into efficient hardware code. Moreover, the hardware code generated by HLS tools is usually target-dependent and at a low level of abstraction (i.e. gate-level...

  16. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
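
    The DLL itself targets GoldSim's external-element interface; the sketch below only illustrates, in Python and under assumed file names and output format, the general pattern the description lays out: write an input file from a list of inputs, run the external application, and read its outputs back.

        # Hypothetical sketch of the linking pattern described above.
        import subprocess

        def run_external(inputs, exe="external_code", infile="case.inp", outfile="case.out"):
            # 1. Create the input file expected by the (hypothetical) external application.
            with open(infile, "w") as f:
                for value in inputs:
                    f.write(f"{value}\n")
            # 2. Run the external code (assumed to read case.inp and write case.out).
            subprocess.run([exe, infile], check=True)
            # 3. Return the outputs, assumed here to be one number per line.
            with open(outfile) as f:
                return [float(line) for line in f if line.strip()]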

  17. High efficiency video coding coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

    The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at similar compressed bit rates as HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in a clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it also serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  18. HOTSPOT Health Physics codes for the PC

    Energy Technology Data Exchange (ETDEWEB)

    Homann, S.G.

    1994-03-01

    The HOTSPOT Health Physics codes were created to provide Health Physics personnel with a fast, field-portable calculation tool for evaluating accidents involving radioactive materials. HOTSPOT codes are a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. HOTSPOT programs are reasonably accurate for a timely initial assessment. More importantly, HOTSPOT codes produce a consistent output for the same input assumptions and minimize the probability of errors associated with reading a graph incorrectly or scaling a universal nomogram during an emergency. The HOTSPOT codes are designed for short-term (less than 24 hours) release durations. Users requiring radiological release consequences for release scenarios over a longer time period, e.g., annual windrose data, are directed to such long-term models as CAPP88-PC (Parks, 1992). Users requiring more sophisticated modeling capabilities, e.g., complex terrain; multi-location real-time wind field data; etc., are directed to such capabilities as the Department of Energy's ARAC computer codes (Sullivan, 1993). Four general programs -- Plume, Explosion, Fire, and Resuspension -- calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Other programs deal with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. Additional programs estimate the dose commitment from the inhalation of any one of the radionuclides listed in the database of radionuclides; calibrate a radiation survey instrument for ground-survey measurements; and screen plutonium uptake in the lung (see FIDLER Calibration and LUNG Screening sections).

  19. HOTSPOT Health Physics codes for the PC

    International Nuclear Information System (INIS)

    Homann, S.G.

    1994-03-01

    The HOTSPOT Health Physics codes were created to provide Health Physics personnel with a fast, field-portable calculation tool for evaluating accidents involving radioactive materials. HOTSPOT codes are a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. HOTSPOT programs are reasonably accurate for a timely initial assessment. More importantly, HOTSPOT codes produce a consistent output for the same input assumptions and minimize the probability of errors associated with reading a graph incorrectly or scaling a universal nomogram during an emergency. The HOTSPOT codes are designed for short-term (less than 24 hours) release durations. Users requiring radiological release consequences for release scenarios over a longer time period, e.g., annual windrose data, are directed to such long-term models as CAPP88-PC (Parks, 1992). Users requiring more sophisticated modeling capabilities, e.g., complex terrain; multi-location real-time wind field data; etc., are directed to such capabilities as the Department of Energy's ARAC computer codes (Sullivan, 1993). Four general programs -- Plume, Explosion, Fire, and Resuspension -- calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Other programs deal with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. Additional programs estimate the dose commitment from the inhalation of any one of the radionuclides listed in the database of radionuclides; calibrate a radiation survey instrument for ground-survey measurements; and screen plutonium uptake in the lung (see FIDLER Calibration and LUNG Screening sections)

  20. Converter of a continuous code into the Grey code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; TrUbnikov, V.R.

    1979-01-01

    Described is a converter of a continuous code into the Grey code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of the spectrometer differential nonlinearity to ±0.7% over 98% of the measured range. To convert the continuous code corresponding to the input signal amplitude into the Grey code, the converter exploits the regular pattern in which ones and zeroes alternate in each bit of the Grey code as the number of pulses of the continuous code changes continuously. The converter is built from 155-series logic elements; the repetition frequency of the continuous-code pulses at the converter input is 25 MHz.
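
    As background for the converter above (this is the textbook binary-reflected Grey/Gray code relation, not the 155-series circuit itself), the bit-level regularity it exploits can be sketched in a few lines:

        # Textbook binary-reflected Gray code relations (illustration only).
        def binary_to_gray(n: int) -> int:
            return n ^ (n >> 1)

        def gray_to_binary(g: int) -> int:
            n = 0
            while g:
                n ^= g
                g >>= 1
            return n

        for i in range(8):
            print(f"{i:03b} -> {binary_to_gray(i):03b}")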

  1. Hierarchical differences in population coding within auditory cortex.

    Science.gov (United States)

    Downer, Joshua D; Niwa, Mamiko; Sutter, Mitchell L

    2017-08-01

    Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (r_noise) between simultaneously recorded neurons and found that whereas engagement decreased average r_noise in A1, engagement increased average r_noise in ML. This finding surprised us, because attentive states are commonly reported to decrease average r_noise. We analyzed the effect of r_noise on AM coding in both A1 and ML and found that whereas engagement-related shifts in r_noise in A1 enhance AM coding, r_noise shifts in ML have little effect. These results imply that the effect of r_noise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation of this is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing r_noise. Therefore, the hierarchical emergence of r_noise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity. NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures. An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their

  2. Preemptive mobile code protection using spy agents

    OpenAIRE

    Kalogridis, Georgios

    2011-01-01

    This thesis introduces 'spy agents' as a new security paradigm for evaluating trust in remote hosts in mobile code scenarios. In this security paradigm, a spy agent, i.e. a mobile agent which circulates amongst a number of remote hosts, can employ a variety of techniques in order to both appear 'normal' and suggest to a malicious host that it can 'misuse' the agent's data or code without being held accountable. A framework for the operation and deployment of such spy agents is described. ...

  3. Pilotless Frame Synchronization Using LDPC Code Constraints

    Science.gov (United States)

    Jones, Christopher; Vissasenor, John

    2009-01-01

    A method of pilotless frame synchronization has been devised for low-density parity-check (LDPC) codes. In pilotless frame synchronization, there are no pilot symbols; instead, the offset is estimated by exploiting selected aspects of the structure of the code. The advantage of pilotless frame synchronization is that the bandwidth of the signal is reduced by an amount associated with elimination of the pilot symbols. The disadvantage is an increase in the amount of receiver data processing needed for frame synchronization.

  4. Overview of the ArbiTER edge plasma eigenvalue code

    Science.gov (United States)

    Baver, Derek; Myra, James; Umansky, Maxim

    2011-10-01

    The Arbitrary Topology Equation Reader, or ArbiTER, is a flexible eigenvalue solver that is currently under development for plasma physics applications. The ArbiTER code builds on the equation parser framework of the existing 2DX code, extending it to include a topology parser. This will give the code the capability to model problems with complicated geometries (such as multiple X-points and scrape-off layers) or model equations with arbitrary numbers of dimensions (e.g. for kinetic analysis). In the equation parser framework, model equations are not included in the program's source code. Instead, an input file contains instructions for building a matrix from profile functions and elementary differential operators. The program then executes these instructions in a sequential manner. These instructions may also be translated into analytic form, thus giving the code transparency as well as flexibility. We will present an overview of how the ArbiTER code is to work, as well as preliminary results from early versions of this code. Work supported by the U.S. DOE.
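
    The ArbiTER input language is not shown in this record; the sketch below only illustrates the general idea it describes (assembling an eigenvalue problem from a short list of "instructions" built out of elementary operators and profile functions, rather than hard-coding the model equation), using a 1D finite-difference discretization as a stand-in.

        # Hypothetical "equation parser" style matrix builder: the model is assembled
        # from a list of instructions, then solved as an eigenvalue problem.
        import numpy as np

        N, dx = 100, 0.01
        x = np.arange(N) * dx

        def second_derivative(n, h):
            """Standard 1D finite-difference second-derivative operator."""
            m = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
            return m / h**2

        OPERATORS = {
            "laplacian": lambda: second_derivative(N, dx),
            "profile":   lambda f: np.diag(f(x)),
        }

        # "Input file": build  L = d2/dx2 - V(x)  from elementary pieces.
        instructions = [
            ("laplacian", ()),
            ("profile",   (lambda x: -100.0 * (x - 0.5) ** 2,)),
        ]

        matrix = sum(OPERATORS[name](*args) for name, args in instructions)
        eigenvalues = np.sort(np.linalg.eigvals(matrix).real)
        print("lowest few eigenvalues:", eigenvalues[:3])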

  5. QR Code: An Interactive Mobile Advertising Tool

    Directory of Open Access Journals (Sweden)

    Ela Sibel Bayrak Meydanoglu

    2013-10-01

    Full Text Available Easy and rapid interaction between consumers and marketers enabled by mobile technology has prompted an increase in the usage of mobile media as an interactive marketing tool in recent years. One of the mobile technologies that can be used in interactive marketing for advertising is the QR code (Quick Response Code). Interactive advertising brings several advantages for the companies that apply it. For example, interaction with consumers provides significant information about consumers' preferences. Marketers can use information obtained from consumers for various marketing activities such as customizing advertisement messages, determining the target audience, and improving future products and services. QR codes used in marketing campaigns can provide links to specific websites in which, through various tools (e.g. questionnaires, voting), information about the needs and wants of customers is collected. The aim of this basic research is to illustrate the contribution of QR codes to the realization of the advantages gained by interactive advertising.
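
    As a concrete illustration of the first technical step in such a campaign (the QR code itself only carries a URL; the interaction happens on the landing page it points to), a code can be generated with the third-party Python package qrcode, assuming it is installed; the URL is a placeholder.

        # Requires the third-party "qrcode" package (pip install qrcode[pil]).
        import qrcode

        # Placeholder campaign URL; in practice this would point at the survey/landing page.
        img = qrcode.make("https://example.com/campaign-survey")
        img.save("campaign_qr.png")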

  6. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    Science.gov (United States)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  7. Benchmarking of epithermal methods in the lattice-physics code EPRI-CELL

    International Nuclear Information System (INIS)

    Williams, M.L.; Wright, R.Q.; Barhen, J.; Rothenstein, W.; Toney, B.

    1982-01-01

    The epithermal cross section shielding methods used in the lattice physics code EPRI-CELL (E-C) have been extensively studied to determine its major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements in the original methodology. These include: treatment of the external moderator source with intermediate resonance (IR) theory, development of a new Dancoff factor expression to account for clad interactions, development of a new method for treating resonance interference, and application of a generalized least squares method to compute best-estimate values for the Bell factor and group-dependent IR parameters. The modified E-C code with its new ENDF/B-V cross section library is tested for several numerical benchmark problems. Integral parameters computed by E-C are compared with those obtained with point-cross section Monte Carlo calculations, and E-C fine group cross sections are benchmarked against point-cross section discrete ordinates calculations. It is found that the code modifications improve agreement between E-C and the more sophisticated methods. E-C shows excellent agreement on the integral parameters and usually agrees within a few percent on fine-group, shielded cross sections.

  8. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  9. Entanglement-assisted quantum MDS codes constructed from negacyclic codes

    Science.gov (United States)

    Chen, Jianzhang; Huang, Yuanyuan; Feng, Chunhui; Chen, Riqing

    2017-12-01

    Recently, entanglement-assisted quantum codes have been constructed from cyclic codes by some scholars. However, determining the number of shared pairs required to construct entanglement-assisted quantum codes is not an easy task. In this paper, we propose a decomposition of the defining set of negacyclic codes. Based on this method, four families of entanglement-assisted quantum codes constructed in this paper satisfy the entanglement-assisted quantum Singleton bound, where the minimum distance satisfies q+1 ≤ d ≤ (n+2)/2. Furthermore, we construct two families of entanglement-assisted quantum codes with maximal entanglement.

  10. Analysis, by Relap5 code, of boron dilution phenomena in a Small Break Loca Transient, performed in PKL III E 2.2 test

    International Nuclear Information System (INIS)

    Rizzo, G.; Vella, G.

    2007-01-01

    The present work investigates the E2.2 thermal-hydraulic transient of the PKL III facility, which is a scaled reproduction of a typical German PWR, operated by FRAMATOME-ANP in Erlangen, Germany, within the framework of an international cooperation (OECD/SETH project). The main purpose of the project is to study boron dilution events in Pressurized Water Reactors and to contribute to the assessment of thermal-hydraulic system codes like Relap5. The experimental test PKL III E2.2 investigates the behavior of a typical PWR after a Small Break Loss Of Coolant Accident (SB-LOCA) in a cold leg and an immediate injection of borated water in two cold legs. The main purpose of this work is to simulate the PKL III test facility, and particularly its experimental transient, with the Relap5 system code. The adopted nodalization, already available at the Department of Nuclear Engineering (DIN), has been reviewed and applied with an accurate analysis of the experimental test parameters. The main result lies in the good agreement of the calculated data with the experimental measurements for a number of important variables. (author)

  11. Algorithms for coding scanned halftone pictures

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Forchhammer, Morten

    1988-01-01

    A method for coding scanned documents containing halftone pictures, e.g. newspapers and magazines, for transmission purposes is proposed. The halftone screen is estimated and the grey value of each dot is found, thus giving a compact description. At the receiver the picture is rescreened. A novel...

  12. Reliability of Calderbank-Shor-Steane codes and security of quantum key distribution

    International Nuclear Information System (INIS)

    Hamada, Mitsuru

    2004-01-01

    After Mayers (1996 Advances in Cryptography: Proc. Crypto'96 pp 343-57; 2001 J. Assoc. Comput. Mach. 48 351-406) gave a proof of the security of the Bennett-Brassard (1984 Proc. IEEE Int. Conf. on Computers, Systems and Signal Processing (Bangalore, India) pp 175-9) (BB84) quantum key distribution protocol, Shor and Preskill (2000 Phys. Rev. Lett. 85 441-4) made a remarkable observation that a Calderbank-Shor-Steane (CSS) code had been implicitly used in the BB84 protocol, and suggested its security could be proved by bounding the fidelity, say F_n, of the incorporated CSS code of length n in the form 1 - F_n ≤ exp[-nE + o(n)] for some positive number E. This work presents such a number E = E(R) as a function of the rate of codes R, and a threshold R_0 such that E(R) > 0 whenever R < R_0, where R_0 is larger than the achievable rate based on the Gilbert-Varshamov bound that is essentially given by Shor and Preskill. The codes in the present work are robust against fluctuations of channel parameters, a fact which is needed to establish the security rigorously and was not proved for rates above the Gilbert-Varshamov rate before in the literature. As a byproduct, the security of a modified BB84 protocol against any joint (coherent) attacks is proved quantitatively.

  13. Implementation and Performance Evaluation of Distributed Cloud Storage Solutions using Random Linear Network Coding

    DEFF Research Database (Denmark)

    Fitzek, Frank; Toth, Tamas; Szabados, Áron

    2014-01-01

    This paper advocates the use of random linear network coding for storage in distributed clouds in order to reduce storage and traffic costs in dynamic settings, i.e. when adding and removing numerous storage devices/clouds on-the-fly and when the number of reachable clouds is limited. We introduce...... various network coding approaches that trade-off reliability, storage and traffic costs, and system complexity relying on probabilistic recoding for cloud regeneration. We compare these approaches with other approaches based on data replication and Reed-Solomon codes. A simulator has been developed...... to carry out a thorough performance evaluation of the various approaches when relying on different system settings, e.g., finite fields, and network/storage conditions, e.g., storage space used per cloud, limited network use, and limited recoding capabilities. In contrast to standard coding approaches, our...
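
    The paper's simulator supports general finite fields and storage policies; the sketch below is deliberately limited to GF(2) and only illustrates the basic mechanics the abstract refers to: coded packets are random linear (here, XOR) combinations of source blocks, and any collection of packets whose coefficient vectors reach full rank can be decoded by Gaussian elimination.

        # Minimal random linear network coding illustration over GF(2).
        import random

        def coded_packets(blocks, rng=random.Random(1)):
            """Endless stream of (coefficient vector, payload) pairs over GF(2)."""
            while True:
                coeffs = [rng.randint(0, 1) for _ in blocks]
                if any(coeffs):
                    payload = 0
                    for c, b in zip(coeffs, blocks):
                        if c:
                            payload ^= b
                    yield coeffs, payload

        def decode(stream, k):
            """Collect packets until k pivots are found, then back-substitute."""
            pivots = {}                                   # pivot column -> (coeffs, payload)
            for coeffs, payload in stream:
                coeffs = list(coeffs)
                while True:                               # reduce by existing pivot rows
                    lead = next((i for i, c in enumerate(coeffs) if c), None)
                    if lead is None or lead not in pivots:
                        break
                    pc, pp = pivots[lead]
                    coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                    payload ^= pp
                if lead is not None:                      # innovative packet: keep it
                    pivots[lead] = (coeffs, payload)
                if len(pivots) == k:
                    break
            for col in sorted(pivots, reverse=True):      # back-substitution over GF(2)
                pc, pp = pivots[col]
                for other, (oc, op) in pivots.items():
                    if other != col and oc[col]:
                        pivots[other] = ([a ^ b for a, b in zip(oc, pc)], op ^ pp)
            return [pivots[i][1] for i in range(k)]

        blocks = [0x11, 0x22, 0x33, 0x44]                 # four source blocks (small integers)
        print([hex(b) for b in decode(coded_packets(blocks), len(blocks))])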

  14. Comparação do código de ética médica do Brasil e de 11 países Comparison of the code of medical ethics of Brazil with those of eleven countries

    Directory of Open Access Journals (Sweden)

    Jayme Augusto Rocha Vianna

    2006-12-01

    Full Text Available OBJECTIVE: To compare the Code of Medical Ethics of the Federal Council of Medicine of Brazil with the codes of different countries, in order to improve the understanding of its structure and thereby contribute to the fulfillment of its objectives. METHODS: Eleven countries from the five continents were studied: Argentina, Chile, Canada, the United States, Portugal, the United Kingdom, South Africa, Egypt, China, India and Australia. The information was obtained on the internet, through the websites of regulatory agencies and medical associations. The codes were described and compared with respect to the issuing organization, geographical scope, whether compliance is mandatory, date of elaboration, organization of the guidelines, and auxiliary documents. RESULTS: Of the codes of medical ethics studied, 59% were drawn up by the medical regulatory agency of their country, 92% had national scope, 67% were mandatory for all physicians, and 73% had been last revised after the year 2000. A relationship was observed between the issuing organization and the mandatory nature and geographical scope of the codes. The need for systematic updating of the codes became evident; this is frequently done through auxiliary documents, although such content may be difficult to become aware of. It was also observed that the guidelines can be organized by topic, as short texts for each theme. CONCLUSION: This study offers suggestions regarding the Brazilian Code of Medical Ethics: carry out a revision and update of the code; organize its guidelines so as to include explanations and justifications; and separate out the resolutions of an ethical nature, improving their dissemination.

  15. The JAERI code system for evaluation of BWR ECCS performance

    International Nuclear Information System (INIS)

    Kohsaka, Atsuo; Akimoto, Masayuki; Asahi, Yoshiro; Abe, Kiyoharu; Muramatsu, Ken; Araya, Fumimasa; Sato, Kazuo

    1982-12-01

    Development of separate computer code systems for BWR and PWR ECCS evaluation has been conducted since 1973, considering the differences in the reactor cooling system, core structure and ECCS. The first version of the BWR code system, whose development started earlier than that of the PWR, has been completed. The BWR code system is designed to provide computational tools to analyze all phases of LOCAs and to evaluate the performance of the ECCS, including an "Evaluation Model (EM)" feature in compliance with the requirements of the current Japanese Evaluation Guideline for ECCS. The BWR code system could be used for licensing purposes, i.e. for ECCS performance evaluation or audit calculations to cross-examine the methods and results of applicants or vendors. The BWR code system presented in this report comprises several computer codes, each of which analyzes a particular phase of a LOCA or a system blowdown depending on the range of LOCAs, i.e. large and small breaks in a variety of locations in the reactor system. The system includes ALARM-B1, HYDY-B1 and THYDE-B1 for analysis of the system blowdown for various break sizes, THYDE-B-REFLOOD for analysis of the reflood phase, and SCORCH-B2 for the calculation of the fuel assembly hot plane temperature. When multiple codes are used to analyze a broad range of LOCAs as stated above, it is very important to evaluate the adequacy and consistency of the codes used to cover the entire break spectrum. The system consistency together with the system performance is discussed for a large commercial BWR. (author)

  16. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  17. Computer codes for level 1 probabilistic safety assessment

    International Nuclear Information System (INIS)

    1990-06-01

    Probabilistic Safety Assessment (PSA) entails several laborious tasks suitable for computer code assistance. This guide identifies these tasks, presents guidelines for selecting and utilizing computer codes in the conduct of the PSA tasks and for the use of PSA results in safety management, and provides information on available codes suggested or applied in performing PSA for nuclear power plants. The guidance is intended for use by nuclear power plant system engineers, safety and operating personnel, and regulators. Large efforts are made today to provide PC-based software systems and PSA-processed information in a way that enables their use as a safety management tool by the overall management of the nuclear power plant. Guidelines on the characteristics of software needed by management to prepare software that meets their specific needs are also provided. Most of these computer codes are also applicable to PSA of other industrial facilities. The scope of this document is limited to computer codes used for the treatment of internal events. It does not address other codes available mainly for the analysis of external events (e.g. seismic analysis), flood and fire analysis. Codes discussed in the document are those used for probabilistic rather than for phenomenological modelling. It should also be appreciated that these guidelines are not intended to lead the user to the selection of one specific code; they simply provide criteria for the selection. Refs and tabs

  18. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    African Journals Online (AJOL)

    Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...

  19. TASS code topical report. V.1 TASS code technical manual

    International Nuclear Information System (INIS)

    Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.

    1997-02-01

    The TASS 1.0 code has been developed at KAERI for initial and reload non-LOCA safety analyses of the operating PWRs as well as the PWRs under construction in Korea. The TASS code will replace the various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through keyboard operation. The semi-modular configuration used in developing the TASS code enables the user to easily implement new models. The TASS code has been programmed in FORTRAN77, which makes it easy to install and port to different computer environments. The TASS code can be utilized for steady-state simulation as well as for non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). Malfunctions of the control systems, components and operator actions, and the transients caused by these malfunctions, can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models, including the reactor thermal-hydraulic, reactor core and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and the TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for licensing non-LOCA safety analyses for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs

  20. ENDF/B Pre-Processing Codes: Implementing and testing on a Personal Computer

    International Nuclear Information System (INIS)

    McLaughlin, P.K.

    1987-05-01

    This document describes the contents of the diskettes containing the ENDF/B Pre-Processing codes by D.E. Cullen, and example data for use in implementing and testing these codes on a Personal Computer of the type IBM-PC/AT. Upon request the codes are available from the IAEA Nuclear Data Section, free of charge, on a series of 7 diskettes. (author)

  1. Current and anticipated uses of thermal-hydraulic codes in Germany

    Energy Technology Data Exchange (ETDEWEB)

    Teschendorff, V.; Sommer, F.; Depisch, F.

    1997-07-01

    In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses.

  2. Current and anticipated uses of thermal-hydraulic codes in Germany

    International Nuclear Information System (INIS)

    Teschendorff, V.; Sommer, F.; Depisch, F.

    1997-01-01

    In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses

  3. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  4. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...

  5. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  6. Applying Physical-Layer Network Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Liew SoungChang

    2010-01-01

    Full Text Available A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes, simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). This paper shows that the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to "straightforward" network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. And in doing so, PNC can potentially achieve 100% and 50% throughput increases compared with traditional transmission and straightforward network coding, respectively, in 1D regular linear networks with multiple random flows. The throughput improvements are even larger in 2D regular networks: 200% and 100%, respectively.
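
    The mapping step at the heart of PNC can be illustrated with a deliberately simplified, noise-free BPSK sketch (an assumption made purely for illustration; the paper's analysis concerns superimposed EM waves and realistic channels): two end nodes transmit simultaneously, the relay observes the sum of their BPSK symbols and maps it to the XOR of the two bits, and each end node recovers the other's bit by XORing with its own.

```python
# Toy physical-layer network coding (PNC) exchange over an ideal, noise-free channel.
# BPSK mapping: bit 0 -> -1, bit 1 -> +1. The relay never needs the individual bits.
def bpsk(bit):
    return 1 if bit else -1

def relay_map(superimposed):
    """Map the received sum (-2, 0 or +2) to the XOR of the two transmitted bits."""
    return 0 if abs(superimposed) == 2 else 1    # sum = +-2 <=> equal bits <=> XOR 0

def pnc_exchange(bit_a, bit_b):
    # Slot 1: A and B transmit at the same time; the relay sees the superposition.
    received_at_relay = bpsk(bit_a) + bpsk(bit_b)
    xor_bit = relay_map(received_at_relay)
    # Slot 2: the relay broadcasts the XOR; each end node strips its own bit.
    return xor_bit ^ bit_a, xor_bit ^ bit_b      # (B's bit seen by A, A's bit seen by B)

for a in (0, 1):
    for b in (0, 1):
        assert pnc_exchange(a, b) == (b, a)
```

    With ideal synchronization the two-way exchange completes in two time slots, versus four for traditional relaying and three for straightforward network coding, which is where the 100% and 50% throughput gains quoted above come from.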

  7. Citham-2 computer code-User manual

    International Nuclear Information System (INIS)

    Batista, J.L.

    1984-01-01

    The procedures and the input data for the Citham-2 computer code are described. It is a subroutine that modifies the nuclide concentrations, taking into account their burnup, and prepares a cross-section library in 2, 3 or 4 energy groups to be used by the Citation program. (E.G.) [pt

  8. System code improvements for modelling passive safety systems and their validation

    Energy Technology Data Exchange (ETDEWEB)

    Buchholz, Sebastian; Cron, Daniel von der; Schaffrath, Andreas [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Garching (Germany)

    2016-11-15

    GRS has been developing the system code ATHLET over many years. Because ATHLET, among other codes, is widely used in nuclear licensing and supervisory procedures, it has to represent the current state of science and technology. New reactor concepts such as Generation III+ and IV reactors and SMRs make intensive use of passive safety systems. The simulation of passive safety systems with the GRS system code ATHLET is still a big challenge, because of non-defined operating points and self-setting operating conditions. Additionally, the driving forces of passive safety systems are smaller, and uncertainties in parameters have a larger impact than for active systems. This paper addresses the code validation and qualification work on ATHLET using the example of slightly inclined horizontal heat exchangers, which are used, e.g., as emergency condensers (e.g. in the KERENA and the CAREM) or as heat exchangers in the passive auxiliary feedwater system (PAFS) of the APR+.

  9. Medical reliable network using concatenated channel codes through GSM network.

    Science.gov (United States)

    Ahmed, Emtithal; Kohno, Ryuji

    2013-01-01

    Although the 4th-generation (4G) global mobile communication network, i.e. Long Term Evolution (LTE), coexisting with the 3rd generation (3G), has successfully started, the 2nd generation (2G), i.e. the Global System for Mobile communication (GSM), still plays an important role in many developing countries. Where no other reliable network infrastructure exists, GSM can be applied for tele-monitoring applications in which high mobility and low cost are necessary. A core objective of this paper is to introduce the design of a more reliable and dependable Medical Network Channel Code (MNCC) system operating over the GSM network. The MNCC design is based on a simple concatenated channel code, a cascade of an inner code (GSM) and an additional outer code (convolutional code), in order to protect medical data more robustly against channel errors than other data carried by the existing GSM network. The MNCC system provides a Bit Error Rate (BER) that meets the requirement for medical tele-monitoring of physiological signals, namely 10^-5 or less. The performance of the MNCC has been investigated using computer simulations under different channel conditions, such as Additive White Gaussian Noise (AWGN), Rayleigh noise and burst noise. In general, the MNCC system provides better performance than GSM alone.
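
    The outer code in such a cascade can be an ordinary rate-1/2 convolutional code. The sketch below shows a generic constraint-length-3 encoder with generator polynomials (7, 5) in octal; these parameters are a common textbook choice and are assumed here, since the abstract does not state which convolutional code the MNCC design uses.

```python
# Rate-1/2 convolutional encoder, constraint length 3, generators (7, 5) octal.
# Illustrative outer code for a concatenated scheme; parameters are assumed,
# not taken from the MNCC paper summarized above.
G1, G2 = 0b111, 0b101                            # generator polynomials

def conv_encode(bits):
    """Encode a bit list; returns two output bits per input bit, zero-terminated."""
    state, out = 0, []
    for b in list(bits) + [0, 0]:                # flush the two memory bits
        reg = (b << 2) | state                   # bit 2 = newest bit, bits 1-0 = memory
        out.append(bin(reg & G1).count("1") % 2)
        out.append(bin(reg & G2).count("1") % 2)
        state = reg >> 1                         # shift the newest bit into memory
    return out

print(conv_encode([1, 0, 1, 1]))
```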

  10. Unequal Error Protected JPEG 2000 Broadcast Scheme with Progressive Fountain Codes

    OpenAIRE

    Chen, Zhao; Xu, Mai; Yin, Luiguo; Lu, Jianhua

    2012-01-01

    This paper proposes a novel scheme, based on progressive fountain codes, for broadcasting JPEG 2000 multimedia. In such a broadcast scheme, progressive resolution levels of images/video have been unequally protected when transmitted using the proposed progressive fountain codes. With progressive fountain codes applied in the broadcast scheme, the resolutions of images (JPEG 2000) or videos (MJPEG 2000) received by different users can be automatically adaptive to their channel qualities, i.e. ...

  11. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    Full Text Available In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for code theory than codes over classical finite fields.

  12. Amino acid codes in mitochondria as possible clues to primitive codes

    Science.gov (United States)

    Jukes, T. H.

    1981-01-01

    Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.

  13. Status of the ASTEC integral code

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Jacq, F.; Allelein, H.J.

    2000-01-01

    The ASTEC (Accident Source Term Evaluation Code) integrated code has been developed since 1997 in close collaboration between IPSN and GRS to predict an entire LWR severe accident sequence from the initiating event up to Fission Product (FP) release out of the containment. The applications of such a code are source term determination studies, scenario evaluations, accident management studies and Probabilistic Safety Assessment level 2 (PSA-2) studies. Version V0 of ASTEC is based on the RCS modules of the ESCADRE integrated code (IPSN) and on the upgraded RALOC and FIPLOC codes (GRS) for containment thermal-hydraulics and aerosol behaviour. The latest version, V0.2, includes the general feedback from the overall validation performed in 1998 (25 separate-effect experiments, PHEBUS.FP FPT1 integrated experiment), some modelling improvements (i.e. silver-iodine reactions in the containment sump), and the implementation of the main safety systems for Severe Accident Management. Several reactor applications are under way for French and German PWRs, and for the VVER-1000, all with a multi-compartment configuration of the containment. The total IPSN-GRS manpower involved in the ASTEC project is today about 20 man-years per year. The main evolution of the next version, V1, foreseen for the end of 2001, concerns the integration of the front-end phase and the improvement of the in-vessel late-phase degradation modelling. (author)

  14. Robust Self-Authenticating Network Coding

    Science.gov (United States)

    2008-11-30

    efficient as traditional point-to-point coding schemes. [OCR-garbled fragment omitted: it appears to concern the number of symbols that an intermediate node has to process] ... Institute of Technology. This work was partly supported by the Fundação para a Ciência e Tecnologia (Portuguese Foundation for Science and Technology).

  15. Locations of serial reach targets are coded in multiple reference frames.

    Science.gov (United States)

    Thompson, Aidan A; Henriques, Denise Y P

    2010-12-01

    Previous work from our lab, and elsewhere, has demonstrated that remembered target locations are stored and updated in an eye-fixed reference frame. That is, reach errors systematically vary as a function of gaze direction relative to a remembered target location, not only when the target is viewed in the periphery (Bock, 1986, known as the retinal magnification effect), but also when the target has been foveated, and the eyes subsequently move after the target has disappeared but prior to reaching (e.g., Henriques, Klier, Smith, Lowy, & Crawford, 1998; Sorrento & Henriques, 2008; Thompson & Henriques, 2008). These gaze-dependent errors, following intervening eye movements, cannot be explained by representations whose frame is fixed to the head, body or even the world. However, it is unknown whether targets presented sequentially would all be coded relative to gaze (i.e., egocentrically/absolutely), or if they would be coded relative to the previous target (i.e., allocentrically/relatively). It might be expected that the reaching movements to two targets separated by 5° would differ by that distance. But, if gaze were to shift between the first and second reaches, would the movement amplitude between the targets differ? If the target locations are coded allocentrically (i.e., the location of the second target coded relative to the first) then the movement amplitude should be about 5°. But, if the second target is coded egocentrically (i.e., relative to current gaze direction), then the reaches to this target and the distances between the subsequent movements should vary systematically with gaze as described above. We found that requiring an intervening saccade to the opposite side of 2 briefly presented targets between reaches to them resulted in a pattern of reaching error that systematically varied as a function of the distance between current gaze and target, and led to a systematic change in the distance between the sequential reach endpoints as predicted by

  16. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  17. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Full Text Available Abstract Background Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they are coding for a protein, they generally escape detection by comparative genomics approaches. Results We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like that proposed here are likely to become increasingly powerful at detecting such elements.

  18. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
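
    For readers unfamiliar with the static scheme this work builds on, the sketch below constructs a classical Shannon code (not the paper's dynamic algorithm): a symbol of probability p receives a codeword of length ceil(-log2 p), read off the binary expansion of the cumulative probability of the more probable symbols.

```python
# Classical (static) Shannon coding sketch: codeword length ceil(-log2 p),
# codeword bits taken from the binary expansion of the cumulative probability.
import math

def shannon_code(probs):
    """probs: dict symbol -> probability (summing to 1). Returns symbol -> codeword string."""
    code, cumulative = {}, 0.0
    for sym, p in sorted(probs.items(), key=lambda kv: -kv[1]):   # non-increasing probabilities
        length = math.ceil(-math.log2(p))
        bits, frac = "", cumulative
        for _ in range(length):                  # binary expansion of the cumulative probability
            frac *= 2
            bits += "1" if frac >= 1 else "0"
            frac -= int(frac)
        code[sym] = bits
        cumulative += p
    return code

print(shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'}, a prefix-free code
```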

  19. Visual communication with retinex coding.

    Science.gov (United States)

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  20. Visual Communication with Retinex Coding

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.; Davis, Richard E.; Alter-Gartenberg, Rachel

    2000-04-01

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
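
    A minimal sketch of the difference-of-Gaussian (DoG) bandpass stage that appears in the small-signal model of the two records above is given below; the locally adaptive gain control and the Wiener restoration filter are omitted, and the two Gaussian scales are arbitrary placeholders rather than values from the papers.

```python
# Difference-of-Gaussian (DoG) bandpass filtering, the linear part of the
# small-signal retinex model described above. Sigma values are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bandpass(image, sigma_center=1.0, sigma_surround=4.0):
    """Center-minus-surround response; suppresses slowly varying irradiance (shadows)."""
    img = image.astype(float)
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

# Synthetic scene: a reflectance step seen under a smooth illumination gradient.
x = np.linspace(0.0, 1.0, 256)
illumination = 0.5 + 0.5 * x                     # slowly varying irradiance
reflectance = np.where(x > 0.5, 0.8, 0.4)        # surface detail (an edge)
scene = np.outer(np.ones(256), illumination * reflectance)
edges = dog_bandpass(scene)
print(edges.shape, float(edges.max()))
```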

  1. The Code of the Street and Violent Versus Property Crime Victimization.

    Science.gov (United States)

    McNeeley, Susan; Wilcox, Pamela

    2015-01-01

    Previous research has shown that individuals who adopt values in line with the code of the street are more likely to experience violent victimization (e.g., Stewart, Schreck, & Simons, 2006). This study extends this literature by examining the relationship between the street code and multiple types of violent and property victimization. This research investigates the relationship between street code-related values and 4 types of victimization (assault, breaking and entering, theft, and vandalism) using Poisson-based multilevel regression models. Belief in the street code was associated with higher risk of experiencing assault, breaking and entering, and vandalism, whereas theft victimization was not related to the street code. The results suggest that the code of the street influences victimization broadly--beyond violence--by increasing behavior that provokes retaliation from others in various forms.

  2. Interface code between WIMS-AECL and RFSP-IST for coupling computing

    International Nuclear Information System (INIS)

    Xu Liangwang; Liu Yu; Jia Baoshan

    2007-01-01

    A code based on the Telnet and FTP protocols has been developed in C++ for coupled computing between WIMS-AECL and RFSP-IST. The input documents of WIMS-AECL and RFSP-IST can be generated automatically and submitted to the server, and the output document is downloaded when the computation finishes. A function for analyzing the standard output document is also included in this code. After simple updating, this code can meet the requirements of other codes that use input documents, e.g. CATHENA. In a pilot study of the relation between void fraction and reactivity in TACR, some valuable conclusions have been reached. (authors)

  3. Quality Improvement of MARS Code and Establishment of Code Coupling

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Jeong, Jae Jun; Kim, Kyung Doo

    2010-04-01

    The improvement of MARS code quality and its coupling with the regulatory auditing code have been accomplished to establish a self-reliant, technology-based regulatory auditing system. The unified auditing system code was also realized by implementing CANDU-specific models and correlations. As part of the quality assurance activities, various QA reports were published through the code assessments. The code manuals were updated, and a new manual describing the new models and correlations was published. The code coupling methods were verified through exercises of plant application. Education and training seminars and technology transfer were performed for the code users. The developed MARS-KS is utilized as a reliable auditing tool for resolving safety issues and for other regulatory calculations. The code can also be utilized as a base technology for GEN IV reactor applications

  4. Use of GOTHIC Code for Assessment of Equipment Environmental Qualification

    International Nuclear Information System (INIS)

    Cavlina, N.; Feretic, D.; Grgic, D.; Spalj, S.; Spiler, J.

    1996-01-01

    Environmental qualification (EQ) of equipment important to safety in nuclear power plants ensures its capability to perform its designated safety function on demand under postulated service conditions, including harsh accident environments (e.g. LOCA, HELB). The computer code GOTHIC was used to calculate pressure and temperature profiles inside the NPP Krsko containment during limiting LOCA and MSLB accidents. The results of the new best-estimate containment code are compared to those of the older CONTEMPT code using the same input data and assumptions. The predictions obtained by both codes are very similar. As a result of the calculation, the envelopes of the LOCA and MSLB pressures and temperatures, as used in FSAR/USAR Chapter 6, can be used in the EQ project. (author)

  5. Development of steam explosion simulation code JASMINE

    Energy Technology Data Exchange (ETDEWEB)

    Moriyama, Kiyofumi; Yamano, Norihiro; Maruyama, Yu; Kudo, Tamotsu; Sugimoto, Jun [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Nagano, Katsuhiro; Araki, Kazuhiro

    1995-11-01

    A steam explosion is considered a phenomenon which possibly threatens the integrity of the containment vessel of a nuclear power plant under severe accident conditions. A numerical calculation code, JASMINE (JAeri Simulator for Multiphase INteraction and Explosion), intended to simulate the whole process of steam explosions, has been developed. The premixing model is based on the multiphase flow simulation code MISTRAL by Fuji Research Institute Co. In the JASMINE code, the constitutive equations and the flow regime map are modified for the simulation of premixing-related phenomena. The numerical solution method of the original code is retained, i.e. the basic equations are discretized semi-implicitly, the BCGSTAB method is used as the matrix solver to improve stability and convergence, and a TVD scheme is applied to capture a steep phase distribution accurately. Test calculations have been performed for conditions corresponding to the experiments by Gilbertson et al. and Angelini et al., in which mixing of solid particles and water was observed under isothermal conditions and with boiling, respectively. (author).

  6. Development of steam explosion simulation code JASMINE

    International Nuclear Information System (INIS)

    Moriyama, Kiyofumi; Yamano, Norihiro; Maruyama, Yu; Kudo, Tamotsu; Sugimoto, Jun; Nagano, Katsuhiro; Araki, Kazuhiro.

    1995-11-01

    A steam explosion is considered a phenomenon which possibly threatens the integrity of the containment vessel of a nuclear power plant under severe accident conditions. A numerical calculation code, JASMINE (JAeri Simulator for Multiphase INteraction and Explosion), intended to simulate the whole process of steam explosions, has been developed. The premixing model is based on the multiphase flow simulation code MISTRAL by Fuji Research Institute Co. In the JASMINE code, the constitutive equations and the flow regime map are modified for the simulation of premixing-related phenomena. The numerical solution method of the original code is retained, i.e. the basic equations are discretized semi-implicitly, the BCGSTAB method is used as the matrix solver to improve stability and convergence, and a TVD scheme is applied to capture a steep phase distribution accurately. Test calculations have been performed for conditions corresponding to the experiments by Gilbertson et al. and Angelini et al., in which mixing of solid particles and water was observed under isothermal conditions and with boiling, respectively. (author)

  7. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  8. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  9. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both...... the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding....
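
    As background for the LT codes being redesigned here, the sketch below implements a plain LT encoder with no feedback: each output symbol is the XOR of a random subset of source symbols whose size is drawn from a degree distribution. The distribution used is a simplified placeholder chosen for brevity, not the robust soliton distribution nor the feedback-aware distributions proposed in the paper.

```python
# Plain LT (Luby transform) encoder sketch: each coded symbol XORs a random set
# of source symbols. The degree distribution is a simplified placeholder.
import random

def lt_encode_symbol(source_symbols, degree_dist):
    degrees, weights = zip(*degree_dist)
    d = random.choices(degrees, weights=weights, k=1)[0]
    neighbours = random.sample(range(len(source_symbols)), d)
    value = 0
    for i in neighbours:
        value ^= source_symbols[i]               # XOR of the chosen source symbols
    return neighbours, value                     # the decoder needs both

source = [random.getrandbits(8) for _ in range(10)]
dist = [(1, 0.1), (2, 0.5), (3, 0.3), (4, 0.1)]  # placeholder degree distribution
coded_stream = [lt_encode_symbol(source, dist) for _ in range(15)]
print(coded_stream[0])
```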

  10. THYDE-P2 code: RCS (reactor-coolant system) analysis code

    International Nuclear Information System (INIS)

    Asahi, Yoshiro; Hirano, Masashi; Sato, Kazuo

    1986-12-01

    THYDE-P2, characterized by its new thermal-hydraulic network model, is applicable to the analysis of RCS behavior in response to various disturbances, including the LB (large break) LOCA (loss-of-coolant accident). In LB-LOCA analysis, THYDE-P2 is capable of a through calculation from initiation to complete reflooding of the core without an artificial change in methods and models. The first half of the report describes the methods and models used in the THYDE-P2 code, i.e., (1) the thermal-hydraulic network model, (2) the various RCS component models, (3) the heat sources in the fuel, (4) the heat transfer correlations, (5) the mechanical behavior of clad and fuel, and (6) the steady-state adjustment. The second half of the report is the user's manual for the THYDE-P2 code (version SV04L08A), containing: (1) the program control, (2) the input requirements, (3) the execution of a THYDE-P2 job, (4) the output specifications and (5) a sample problem to demonstrate the capability of the thermal-hydraulic network model, among other things. (author)

  11. Toetsing van fiscaal beleid ten aanzien van taxplanning in Code Tabaksblat

    NARCIS (Netherlands)

    Enden, van der E.

    2006-01-01

    The Tabaksblat Code states in paragraph III.5.4 e) that the audit committee reviews 'the company's policy with regard to tax planning'. The Code does not elaborate further on the notion of 'tax planning'. It seems logical that this covers not only 'tax' within the meaning of IAS 12, taxes

  12. New quantum codes derived from a family of antiprimitive BCH codes

    Science.gov (United States)

    Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin

    The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q^2-ary BCH codes with length n = q^(2m) + 1 (also called antiprimitive BCH codes in the literature), where q ≥ 4 is a power of 2 and m ≥ 2. By a detailed analysis of some useful properties of q^2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q^2-ary primitive BCH codes. Consequently, via the Hermitian construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
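
    The basic objects analyzed in this record, q^2-ary cyclotomic cosets modulo n, are easy to compute directly; the helper below does so for an arbitrary base, and the particular q and m in the example are chosen only to keep the output small (they are not parameters singled out by the paper).

```python
# Compute cyclotomic cosets modulo n for a given base (here base = q^2), the
# objects used when checking dual-containing conditions of BCH codes.
def cyclotomic_cosets(base, n):
    """Partition {0, 1, ..., n-1} into orbits of multiplication by `base` modulo n."""
    remaining, cosets = set(range(n)), []
    while remaining:
        coset, x = [], min(remaining)
        while x not in coset:
            coset.append(x)
            x = (x * base) % n
        cosets.append(sorted(coset))
        remaining -= set(coset)
    return cosets

q, m = 4, 2
n = q ** (2 * m) + 1                             # length of an antiprimitive code in this family
for c in cyclotomic_cosets(q * q, n)[:5]:        # print only the first few cosets
    print(c)
```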

  13. Surface acoustic wave coding for orthogonal frequency coded devices

    Science.gov (United States)

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices, each producing a different OFC signal having the same number of chips and including a chip offset time delay, an algorithm for assigning OFCs to each device, and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.

  14. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Burr Alister

    2009-01-01

    Full Text Available Abstract This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.

  15. Increasing Trend of Fatal Falls in Older Adults in the United States, 1992 to 2005: Coding Practice or Reporting Quality?

    Science.gov (United States)

    Kharrazi, Rebekah J; Nash, Denis; Mielenz, Thelma J

    2015-09-01

    To investigate whether changes in death certificate coding and reporting practices explain part or all of the recent increase in the rate of fatal falls in adults aged 65 and older in the United States. Trends in coding and reporting practices of fatal falls were evaluated under mortality coding schemes for International Classification of Diseases (ICD), Ninth Revision (1992-1998) and Tenth Revision (1999-2005). United States, 1992 to 2005. Individuals aged 65 and older with falls listed as the underlying cause of death (UCD) on their death certificates. The primary outcome was annual fatal falls rates per 100,000 U.S. residents aged 65 and older. Coding practice was assessed through analysis of trends in rates of specific UCD fall ICD e-codes over time. Reporting quality was assessed by examining changes in the location on the death certificate where fall e-codes were reported, in particular, the percentage of fall e-codes recorded in the proper location on the death certificate. Fatal falls rates increased over both time periods: 1992 to 1998 and 1999 to 2005. A single falls e-code was responsible for the increasing trend of fatal falls overall from 1992 to 1998 (E888, other and unspecified fall) and from 1999 to 2005 (W18, other falls on the same level), whereas trends for other falls e-codes remained stable. Reporting quality improved steadily throughout the study period. Better reporting quality, not coding practices, contributed to the increasing rate of fatal falls in older adults in the United States from 1992 to 2005. © 2015, Copyright the Authors Journal compilation © 2015, The American Geriatrics Society.

  16. R.E.S.E.X. A computer simulation program for rare earth separation processes

    International Nuclear Information System (INIS)

    Casarci, M.; Gasparini, G.M.; Sanfilippo, L; Pozio, A.

    1996-01-01

    Lanthanides are most commonly separated using complex solvent extraction circuits. A simulation code called R.E.S.E.X. (Rare Earth Solvent Extraction) has been developed by E.N.E.A.; it is able to simulate a solvent extraction battery of up to 200 stages in different configurations. The combined use of an equilibrium data bank and of a simulation code allows the theoretical study of new rare earth separation processes or the optimisation of existing ones. As an example of this strategy, results are reported for the Pr/Nd separation in 50% TBP in aromatic solvent

  17. Phenomenological optical potentials and optical model computer codes

    International Nuclear Information System (INIS)

    Prince, A.

    1980-01-01

    An introduction to the Optical Model is presented. Starting with the purpose and nature of the physical problems to be analyzed, a general formulation and the various phenomenological methods of solution are discussed. This includes the calculation of observables based on assumed potentials, both local and non-local, and their forms, e.g. Woods-Saxon, folded model, etc. Also discussed are the various calculational methods and model codes employed to describe nuclear reactions in the spherical and deformed regions (e.g. coupled-channel analysis). An examination of the numerical solutions and minimization techniques associated with the various codes is briefly touched upon. Several computer programs for carrying out the calculations are described. The preparation of input (formats and options), the determination of model parameters and the analysis of output are described. The class is given a series of problems to carry out using the available computer. Interpretation and evaluation of the samples includes the effect of varying parameters and comparison of calculations with the experimental data. Also included is an intercomparison of the results from the various model codes, along with their advantages and limitations. (author)
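
    As a concrete instance of one of the phenomenological forms mentioned, the sketch below evaluates a Woods-Saxon potential well, V(r) = -V0 / (1 + exp((r - R)/a)) with R = r0 * A^(1/3); the depth, radius parameter and diffuseness are typical textbook values, not parameters taken from these lectures.

```python
# Woods-Saxon form factor, a standard phenomenological optical-model potential shape.
# Parameter values below are typical illustrative choices, not values from the text.
import math

def woods_saxon(r, V0=50.0, r0=1.25, a=0.65, A=40):
    """Real central Woods-Saxon potential in MeV; r in fm, nucleus of mass number A."""
    R = r0 * A ** (1.0 / 3.0)                    # nuclear radius in fm
    return -V0 / (1.0 + math.exp((r - R) / a))

for r in (0.0, 2.0, 4.0, 6.0, 8.0):
    print(f"r = {r:4.1f} fm   V = {woods_saxon(r):7.2f} MeV")
```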

  18. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...

  19. Special issue on network coding

    Science.gov (United States)

    Monteiro, Francisco A.; Burr, Alister; Chatzigeorgiou, Ioannis; Hollanti, Camilla; Krikidis, Ioannis; Seferoglu, Hulya; Skachek, Vitaly

    2017-12-01

    Future networks are expected to depart from traditional routing schemes in order to embrace network coding (NC)-based schemes. These have created a lot of interest both in academia and industry in recent years. Under the NC paradigm, symbols are transported through the network by combining several information streams originating from the same or different sources. This special issue contains thirteen papers, some dealing with design aspects of NC and related concepts (e.g., fountain codes) and some showcasing the application of NC to new services and technologies, such as data multi-view streaming of video or underwater sensor networks. One can find papers that show how NC turns data transmission more robust to packet losses, faster to decode, and more resilient to network changes, such as dynamic topologies and different user options, and how NC can improve the overall throughput. This issue also includes papers showing that NC principles can be used at different layers of the networks (including the physical layer) and how the same fundamental principles can lead to new distributed storage systems. Some of the papers in this issue have a theoretical nature, including code design, while others describe hardware testbeds and prototypes.

  20. B2-B2.5 code benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Dekeyser, W.; Baelmans, M; Voskoboynikov, S.; Rozhansky, V.; Reiter, D.; Wiesen, S.; Kotov, V.; Boerner, P.

    2011-01-15

    ITER-IO currently (and for about 15 years) employs the SOLPS4.xxx code for its divertor design, currently version SOLPS4.3. SOLPS4.xxx is a special variant of the B2-EIRENE code, which was originally developed by a European consortium (FZ Juelich, AEA Culham, ERM Belgium/KU Leuven) in the late eighties and early nineties of the last century under NET contracts. Even today, the very similar edge plasma codes within the SOLPS family, if run on a seemingly identical choice of physical parameters, still sometimes disagree significantly with each other. It is obvious that in computational engineering applications, as they have been carried out for the various ITER divertor aspects with SOLPS4.3 for more than a decade now, any transition from one code to another must be fully backward compatible, or, at least, the origin of differences in the results must be identified and fully understood quantitatively. In this report we document efforts undertaken in 2010 to ultimately eliminate the third issue. For the kinetic EIRENE part within SOLPS this backward compatibility (back to 1996) was basically achieved (V. Kotov, 2004-2006), and SOLPS4.3 is now essentially up to date with the current EIRENE master maintained at FZ Juelich. In order to achieve a similar level of reproducibility for the plasma fluid (B2, B2.5) part, we follow a similar strategy, which is quite distinct from previous SOLPS benchmark attempts: the codes are "disintegrated" and pieces of them are run on the smallest (i.e. simplest) problems. Only after full quantitative understanding is achieved is the code model enlarged and integrated again, piece by piece, until, hopefully, a fully backward compatible B2/B2.5 ITER edge plasma simulation is achieved. The status of this code dis-integration effort and its findings to date (Nov. 2010) are documented in the present technical note. This work was initiated in a small workshop by the three partner teams of KU Leuven, St. Petersburg

  1. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Lei Ye

    2009-01-01

    Full Text Available This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
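
    A per-subband adaptive scheme of this kind ultimately reduces to mapping an estimated subband SNR onto one of the available modulation/turbo-rate pairs. The sketch below shows such a lookup; the SNR thresholds are entirely illustrative placeholders and are not the switching levels or target BERs used in the paper.

```python
# Per-subband adaptive modulation and coding (AMC) selection sketch.
# SNR thresholds are arbitrary placeholders, not the paper's switching levels.
MODES = [  # (min SNR in dB, modulation, turbo code rate, information bits per symbol)
    ( 3.0, "BPSK",  "1/3", 1 / 3),
    ( 6.0, "QPSK",  "1/2", 1.0),
    (10.0, "8AMPM", "1/2", 1.5),
    (14.0, "16QAM", "1/2", 2.0),
    (19.0, "64QAM", "1/2", 3.0),
]

def select_mode(snr_db):
    """Pick the highest-throughput mode whose SNR threshold is met; None means no transmission."""
    chosen = None
    for threshold, modulation, rate, bits in MODES:   # MODES is sorted by threshold
        if snr_db >= threshold:
            chosen = (modulation, rate, bits)
    return chosen

for snr in (2.1, 7.4, 12.9, 21.0):                    # example per-subband SNR estimates
    print(snr, "dB ->", select_mode(snr))
```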

  2. Quantum Codes From Cyclic Codes Over The Ring R 2

    International Nuclear Information System (INIS)

    Altinel, Alev; Güzeltepe, Murat

    2016-01-01

    Let R_2 denote the ring F_2 + μF_2 + υF_2 + μυF_2 + wF_2 + μwF_2 + υwF_2 + μυwF_2. In this study, we construct quantum codes from cyclic codes over the ring R_2, for arbitrary length n, under the restrictions μ² = 0, υ² = 0, w² = 0, μυ = υμ, μw = wμ, υw = wυ and μ(υw) = (μυ)w. We also give a necessary and sufficient condition for a cyclic code over R_2 to contain its dual. Finally, we obtain the parameters of quantum error-correcting codes derived from cyclic codes over R_2 and give an example of quantum error-correcting codes from cyclic codes over R_2. (paper)

  3. Construction of Quasi-Cyclic LDPC Codes Based on Fundamental Theorem of Arithmetic

    Directory of Open Access Journals (Sweden)

    Hai Zhu

    2018-01-01

    Full Text Available Quasi-cyclic (QC) LDPC codes play an important role in 5G communications and have been chosen as the standard codes for the 5G enhanced mobile broadband (eMBB) data channel. In this paper, we study the construction of QC LDPC codes based on an arbitrarily given expansion factor (or lifting degree). First, we analyze the cycle structure of QC LDPC codes and give the necessary and sufficient condition for the existence of short cycles. Based on the fundamental theorem of arithmetic in number theory, we divide the integer factorization into three cases and present three classes of QC LDPC codes accordingly. Furthermore, a general construction method of QC LDPC codes with girth of at least 6 is proposed. Numerical results show that the constructed QC LDPC codes perform well over the AWGN channel when decoded with the iterative algorithms.
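
    QC LDPC codes of the kind studied here are commonly specified by an exponent (base) matrix whose entries are lifted to Z x Z circulant permutation matrices or all-zero blocks. The sketch below performs that lifting for a toy exponent matrix; the base matrix and lifting degree are invented for illustration and are not the constructions proposed in the paper.

```python
# Expand a QC-LDPC exponent matrix into a binary parity-check matrix H.
# Entry -1 -> Z x Z all-zero block; entry s >= 0 -> identity cyclically shifted by s.
# Exponent matrix and lifting degree Z below are illustrative only.
import numpy as np

def expand_qc_ldpc(exponent_matrix, Z):
    rows, cols = len(exponent_matrix), len(exponent_matrix[0])
    H = np.zeros((rows * Z, cols * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i, row in enumerate(exponent_matrix):
        for j, s in enumerate(row):
            if s >= 0:
                H[i * Z:(i + 1) * Z, j * Z:(j + 1) * Z] = np.roll(I, s, axis=1)
    return H

B = [[0, 1, -1, 2],                              # toy exponent (base) matrix
     [2, -1, 0, 1]]
H = expand_qc_ldpc(B, Z=5)
print(H.shape)                                   # (10, 20)
print(int(H[0].sum()))                           # weight of the first row (here 3)
```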

  4. Verification study of the FORE-2M nuclear/thermal-hydraulilc analysis computer code

    International Nuclear Information System (INIS)

    Coffield, R.D.; Tang, Y.S.; Markley, R.A.

    1982-01-01

    The verification of the LMFBR core transient performance code, FORE-2M, was performed in two steps. Different components of the computation (individual models) were verified by comparing with analytical solutions and with results obtained from other conventionally accepted computer codes (e.g., TRUMP, LIFE, etc.). For verification of the integral computation method of the code, experimental data in TREAT, SEFOR and natural circulation experiments in EBR-II were compared with the code calculations. Good agreement was obtained for both of these steps. Confirmation of the code verification for undercooling transients is provided by comparisons with the recent FFTF natural circulation experiments. (orig.)

  5. A New Prime Code for Synchronous Optical Code Division Multiple-Access Networks

    Directory of Open Access Journals (Sweden)

    Huda Saleh Abbas

    2018-01-01

    Full Text Available A new spreading code based on a prime code for synchronous optical code-division multiple-access networks that can be used in monitoring applications has been proposed. The new code is referred to as “extended grouped new modified prime code.” This new code has the ability to support more terminal devices than other prime codes. In addition, it patches subsequences with “0s” leading to lower power consumption. The proposed code has an improved cross-correlation resulting in enhanced BER performance. The code construction and parameters are provided. The operating performance, using incoherent on-off keying modulation and incoherent pulse position modulation systems, has been analyzed. The performance of the code was compared with other prime codes. The results demonstrate an improved performance, and a BER floor of 10−9 was achieved.
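
    For context, the classical prime code family that such designs extend can be generated directly: for a prime p, codeword i (0 ≤ i < p) consists of p blocks of length p with a single "1" per block, placed at position (i·j) mod p in block j. The sketch below builds only this basic family; it does not reproduce the extended grouped new modified prime code proposed in the paper.

```python
# Basic prime-code construction over GF(p): p codewords of length p*p, one pulse
# per block at position (i*j) mod p. This is the classical family, not the
# "extended grouped new modified prime code" of the paper above.
def prime_code_family(p):
    family = []
    for i in range(p):
        codeword = [0] * (p * p)
        for j in range(p):                       # block index
            codeword[j * p + (i * j) % p] = 1    # single pulse per block
        family.append(codeword)
    return family

for cw in prime_code_family(5):
    print("".join(map(str, cw)))
```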

  6. Understanding Mixed Code and Classroom Code-Switching: Myths and Realities

    Science.gov (United States)

    Li, David C. S.

    2008-01-01

    Background: Cantonese-English mixed code is ubiquitous in Hong Kong society, and yet using mixed code is widely perceived as improper. This paper presents evidence of mixed code being socially constructed as bad language behavior. In the education domain, an EDB guideline bans mixed code in the classroom. Teachers are encouraged to stick to…

  7. Comparative study of Thermal Hydraulic Analysis Codes for Pressurized Water Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yang Hoon; Jang, Mi Suk; Han, Kee Soo [Nuclear Engineering Service and Solution Co. Ltd., Daejeon (Korea, Republic of)

    2015-05-15

    Various codes are used for the thermal-hydraulic analysis of nuclear reactors. The use of some of these codes is restricted to particular users, and some are not available to the general public, so alternative codes are considered for certain analyses. In this study, simple thermal-hydraulic behaviors are analyzed using three codes to show that alternative codes are feasible for the analysis of nuclear reactors. We established three models of a simple u-tube manometer using three different codes: RELAP5 (Reactor Excursion and Leak Analysis Program), SPACE (Safety and Performance Analysis CodE for nuclear power Plants), and GOTHIC (Generation of Thermal Hydraulic Information for Containments). RELAP5 is widely used for the analysis of the system behavior of PWRs. SPACE has been developed based on RELAP5 for the analysis of the system behavior of PWRs, and licensing of the code is in progress. The GOTHIC code has also been widely used for the analysis of thermal-hydraulic behavior in the containment system. The internal behavior of the u-tube manometer was analyzed with the RELAP5, SPACE and GOTHIC codes. The general transient behavior was similar among the three codes. However, the stabilized state at the end of the transient period calculated by RELAP5 differed from that of the other codes. This likely results from the different physical models used in the other codes, which are specialized for multi-phase thermal-hydraulic behavior analysis.
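    As a rough analytical cross-check for the kind of manometer problem described above (not part of the study itself), the undamped U-tube oscillation period follows from a simple force balance; the liquid column length below is an assumed illustrative value.

```python
import numpy as np

# Minimal sketch (assumed values, not from the paper): undamped U-tube manometer
# oscillation. For a liquid column of total length L displaced by z, a force balance
# gives d2z/dt2 + (2*g/L)*z = 0, so the natural period is T = 2*pi*sqrt(L/(2*g)).
g = 9.81   # m/s^2
L = 2.0    # m, assumed total liquid column length
T = 2 * np.pi * np.sqrt(L / (2 * g))
print(f"natural period ~ {T:.2f} s")  # ~2.0 s for L = 2 m
```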

  8. Development of a coupled code system based on system transient code, RETRAN, and 3-D neutronics code, MASTER

    International Nuclear Information System (INIS)

    Kim, K. D.; Jung, J. J.; Lee, S. W.; Cho, B. O.; Ji, S. K.; Kim, Y. H.; Seong, C. K.

    2002-01-01

    A coupled code system, RETRAN/MASTER, has been developed for best-estimate simulations of the interactions between reactor core neutron kinetics and plant thermal-hydraulics by incorporating the 3-D reactor core kinetics analysis code MASTER into the system transient code RETRAN. The soundness of the consolidated code system is confirmed by simulating the MSLB benchmark problem, which was developed by OECD/NEA to verify the performance of coupled kinetics and system transient codes.

  9. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  10. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    Energy Technology Data Exchange (ETDEWEB)

    Zehtabian, M; Zaker, N; Sina, S [Shiraz University, Shiraz, Fars (Iran, Islamic Republic of); Meigooni, A Soleimani [Comprehensive Cancer Center of Nevada, Las Vegas, Nevada (United States)

    2015-06-15

    Purpose: Different versions of MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP codes in dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137 were calculated in water phantom. The results obtained by three versions of Monte Carlo codes (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of MCNP4C code was changed to ENDF/B-VI release 8 which is used in MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code, were compared with other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at the distance of 6cm were found to be about 17% and 28% for I-125 and Pd-103 respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6cm. Conclusion: The results indicate that using MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources, unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX is their cross section libraries.
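    For reference, the TG-43 parameters compared above enter the dose-rate calculation through the standard AAPM TG-43 formalism (quoted here from the general literature, not from the abstract):

```latex
\dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\qquad r_0 = 1\,\mathrm{cm},\ \theta_0 = 90^{\circ},
```

    where S_K is the air-kerma strength, Λ the dose rate constant, G_L the line-source geometry function, g_L(r) the radial dose function and F(r,θ) the anisotropy function.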

  11. Over 10 dB Net Coding Gain Based on 20% Overhead Hard Decision Forward Error Correction in 100G Optical Communication Systems

    DEFF Research Database (Denmark)

    Li, Bomin; Larsen, Knud J.; Zibar, Darko

    2011-01-01

    We propose a product code with shortened BCH component codes for 100G optical communication systems. Simulation results show that a 10 dB net coding gain is promising at a post-FEC BER of 1E-15.
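    A back-of-the-envelope check of the quoted figure, assuming the usual Q-factor/BER relation BER = 0.5·erfc(Q/√2), a 20% overhead (rate 1/1.2), and an assumed pre-FEC BER threshold of 1.1E-2; the threshold is an assumption for illustration, not a number from the paper.

```python
from math import log10, sqrt
from scipy.special import erfcinv

def q_db(ber):
    """Q-factor (in dB) for the relation BER = 0.5*erfc(Q/sqrt(2))."""
    return 20 * log10(sqrt(2) * erfcinv(2 * ber))

R = 1 / 1.2                                   # code rate for 20% overhead
ncg = q_db(1e-15) - q_db(1.1e-2) + 10 * log10(R)
print(f"net coding gain ~ {ncg:.1f} dB")      # ~10 dB, consistent with the figure above
```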

  12. Validation of the reactor dynamics code TRAB

    International Nuclear Information System (INIS)

    Raety, H.; Kyrki-Rajamaeki, R.; Rajamaeki, M.

    1991-05-01

    The one-dimensional reactor dynamics code TRAB (Transient Analysis code for BWRs) developed at VTT was originally designed for BWR analyses, but in its present version it can be used for various modelling purposes. The core model of TRAB can be used separately for LWR calculations. For PWR modelling the core model of TRAB has been coupled to the circuit model SMABRE to form the SMATRA code. The versatile modelling capabilities of TRAB have also been utilized in analyses of e.g. the heating reactor SECURE and the RBMK-type reactor (Chernobyl). The report summarizes the extensive validation of TRAB. TRAB has been validated with benchmark problems, comparative calculations against independent analyses, analyses of start-up experiments of nuclear power plants and real plant transients. Comparative RBMK-type reactor calculations have been made against Soviet simulations, and the initial power excursion of the Chernobyl reactor accident has also been calculated with TRAB.

  13. Sparsity in Linear Predictive Coding of Speech

    DEFF Research Database (Denmark)

    Giacobello, Daniele

    of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed...... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to develop the concept...... sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given...
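    As context for the sparsity-based methods summarised above, the sketch below shows the classical least-squares linear-prediction analysis that the thesis modifies; the 1-norm (sparse) formulations themselves are not reproduced here, and the test signal is an invented stand-in for a speech frame.

```python
import numpy as np

def lp_analysis(x, order):
    """Classical least-squares linear prediction: find a_k minimizing
    || x[n] - sum_k a_k x[n-k] ||_2 over the frame. The thesis replaces this
    2-norm criterion (and the excitation search) with sparsity-promoting
    1-norm formulations; this sketch is only the conventional baseline."""
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ a            # prediction error ("excitation")
    return a, residual

# Toy frame standing in for a voiced speech segment.
n = np.arange(256)
frame = np.sin(0.3 * n) * 0.99 ** n
a, e = lp_analysis(frame, order=10)
print(np.linalg.norm(e) / np.linalg.norm(frame[10:]))  # << 1: the predictor fits well
```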

  14. Development of a general coupling interface for the fuel performance code TRANSURANUS – Tested with the reactor dynamics code DYN3D

    International Nuclear Information System (INIS)

    Holt, L.; Rohde, U.; Seidl, M.; Schubert, A.; Van Uffelen, P.; Macián-Juan, R.

    2015-01-01

    Highlights: • A general coupling interface was developed for couplings of the TRANSURANUS code. • With this new tool simplified fuel behavior models in codes can be replaced. • Applicable e.g. for several reactor types and from normal operation up to DBA. • The general coupling interface was applied to the reactor dynamics code DYN3D. • The new coupled code system DYN3D–TRANSURANUS was successfully tested for RIA. - Abstract: A general interface is presented for coupling the TRANSURANUS fuel performance code with thermal hydraulics system, sub-channel thermal hydraulics, computational fluid dynamics (CFD) or reactor dynamics codes. As a first application, the reactor dynamics code DYN3D was coupled at assembly level in order to describe the fuel behavior in more detail. In the coupling, DYN3D provides process time, time-dependent rod power and thermal hydraulics conditions to TRANSURANUS, which, in the case of the two-way coupling approach, transfers parameters like fuel temperature and cladding temperature back to DYN3D. Results of the coupled code system are presented for the reactivity transient scenario, initiated by control rod ejection. More precisely, the two-way coupling approach systematically calculates higher maximum values for the node fuel enthalpy. These differences can be explained by the greater detail in fuel behavior modeling. The numerical performance of DYN3D–TRANSURANUS proved to be fast and stable. The coupled code system can therefore improve the assessment of safety criteria, at a reasonable computational cost

  15. Some Families of Asymmetric Quantum MDS Codes Constructed from Constacyclic Codes

    Science.gov (United States)

    Huang, Yuanyuan; Chen, Jianzhang; Feng, Chunhui; Chen, Riqing

    2018-02-01

    Quantum maximal-distance-separable (MDS) codes with different lengths that satisfy the quantum Singleton bound have been constructed by some researchers. In this paper, seven families of asymmetric quantum MDS codes are constructed by using constacyclic codes. We weaken the case of Hermitian-dual containing codes that can be applied to construct asymmetric quantum MDS codes with parameters [[n, k, d_z/d_x]]…
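    The quantum Singleton bound referred to above, in its symmetric and asymmetric forms (standard results quoted for context, not derived in the abstract), reads

```latex
k \;\le\; n - 2d + 2
\qquad\text{and, for asymmetric codes } [[n,k,d_z/d_x]]_q,\qquad
k \;\le\; n - d_z - d_x + 2 ,
```

    with equality defining the (asymmetric) quantum MDS property.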

  16. Theoretical Atomic Physics code development II: ACE: Another collisional excitation code

    International Nuclear Information System (INIS)

    Clark, R.E.H.; Abdallah, J. Jr.; Csanak, G.; Mann, J.B.; Cowan, R.D.

    1988-12-01

    A new computer code for calculating collisional excitation data (collision strengths or cross sections) using a variety of models is described. The code uses data generated by the Cowan Atomic Structure code or CATS for the atomic structure. Collisional data are placed on a random access file and can be displayed in a variety of formats using the Theoretical Atomic Physics Code or TAPS. All of these codes are part of the Theoretical Atomic Physics code development effort at Los Alamos. 15 refs., 10 figs., 1 tab

  17. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  18. Strong normalization by type-directed partial evaluation and run-time code generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1998-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability...... to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We...... conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language....

  19. Strong Normalization by Type-Directed Partial Evaluation and Run-Time Code Generation

    DEFF Research Database (Denmark)

    Balat, Vincent; Danvy, Olivier

    1997-01-01

    We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability...... to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We...... conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language....

  20. Development of burnup methods and capabilities in Monte Carlo code RMC

    International Nuclear Information System (INIS)

    She, Ding; Liu, Yuxuan; Wang, Kan; Yu, Ganglin; Forget, Benoit; Romano, Paul K.; Smith, Kord

    2013-01-01

    Highlights: ► The RMC code has been developed aiming at large-scale burnup calculations. ► Matrix exponential methods are employed to solve the depletion equations. ► The Energy-Bin method reduces the time expense of treating ACE libraries. ► The Cell-Mapping method is efficient to handle massive amounts of tally cells. ► Parallelized depletion is necessary for massive amounts of burnup regions. -- Abstract: The Monte Carlo burnup calculation has always been a challenging problem because of its large time consumption when applied to full-scale assembly or core calculations, and thus its application in routine analysis is limited. Most existing MC burnup codes are usually external wrappers between a MC code, e.g. MCNP, and a depletion code, e.g. ORIGEN. The code RMC is a newly developed MC code with an embedded depletion module aimed at performing burnup calculations of large-scale problems with high efficiency. Several measures have been taken to strengthen the burnup capabilities of RMC. Firstly, an accurate and efficient depletion module called DEPTH has been developed and built in, which employs the rational approximation and polynomial approximation methods. Secondly, the Energy-Bin method and the Cell-Mapping method are implemented to speed up the transport calculations with large numbers of nuclides and tally cells. Thirdly, the batch tally method and the parallelized depletion module have been utilized to better handle cases with massive amounts of burnup regions in parallel calculations. Burnup cases including a PWR pin and a 5 × 5 assembly group are calculated, thereby demonstrating the burnup capabilities of the RMC code. In addition, the computational time and memory requirements of RMC are compared with other MC burnup codes.
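    To illustrate the matrix-exponential depletion step mentioned in the highlights, the generic sketch below solves dN/dt = A·N over one burnup step; it is not RMC's DEPTH module, and the two-nuclide chain and rates are invented.

```python
import numpy as np
from scipy.linalg import expm

# Toy two-nuclide chain: nuclide 0 is destroyed at rate lam0 and feeds nuclide 1,
# which is removed at rate lam1 (rates in 1/s are illustrative only).
lam0, lam1 = 1e-5, 3e-6
A = np.array([[-lam0,   0.0],
              [ lam0, -lam1]])
N0 = np.array([1.0e24, 0.0])       # initial atom densities
t = 30 * 24 * 3600.0               # one 30-day burnup step at constant flux
N = expm(A * t) @ N0               # N(t) = expm(A*t) @ N(0)
print(N)
```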

  1. BAR-MOM code and its application

    International Nuclear Information System (INIS)

    Wang Shunuan

    2002-01-01

    The BAR-MOM code calculates the height of the fission barrier Bf and the energy of the ground state. The limit of compound-nucleus stability with respect to fission, i.e. the angular momentum (spin value) Lmax at which the fission barrier disappears, and the three principal-axis moments of inertia at the saddle point are computed for a nucleus with atomic number Z, atomic mass number A and angular momentum L in units of ℏ, for 19 < Z < 102; the underlying model is introduced briefly. The generalization of the BAR-MOM code to Z ≥ 102, which uses a more recent parameterization of the Thomas-Fermi fission barrier, is also introduced briefly. We studied the models used in the BAR-MOM code and ran it successfully and correctly for nuclei specified by A, Z and L on a PC with Fortran 90. Test calculations performed to check the implementation show that the results of the present work are in good agreement with the original ones.

  2. Methods and computer codes for probabilistic sensitivity and uncertainty analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1985-01-01

    This paper describes the methods and application experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify the group of most important input variables of a code that has many (tens, hundreds) input variables with uncertainties, and to do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other, e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has first been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables
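    A minimal sketch of the kind of importance screening SCREEN automates, here done by rank correlation between sampled inputs and a toy response; the response function, sample size and ranking criterion are illustrative assumptions, not SCREEN's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 500
X = rng.normal(size=(n, 5))                     # 5 uncertain inputs, sampled randomly
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] ** 2 \
    + 0.1 * rng.normal(size=n)                  # invented "code output"

def rank(v):
    """Rank-transform a vector (0 = smallest)."""
    return np.argsort(np.argsort(v))

importance = [abs(np.corrcoef(rank(X[:, j]), rank(y))[0, 1]) for j in range(5)]
print(np.argsort(importance)[::-1])             # input 0 should rank first
```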

  3. TACO: a finite element heat transfer code

    International Nuclear Information System (INIS)

    Mason, W.E. Jr.

    1980-02-01

    TACO is a two-dimensional implicit finite element code for heat transfer analysis. It can perform both linear and nonlinear analyses and can be used to solve either transient or steady state problems. Either plane or axisymmetric geometries can be analyzed. TACO has the capability to handle time or temperature dependent material properties and materials may be either isotropic or orthotropic. A variety of time and temperature dependent loadings and boundary conditions are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additionally, TACO has some specialized features such as internal surface conditions (e.g., contact resistance), bulk nodes, enclosure radiation with view factor calculations, and chemical reactive kinetics. A user subprogram feature allows for any type of functional representation of any independent variable. A bandwidth and profile minimization option is also available in the code. Graphical representation of data generated by TACO is provided by a companion post-processor named POSTACO. The theory on which TACO is based is outlined, the capabilities of the code are explained, the input data required to perform an analysis with TACO are described. Some simple examples are provided to illustrate the use of the code
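    For orientation only, the sketch below solves the simplest transient conduction problem of the type TACO addresses, using an explicit finite-difference scheme rather than TACO's implicit finite elements; all material data and boundary values are assumed.

```python
import numpy as np

alpha, L, nx = 1e-5, 0.1, 21          # diffusivity (m^2/s), slab thickness (m), nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha              # stable explicit time step
T = np.full(nx, 20.0)                 # initial temperature (deg C)
T[0], T[-1] = 100.0, 20.0             # fixed-temperature boundaries
for _ in range(2000):                 # march the 1-D heat equation forward in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
print(T[nx // 2])                     # ~60 C: close to the linear steady-state profile
```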

  4. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  5. Error floor behavior study of LDPC codes for concatenated codes design

    Science.gov (United States)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using quantized sum-product (SP) algorithm. Therefore, LDPC code may serve as the inner code in a concatenated coding system with a high code rate outer code and thus an ultra low error floor can be achieved. This conclusion is also verified by the experimental results.

  6. Development of a new EMP code at LANL

    Science.gov (United States)

    Colman, J. J.; Roussel-Dupré, R. A.; Symbalisty, E. M.; Triplett, L. A.; Travis, B. J.

    2006-05-01

    A new code for modeling the generation of an electromagnetic pulse (EMP) by a nuclear explosion in the atmosphere is being developed. The source of the EMP is the Compton current produced by the prompt radiation (γ-rays, X-rays, and neutrons) of the detonation. As a first step in building a multi-dimensional EMP code we have written three kinetic codes, Plume, Swarm, and Rad. Plume models the transport of energetic electrons in air. The Plume code solves the relativistic Fokker-Planck equation over a specified energy range that can include ~ 3 keV to 50 MeV and computes the resulting electron distribution function at each cell in a two dimensional spatial grid. The energetic electrons are allowed to transport, scatter, and experience Coulombic drag. Swarm models the transport of lower energy electrons in air, spanning 0.005 eV to 30 keV. The Swarm code performs a full 2-D solution to the Boltzmann equation for electrons in the presence of an applied electric field. Over this energy range the relevant processes to be tracked are elastic scattering, three body attachment, two body attachment, rotational excitation, vibrational excitation, electronic excitation, and ionization. All of these occur due to collisions between the electrons and neutral bodies in air. The Rad code solves the full radiation transfer equation in the energy range of 1 keV to 100 MeV. It includes effects of photo-absorption, Compton scattering, and pair-production. All of these codes employ a spherical coordinate system in momentum space and a cylindrical coordinate system in configuration space. The "z" axes of the momentum and configuration spaces are assumed to be parallel and we are currently also assuming complete spatial symmetry around the "z" axis. Benchmarking for each of these codes will be discussed as well as the way forward towards an integrated modern EMP code.

  7. Poisson/Superfish codes for personal computers

    International Nuclear Information System (INIS)

    Humphries, S.

    1992-01-01

    The Poisson/Superfish codes calculate static E or B fields in two-dimensions and electromagnetic fields in resonant structures. New versions for 386/486 PCs and Macintosh computers have capabilities that exceed the mainframe versions. Notable improvements are interactive graphical post-processors, improved field calculation routines, and a new program for charged particle orbit tracking. (author). 4 refs., 1 tab., figs

  8. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated using the example of the WIMSD code, which belongs to the most popular tools for reactor calculations. Most of the approaches discussed here can easily be adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  9. Towards high dynamic range extensions of HEVC: subjective evaluation of potential coding technologies

    Science.gov (United States)

    Hanhart, Philippe; Řeřábek, Martin; Ebrahimi, Touradj

    2015-09-01

    This paper reports the details and results of the subjective evaluations conducted at EPFL to evaluate the responses to the Call for Evidence (CfE) for High Dynamic Range (HDR) and Wide Color Gamut (WCG) Video Coding issued by the Moving Picture Experts Group (MPEG). The CfE on HDR/WCG Video Coding aims to explore whether the coding efficiency and/or the functionality of the current version of the HEVC standard can be significantly improved for HDR and WCG content. In total, nine submissions, five for Category 1 and four for Category 3a, were compared to the HEVC Main 10 Profile based Anchor. More particularly, five HDR video contents, compressed at four bit rates by each proponent responding to the CfE, were used in the subjective evaluations. Further, the side-by-side presentation methodology was used in the subjective experiment to discriminate small differences between the Anchor and proponents. The subjective results show that the proposals provide evidence that the coding efficiency can be improved in a statistically noticeable way over the MPEG CfE Anchors in terms of perceived quality within the investigated content. The paper further benchmarks the selected objective metrics based on their correlations with the subjective ratings. It is shown that PSNR-DE1000, HDR-VDP-2, and PSNR-Lx can reliably detect visible differences between the proposed encoding solutions and the current HEVC standard.

  10. CodeArmor : Virtualizing the Code Space to Counter Disclosure Attacks

    NARCIS (Netherlands)

    Chen, Xi; Bos, Herbert; Giuffrida, Cristiano

    2017-01-01

    Code diversification is an effective strategy to prevent modern code-reuse exploits. Unfortunately, diversification techniques are inherently vulnerable to information disclosure. Recent diversification-aware ROP exploits have demonstrated that code disclosure attacks are a realistic threat, with an

  11. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    Science.gov (United States)

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street Press, comprising a large collection of transcripts of patient-provider conversations, to compare the coding performance of two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as that of human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
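    The lasso (L1-regularised) logistic-regression baseline mentioned above can be sketched as follows on invented toy data; the L-LDA model itself and the Alexander Street corpus are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for the lasso logistic-regression baseline; transcripts and labels
# below are invented for illustration only.
sessions = ["i feel sad and hopeless lately",
            "we argued about money again",
            "sleep has been difficult and i worry a lot",
            "things at work are going well this week"]
has_symptom_code = [1, 0, 1, 0]        # hypothetical session-level code labels

model = make_pipeline(TfidfVectorizer(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=1.0))
model.fit(sessions, has_symptom_code)
print(model.predict(["i worry constantly and feel hopeless"]))  # predicted label
```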

  12. An Evaluation of Automated Code Generation with the PetriCode Approach

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    Automated code generation is an important element of model driven development methodologies. We have previously proposed an approach for code generation based on Coloured Petri Net models annotated with textual pragmatics for the network protocol domain. In this paper, we present and evaluate three important properties of our approach: platform independence, code integratability, and code readability. The evaluation shows that our approach can generate code for a wide range of platforms which is integratable and readable.

  13. Real-time minimal-bit-error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
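    For contrast with the minimum-bit-error-probability decoder described above, the sketch below implements the standard maximum-likelihood sequence (Viterbi) decoder it is compared against, for a small rate-1/2, constraint-length-3 convolutional code; the generator polynomials and test message are illustrative choices, not taken from the paper.

```python
# Hard-decision Viterbi decoding of the (7, 5) octal, rate-1/2 convolutional code.
G = [(1, 1, 1), (1, 0, 1)]                    # taps on (current, prev, prev-prev) bit

def encode(bits):
    s1 = s2 = 0
    out = []
    for b in bits:
        window = (b, s1, s2)
        out += [sum(w * g for w, g in zip(window, gen)) % 2 for gen in G]
        s1, s2 = b, s1
    return out

def viterbi(received, n_bits):
    states = [(a, c) for a in (0, 1) for c in (0, 1)]
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}   # start in state 00
    path = {s: [] for s in states}
    for i in range(0, 2 * n_bits, 2):
        r = received[i:i + 2]
        new_metric = {s: INF for s in states}
        new_path = {s: [] for s in states}
        for (s1, s2) in states:
            if metric[(s1, s2)] == INF:
                continue
            for b in (0, 1):
                window = (b, s1, s2)
                exp = [sum(w * g for w, g in zip(window, gen)) % 2 for gen in G]
                m = metric[(s1, s2)] + sum(x != y for x, y in zip(exp, r))
                if m < new_metric[(b, s1)]:
                    new_metric[(b, s1)] = m
                    new_path[(b, s1)] = path[(s1, s2)] + [b]
        metric, path = new_metric, new_path
    best = min(metric, key=metric.get)
    return path[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[3] ^= 1                                    # single channel bit error
print(viterbi(rx, len(msg)) == msg)           # True: the error is corrected
```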

  14. Real-time minimal bit error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  15. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    Science.gov (United States)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, on e.g. conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step-by-step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.

  16. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  17. Risk-informed appendices G and E for section XI of the ASME Boiler and Pressure Vessel Code

    International Nuclear Information System (INIS)

    Carter, B; Spanner, J.; Server, W.; Gamble, R.; Bishop, B.; Palm, N.; Heinecke, C.

    2011-01-01

    Full text of publication follows: The American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code, Section XI, contains two appendices (G and E) related to reactor pressure boundary integrity. Appendix G provides procedures for defining Service Level A and B pressure temperature limits for ferritic components in the reactor coolant pressure boundary. Recently, an alternative risk informed methodology has been developed for ASME Section XI, Appendix G. The alternative methodology provides simple procedures to define risk informed pressure temperature limits for Service Level A and B events, including leak testing and reactor start up and shut down for both pressurized water reactors (PWRs) and boiling water reactors (BWRs). Risk informed pressure temperature limits provide more operational flexibility, particularly for reactor pressure vessels (RPV) with relatively high irradiation levels and radiation sensitive materials. Appendix E of Section XI provides a methodology for assessing conditions when the Appendix G limits are exceeded. A similar risk informed methodology is being considered for Appendix E. The probabilistic fracture mechanics evaluations used to develop the risk informed relationships included appropriate material properties for the range of RPV materials in operating plants in the United States and operating history and system operational constraints in both BWRs and PWRs. The analysis results were used to define pressure temperature relationships that provide an acceptable level of risk, consistent with safety goals defined by the U.S. Nuclear Regulatory Commission. The alternative methodologies for Appendices G and E will provide greater operational flexibility, especially for Service Level A and B events that may adversely affect efficient and safe plant operation, such as low temperature over pressurization for PWRs and BWR leak testing. Overall, application of the risk informed appendices can result in increased plant

  18. Cracking the code: the accuracy of coding shoulder procedures and the repercussions.

    Science.gov (United States)

    Clement, N D; Murray, I R; Nie, Y X; McBirnie, J M

    2013-05-01

    Coding of patients' diagnoses and surgical procedures is subject to error levels of up to 40%, with consequences for the distribution of resources and financial recompense. Our aim was to explore and address the reasons behind coding errors of shoulder diagnoses and surgical procedures and to evaluate a potential solution. A retrospective review of 100 patients who had undergone surgery was carried out. Coding errors were identified and the reasons explored. A coding proforma was designed to address these errors and was prospectively evaluated for 100 patients. The financial implications were also considered. Retrospective analysis revealed that the correct primary diagnosis was assigned in 54 patients (54%), fewer had an entirely correct diagnosis, and only 7 (7%) patients had a correct procedure code assigned. Coders identified indistinct clinical notes and poor clarity of procedure codes as reasons for errors. The proforma was significantly more likely to assign the correct diagnosis (odds ratio 18.2) and the correct procedure code (odds ratio 310.0) than the coding department. High error levels for coding are due to misinterpretation of notes and ambiguity of procedure codes. This can be addressed by allowing surgeons to assign the diagnosis and procedure using a simplified list that is passed directly to coding.

  19. Comprehensive Report For Proposed Elevated Temperature Elastic Perfectly Plastic (EPP) Code Cases Representative Example Problems

    Energy Technology Data Exchange (ETDEWEB)

    Hollinger, Greg L. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-06-01

    Background: The current rules in the nuclear section of the ASME Boiler and Pressure Vessel (B&PV) Code , Section III, Subsection NH for the evaluation of strain limits and creep-fatigue damage using simplified methods based on elastic analysis have been deemed inappropriate for Alloy 617 at temperatures above 1200F (650C)1. To address this issue, proposed code rules have been developed which are based on the use of elastic-perfectly plastic (E-PP) analysis methods and which are expected to be applicable to very high temperatures. The proposed rules for strain limits and creep-fatigue evaluation were initially documented in the technical literature 2, 3, and have been recently revised to incorporate comments and simplify their application. The revised code cases have been developed. Task Objectives: The goal of the Sample Problem task is to exercise these code cases through example problems to demonstrate their feasibility and, also, to identify potential corrections and improvements should problems be encountered. This will provide input to the development of technical background documents for consideration by the applicable B&PV committees considering these code cases for approval. This task has been performed by Hollinger and Pease of Becht Engineering Co., Inc., Nuclear Services Division and a report detailing the results of the E-PP analyses conducted on example problems per the procedures of the E-PP strain limits and creep-fatigue draft code cases is enclosed as Enclosure 1. Conclusions: The feasibility of the application of the E-PP code cases has been demonstrated through example problems that consist of realistic geometry (a nozzle attached to a semi-hemispheric shell with a circumferential weld) and load (pressure; pipe reaction load applied at the end of the nozzle, including axial and shear forces, bending and torsional moments; through-wall transient temperature gradient) and design and operating conditions (Levels A, B and C).

  20. Use of the Coding Causes of Death in HIV in the classification of deaths in Northeastern Brazil.

    Science.gov (United States)

    Alves, Diana Neves; Bresani-Salvi, Cristiane Campello; Batista, Joanna d'Arc Lyra; Ximenes, Ricardo Arraes de Alencar; Miranda-Filho, Demócrito de Barros; Melo, Heloísa Ramos Lacerda de; Albuquerque, Maria de Fátima Pessoa Militão de

    2017-01-01

    To describe the process of coding the causes of death of people living with HIV/AIDS, and to classify deaths as related or unrelated to immunodeficiency by applying the Coding Causes of Death in HIV (CoDe) system. A cross-sectional study that codifies and classifies the causes of deaths occurring in a cohort of 2,372 people living with HIV/AIDS, monitored between 2007 and 2012 in two specialized HIV care services in Pernambuco. The causes of death already codified according to the International Classification of Diseases were recoded and classified as deaths related or unrelated to immunodeficiency by the CoDe system. We calculated the frequencies of the CoDe codes for the causes of death in each classification category. There were 315 (13%) deaths during the study period; 93 (30%) were caused by an AIDS-defining illness on the Centers for Disease Control and Prevention list. A total of 232 deaths (74%) were related to immunodeficiency after application of the CoDe. Infections were the most common cause, both related (76%) and unrelated (47%) to immunodeficiency, followed by malignancies (5%) in the first group, and external causes (16%), malignancies (12%) and cardiovascular diseases (11%) in the second group. Tuberculosis comprised 70% of the immunodeficiency-defining infections. Opportunistic infections and aging-related diseases were the most frequent causes of death, adding multiple disease burdens on health services. The CoDe system increases the probability of classifying deaths more accurately in people living with HIV/AIDS.

  1. ARTEMIS: The core simulator of AREVA NP's next generation coupled neutronics/thermal-hydraulics code system ARCADIA®

    International Nuclear Information System (INIS)

    Hobson, Greg; Merk, Stephan; Bolloni, Hans-Wilhelm; Breith, Karl-Albert; Curca-Tivig, Florin; Van Geemert, Rene; Heinecke, Jochen; Hartmann, Bettina; Porsch, Dieter; Tiles, Viatcheslav; Dall'Osso, Aldo; Pothet, Baptiste

    2008-01-01

    AREVA NP has developed a next-generation coupled neutronics/thermal-hydraulics code system, ARCADIA®, to fulfil customers' current demands and even anticipate their future demands in terms of accuracy and performance. The new code system will be implemented world-wide and will replace several code systems currently used in various global regions. An extensive phase of verification and validation of the new code system is currently in progress. One of the principal components of this new system is the core simulator, ARTEMIS. Besides the stand-alone tests on the individual computational modules, integrated tests on the overall code are being performed in order to check for non-regression as well as for verification of the code. Several benchmark problems have been successfully calculated. Full-core depletion cycles of different plant types from AREVA's French, American and German regions (e.g. N4 and KONVOI types) have been performed with ARTEMIS (using APOLLO2-A cross sections) and compared directly with current production codes, e.g. with SCIENCE and CASCADE-3D, and additionally with measurements. (authors)

  2. Developing and modifying behavioral coding schemes in pediatric psychology: a practical guide.

    Science.gov (United States)

    Chorney, Jill MacLaren; McMurtry, C Meghan; Chambers, Christine T; Bakeman, Roger

    2015-01-01

    To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible.

  3. Construction of new quantum MDS codes derived from constacyclic codes

    Science.gov (United States)

    Taneja, Divya; Gupta, Manish; Narula, Rajesh; Bhullar, Jaskaran

    Obtaining quantum maximum distance separable (MDS) codes from dual-containing classical constacyclic codes using the Hermitian construction has paved a path to undertake the challenges related to such constructions. Using the same technique, some new parameters of quantum MDS codes have been constructed here. One set of parameters obtained in this paper achieves a much larger distance than earlier work. The remaining constructed parameters of quantum MDS codes have large minimum distance and had not been explored before.
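    For context, the Hermitian construction referred to above is standard: a Hermitian dual-containing linear code over F_{q²} yields a q-ary quantum code (this statement is quoted from the general quantum-coding literature, not from the paper):

```latex
C \subseteq \mathbb{F}_{q^2}^{\,n},\quad C^{\perp_H} \subseteq C,\quad
C \text{ an } [n,k,d]_{q^2} \text{ code}
\;\Longrightarrow\;
\exists\ [[\,n,\ 2k-n,\ \ge d\,]]_q \text{ quantum code},
```

    which is MDS precisely when it meets the quantum Singleton bound 2k − n = n − 2d + 2.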

  4. The FLIC conversion codes

    Energy Technology Data Exchange (ETDEWEB)

    Basher, J C [General Reactor Physics Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1965-05-15

    This report describes the FORTRAN programmes, FLIC 1 and FLIC 2. These programmes convert programmes coded in one dialect of FORTRAN to another dialect of the same language. FLIC 1 is a general pattern recognition and replacement programme whereas FLIC 2 contains extensions directed towards the conversion of FORTRAN II and S2 programmes to EGTRAN 1 - the dialect now in use on the Winfrith KDF9. FII or S2 statements are replaced where possible by their E1 equivalents; other statements which may need changing are flagged. (author)

  5. The FLIC conversion codes

    International Nuclear Information System (INIS)

    Basher, J.C.

    1965-05-01

    This report describes the FORTRAN programmes, FLIC 1 and FLIC 2. These programmes convert programmes coded in one dialect of FORTRAN to another dialect of the same language. FLIC 1 is a general pattern recognition and replacement programme whereas FLIC 2 contains extensions directed towards the conversion of FORTRAN II and S2 programmes to EGTRAN 1 - the dialect now in use on the Winfrith KDF9. FII or S2 statements are replaced where possible by their E1 equivalents; other statements which may need changing are flagged. (author)

  6. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
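    A toy illustration of the node operation described above, over GF(2) with packet length L = 4; the coding matrices are arbitrary examples, not produced by the paper's algorithms.

```python
import numpy as np

# An intermediate node combines two incoming length-L packets over GF(2) by
# multiplying each with an L x L coding matrix and adding the results (mod 2).
L = 4
p1 = np.array([1, 0, 1, 1])
p2 = np.array([0, 1, 1, 0])
A = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 1, 1]])        # arbitrary invertible coding matrix
B = np.eye(L, dtype=int)            # identity as a second (trivial) coding matrix
outgoing = (A @ p1 + B @ p2) % 2
print(outgoing)                     # the coded packet forwarded downstream
```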

  7. Code-Mixing and Code-Switching in the Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

    Full Text Available This study aimed to describe the forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as the factors that determine the occurrence of these forms of code switching and code mixing. The research is a descriptive qualitative case study which took place in Al Mawaddah Boarding School, Ponorogo. The analysis and discussion presented in the previous chapters show that code mixing and code switching in learning activities at Al Mawaddah Boarding School occur between Javanese, Arabic, English and Indonesian, through the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The factors determining code mixing in the learning process include: identification of the role, the desire to explain and interpret, material sourced from the original language and its variations, and material sourced from a foreign language. The factors determining code switching in the learning process include: the speaker (O1), the interlocutor (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially in the Al Mawaddah boarding school, regarding the rules and characteristics of language variation in teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and effective teaching and learning strategies in boarding schools.

  8. Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy

    Science.gov (United States)

    Hutchison, Amy; Nadolny, Larysa; Estapa, Anne

    2016-01-01

    In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…

  9. Premar-2: a Monte Carlo code for radiative transport simulation in atmospheric environments

    Energy Technology Data Exchange (ETDEWEB)

    Cupini, E. [ENEA, Centro Ricerche Ezio Clementel, Bologna, (Italy). Dipt. Innovazione

    1999-07-01

    The peculiarities of the PREMAR-2 code, aimed at Monte Carlo simulation of radiation transport in atmospheric environments in the infrared-to-ultraviolet frequency range, are described. With respect to the previously developed PREMAR code, the new code handles not only plane multilayers but also spherical multilayers and finite sequences of vertical layers, each with its own atmospheric behaviour, together with the refraction phenomenon, so that long-range, highly slanted paths can now be modelled more faithfully. A zenithal angular dependence of the albedo coefficient has moreover been introduced. Lidar systems, with spatially independent source and telescope, can again be simulated, and, in this latest version of the code, sensitivity analyses can be performed. Through this last capability, the consequences for radiation transport of small perturbations in physical components of the atmospheric environment may be analyzed and the related effects on the searched-for results estimated. The code requires a library of physical data (reaction coefficients, phase functions and refraction indexes) providing the essential features of the environment of interest for the Monte Carlo simulation. Variance reduction techniques have been enhanced in the PREMAR-2 code, for instance by introducing a local forced-collision technique, especially suited to Lidar system simulations. Encouraging comparisons between code and experimental results carried out at the Brasimone Centre of ENEA have so far been obtained, even if further checks of the code are to be performed.
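    At its core, a Monte Carlo radiative-transport code of this kind repeatedly samples photon free paths between interactions; the fragment below shows only that elementary step with an assumed extinction coefficient, and is not an excerpt from PREMAR-2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample the distance to the next interaction from the exponential free-path
# distribution for a homogeneous medium with total extinction coefficient sigma_t.
sigma_t = 0.05            # 1/m, illustrative value
paths = -np.log(rng.random(100_000)) / sigma_t
print(paths.mean())       # ~ 1/sigma_t = 20 m, the mean free path
```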

  10. Development of 'SKYSHINE-CG' code. A line-beam method code equipped with combinatorial geometry routine

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, Takahiro; Ochiai, Katsuharu [Plant and System Planning Department, Toshiba Corporation, Yokohama, Kanagawa (Japan); Uematsu, Mikio; Hayashida, Yoshihisa [Department of Nuclear Engineering, Toshiba Engineering Corporation, Yokohama, Kanagawa (Japan)

    2000-03-01

    A boiling water reactor (BWR) plant has a single loop coolant system, in which main steam generated in the reactor core proceeds directly into turbines. Consequently, radioactive {sup 16}N (6.2 MeV photon emitter) contained in the steam contributes to gamma-ray skyshine dose in the vicinity of the BWR plant. The skyshine dose analysis is generally performed with the line-beam method code SKYSHINE, in which calculational geometry consists of a rectangular turbine building and a set of isotropic point sources corresponding to an actual distribution of {sup 16}N sources. For the purpose of upgrading calculational accuracy, the SKYSHINE-CG code has been developed by incorporating the combinatorial geometry (CG) routine into the SKYSHINE code, so that shielding effect of in-building equipment can be properly considered using a three-dimensional model composed of boxes, cylinders, spheres, etc. Skyshine dose rate around a 500 MWe BWR plant was calculated with both SKYSHINE and SKYSHINE-CG codes, and the calculated results were compared with measured data obtained with a NaI(Tl) scintillation detector. The C/E values for SKYSHINE-CG calculation were scattered around 4.0, whereas the ones for SKYSHINE calculation were as large as 6.0. Calculational error was found to be reduced by adopting three-dimensional model based on the combinatorial geometry method. (author)

  11. Low Complexity List Decoding for Polar Codes with Multiple CRC Codes

    Directory of Open Access Journals (Sweden)

    Jong-Hwan Kim

    2017-04-01

    Full Text Available Polar codes are the first family of error correcting codes that provably achieve the capacity of symmetric binary-input discrete memoryless channels with low complexity. Since the development of polar codes, there have been many studies to improve their finite-length performance. As a result, polar codes are now adopted as a channel code for the control channel of 5G new radio of the 3rd generation partnership project. However, decoder implementation remains one of the main practical problems, and low-complexity decoding has been studied. This paper addresses low-complexity successive cancellation list decoding for polar codes utilizing multiple cyclic redundancy check (CRC) codes. While some research uses multiple CRC codes to reduce memory and time complexity, we consider the operational complexity of decoding, and reduce it by optimizing CRC positions in combination with a modified decoding operation. Consequently, the proposed scheme obtains not only complexity reduction from early stopping of decoding, but also additional reduction from the reduced number of decoding paths.

  12. Majorana fermion codes

    International Nuclear Information System (INIS)

    Bravyi, Sergey; Terhal, Barbara M; Leemhuis, Bernhard

    2010-01-01

    We initiate the study of Majorana fermion codes (MFCs). These codes can be viewed as extensions of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of MFCs is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed that combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code.

  13. DISP1 code

    International Nuclear Information System (INIS)

    Vokac, P.

    1999-12-01

    DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)
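
    The internal dispersion model of DISP1 is not detailed in this record. As a generic illustration of the kind of straight-line Gaussian-plume screening calculation such simple tools typically perform, the sketch below evaluates a normalized ground-level concentration chi/Q; the dispersion-coefficient fits are crude placeholders for a single stability class and are not the ARCON96 corrections mentioned above.

      import math

      def sigma_y(x):
          """Assumed lateral dispersion coefficient [m] for one stability class (placeholder fit)."""
          return 0.08 * x / math.sqrt(1.0 + 0.0001 * x)

      def sigma_z(x):
          """Assumed vertical dispersion coefficient [m] (placeholder fit)."""
          return 0.06 * x / math.sqrt(1.0 + 0.0015 * x)

      def chi_over_q(x, y, release_height, wind_speed):
          """Ground-level normalized concentration chi/Q [s/m^3] of a straight-line Gaussian plume."""
          sy, sz = sigma_y(x), sigma_z(x)
          return (math.exp(-y ** 2 / (2 * sy ** 2)) * math.exp(-release_height ** 2 / (2 * sz ** 2))
                  / (math.pi * sy * sz * wind_speed))

      # Example: receptor 500 m downwind on the plume axis, 30 m release height, 2 m/s wind.
      print(f"chi/Q = {chi_over_q(500.0, 0.0, 30.0, 2.0):.2e} s/m^3")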

  14. Vectorization, parallelization and porting of nuclear codes (porting). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Nemoto, Toshiyuki; Kawai, Wataru; Ishizuki, Shigeru; Kawasaki, Nobuo; Kume, Etsuo; Adachi, Masaaki; Ogasawara, Shinobu

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and transported on the FUJITSU VPP500 system, the AP3000 system and the Paragon system at Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the porting. In this porting part, the porting of Monte Carlo N-Particle Transport code MCNP4B2 and Reactor Safety Analysis code RELAP5 on the AP3000 are described. In the vectorization and parallelization on vector processors part, the vectorization of General Tokamak Circuit Simulation Program code GTCSP, the vectorization and parallelization of Molecular Dynamics Ntv Simulation code MSP2, Eddy Current Analysis code EDDYCAL, Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of Monte Carlo N-Particle Transport code MCNP4B2, Plasma Hydrodynamics code using Cubic Interpolated propagation Method PHCIP and Vectorized Monte Carlo code (continuous energy model/multi-group model) MVP/GMVP on the Paragon are described. (author)

  15. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, offers a clear and comprehensive discussion of the basic principles of the field. * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
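
    As a small self-contained companion to the topics listed above, the sketch below implements the textbook rate-1/2 feed-forward convolutional encoder with generators (7, 5) in octal and a terminating zero tail; it is a generic example, not code taken from the book.

      def conv_encode(bits, g1=0b111, g2=0b101, memory=2):
          """Rate-1/2 feed-forward convolutional encoder (generators 7,5 octal, constraint length 3)."""
          state, out = 0, []
          for b in bits + [0] * memory:              # zero tail terminates the trellis
              reg = (b << memory) | state            # current input bit followed by the memory bits
              out.append(bin(reg & g1).count("1") % 2)   # parity of the taps selected by g1
              out.append(bin(reg & g2).count("1") % 2)   # parity of the taps selected by g2
              state = reg >> 1                       # shift the register by one position
          return out

      print(conv_encode([1, 0, 1, 1]))   # 12 coded bits for 4 information bits plus the tail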

  16. Vectorization, parallelization and porting of nuclear codes (porting). Progress report fiscal 1999

    Energy Technology Data Exchange (ETDEWEB)

    Kawasaki, Nobuo; Nemoto, Toshiyuki; Kawai, Wataru; Ishizuki, Shigeru [Fujitsu Ltd., Tokyo (Japan); Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)

    2001-01-01

    Several computer codes in the nuclear field have been vectorized, parallelized and transported on the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at the Center for Promotion of Computational Science and Engineering in the Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the porting. In this porting part, the porting of the Assisted Model Building with Energy Refinement code version 5 (AMBER5), general-purpose Monte Carlo codes for neutron and photon transport calculations based on continuous energy and multigroup methods (MVP/GMVP), the automatic editing system for the MCNP library code (autonj), the neutron damage calculations for materials irradiations and neutron damage calculations for compounds code (SPECTER/SPECOMP), the severe accident analysis code (MELCOR) and the COolant Boiling in Rod Arrays, Two-Fluid code (COBRA-TF) on the VPP500 system and/or the AP3000 system are described. (author)

  17. The correlation between diversion (Article 113 of the Criminal Code of Ukraine and terrorist act (Article 258 of the Criminal Code of Ukraine

    Directory of Open Access Journals (Sweden)

    Андрій Сергійович Климосюк

    2018-03-01

    While investigating the punishability of these crimes, it was found that in some cases the actual infliction of harm by a diversion necessitates additional qualification under Part 2 of Art. 115 or Part 3 of Art. 258 of the Criminal Code of Ukraine. It is proved that the norm on diversion can compete with the norm on a terrorist act as a whole (Article 113 of the Criminal Code of Ukraine) and as a part of the whole (Article 258 of the Criminal Code of Ukraine), and in such cases preference in enforcement should be given to qualifying the act as a diversion. The examples given in this article illustrate the ideal and actual concurrence of diversion and terrorist act.

  18. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2013-03-26

    ... Energy Conservation Code. International Existing Building Code. International Fire Code. International... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code. International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...

  19. Citham a computer code for calculating fuel depletion-description, tests, modifications and evaluation

    International Nuclear Information System (INIS)

    Alvarenga, M.A.B.

    1984-12-01

    The CITHAM computer code was developed at IPEN (Instituto de Pesquisas Energeticas e Nucleares) to link the HAMMER computer code with a fuel depletion routine and to provide neutron cross sections to be read in the appropriate format by the CITATION code. The problem arose from the effort to adapt the new version, denominated HAMMER-TECHION, to the aforementioned routine. The HAMMER-TECHION computer code was elaborated by the Haifa Institute, Israel, within a project with EPRI. This version is at CNEN to be used for multigroup constant generation for neutron diffusion calculations in the scope of the new methodology to be adopted by CNEN. The theoretical formulation of the CITHAM computer code, tests and modifications are described. (Author) [pt

  20. The network code

    International Nuclear Information System (INIS)

    1997-01-01

    The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)

  1. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.

  2. Importance of external cause coding for injury surveillance: lessons from assessment of overexertion injuries among U.S. Army soldiers in 2014.

    Science.gov (United States)

    Canham-Chervak, Michelle; Steelman, Ryan A; Schuh, Anna; Jones, Bruce H

    2016-11-01

    Injuries are a barrier to military medical readiness, and overexertion has historically been a leading mechanism of injury among active duty U.S. Army soldiers. Details are needed to inform prevention planning. The Defense Medical Surveillance System (DMSS) was queried for unique medical encounters among active duty Army soldiers consistent with the military injury definition and assigned an overexertion external cause code (ICD-9: E927.0-E927.9) in 2014 (n=21,891). Most (99.7%) were outpatient visits and 60% were attributed specifically to sudden strenuous movement. Among the 41% (n=9,061) of visits with an activity code (ICD-9: E001-E030), running was the most common activity (n=2,891, 32%); among the 19% (n=4,190) with a place of occurrence code (ICD-9: E849.0-E849.9), the leading location was recreation/sports facilities (n=1,332, 32%). External cause codes provide essential details, but the data represented less than 4% of all injury-related medical encounters among U.S. Army soldiers in 2014. Efforts to improve external cause coding are needed, and could be aligned with training on and enforcement of ICD-10 coding guidelines throughout the Military Health System.

  3. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.
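
    The rate-adaptive BCH construction itself is not reproduced here. As a minimal sketch of the syndrome-based Slepian-Wolf idea under simplified assumptions (a 7-bit block X and side information Y differing from X in at most one position), the example below uses the Hamming(7,4) parity-check matrix: the encoder transmits only the 3-bit syndrome of X, and the decoder recovers X from Y and that syndrome.

      import numpy as np

      # Parity-check matrix of the Hamming(7,4) code; column j is the binary expansion of j+1.
      H = np.array([[0, 0, 0, 1, 1, 1, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 1, 0, 1]])

      def sw_encode(x):
          """Slepian-Wolf encoder: send only the syndrome of the source block x (7 bits -> 3 bits)."""
          return H @ x % 2

      def sw_decode(syndrome, y):
          """Recover x from side information y, assuming x and y differ in at most one position."""
          diff = (syndrome + H @ y) % 2              # syndrome of the error pattern e = x XOR y
          if not diff.any():
              return y.copy()                        # x equals y
          pos = int("".join(map(str, diff)), 2) - 1  # index of the column of H equal to diff
          x_hat = y.copy()
          x_hat[pos] ^= 1
          return x_hat

      x = np.array([1, 0, 1, 1, 0, 0, 1])
      y = x.copy(); y[4] ^= 1                        # side information with a one-bit discrepancy
      print("recovered correctly:", bool((sw_decode(sw_encode(x), y) == x).all()))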

  4. Modeling report of DYMOND code (DUPIC version)

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Yacout, Abdellatif M.

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. The DYMOND code was first developed by ANL (Argonne National Laboratory) to perform fuel cycle analyses of LWR once-through and mixed LWR-FBR plants. Since extensive application of the DYMOND code has been requested, the first version has been modified to accommodate the DUPIC, MSR and RTF fuel cycles. The DYMOND code is composed of three parts (the source language platform, input supply and output), although these parts are not clearly distinguished. This report describes all the equations modeled in the modified DYMOND code (called the DYMOND-DUPIC version). It is divided into five parts. Part A deals with the reactor history model, which includes the amounts of requested fuel and spent fuel. Part B describes the fuel cycle model, covering fuel flow from the beginning to the end of the fuel cycle. Part C covers the reprocessing model, which includes the recovery of burned uranium, plutonium, minor actinides and fission products, as well as the amount of spent fuel in storage and disposal. Part D covers the model of other fuel cycles, considering the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which gives all cost information such as uranium mining cost, reactor operating cost, fuel cost, etc.

  5. Modeling report of DYMOND code (DUPIC version)

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan [KAERI, Taejon (Korea, Republic of); Yacout, Abdellatif M [Argonne National Laboratory, Ilinois (United States)

    2003-04-01

    The DYMOND code employs the ITHINK dynamic modeling platform to assess 100-year dynamic evolution scenarios for postulated global nuclear energy parks. The DYMOND code was first developed by ANL (Argonne National Laboratory) to perform fuel cycle analyses of LWR once-through and mixed LWR-FBR plants. Since extensive application of the DYMOND code has been requested, the first version has been modified to accommodate the DUPIC, MSR and RTF fuel cycles. The DYMOND code is composed of three parts (the source language platform, input supply and output), although these parts are not clearly distinguished. This report describes all the equations modeled in the modified DYMOND code (called the DYMOND-DUPIC version). It is divided into five parts. Part A deals with the reactor history model, which includes the amounts of requested fuel and spent fuel. Part B describes the fuel cycle model, covering fuel flow from the beginning to the end of the fuel cycle. Part C covers the reprocessing model, which includes the recovery of burned uranium, plutonium, minor actinides and fission products, as well as the amount of spent fuel in storage and disposal. Part D covers the model of other fuel cycles, considering the thorium fuel cycle for the MSR and RTF reactors. Part E covers the economics model, which gives all cost information such as uranium mining cost, reactor operating cost, fuel cost, etc.

  6. Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico

    Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (possibly an abort) completely unrelated to m. It is known that non-malleability is possible only for restricted classes of tampering functions. Since their introduction, a long line of works has established feasibility results for non-malleable codes against different families of tampering functions. This work builds on the non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1)).

  7. Application of startup/core management code system to YGN 3 startup testing

    International Nuclear Information System (INIS)

    Chi, Sung Goo; Hah, Yung Joon; Doo, Jin Yong; Kim, Dae Kyum

    1995-01-01

    YGN 3 is the first nuclear power plant in Korea to use the fixed incore detector system for startup testing and core management. The startup/core management code system was developed from existing ABB-C-E codes and applied for YGN 3 startup testing, especially for physics and CPC(Core Protection Calculator)/COLSS (Core Operating Limit Supervisory System) related testing. The startup/core management code system consists of startup codes which include the CEBASE, CECOR, CEFAST and CEDOPS, and startup data reduction codes which include FLOWRATE, COREPERF, CALMET, and VARTAV. These codes were implemented on an HP/Apollo model 9000 series 400 workstation at the YGN 3 site and successfully applied to startup testing and core management. The startup codes made a great contribution in upgrading the reliability of test results and reducing the test period by taking and analyzing core data automatically. The data reduction code saved the manpower and time for test data reduction and decreased the chance for error in the analysis. It is expected that this code system will make similar contributions for reducing the startup testing duration of YGN 4 and UCN3,4

  8. Applications of the lahet simulation code to relativistic heavy ion detectors

    Energy Technology Data Exchange (ETDEWEB)

    Waters, L.; Gavron, A. [Los Alamos National Lab., NM (United States)

    1991-12-31

    The Los Alamos High Energy Transport (LAHET) simulation code has been applied to test beam data from the lead/scintillator Participant Calorimeter of BNL AGS experiment E814. The LAHET code treats hadronic interactions with the LANL version of the Oak Ridge code HETC. LAHET has now been expanded to handle hadrons with kinetic energies greater than 5 GeV with the FLUKA code, while HETC is used exclusively below 2.0 GeV. FLUKA is phased in linearly between 2.0 and 5.0 GeV. Transport of electrons and photons is done with EGS4, and an interface to the Los Alamos HMCNP3B library based code is provided to analyze neutrons with kinetic energies less than 20 MeV. Excellent agreement is found between the test data and simulation, and results for 2.46 GeV/c protons and pions are illustrated in this article.

  9. Applications of the LAHET simulation code to relativistic heavy ion detectors

    International Nuclear Information System (INIS)

    Waters, L.S.; Gavron, A.

    1991-01-01

    The Los Alamos High Energy Transport (LAHET) simulation code has been applied to test beam data from the lead/scintillator Participant Calorimeter of BNL AGS experiment E814. The LAHET code treats hadronic interactions with the LANL version of the Oak Ridge code HETC. LAHET has now been expanded to handle hadrons with kinetic energies greater than 5 GeV with the FLUKA code, while HETC is used exclusively below 2.0 GeV. FLUKA is phased in linearly between 2.0 and 5.0 GeV. Transport of electrons and photons is done with EGS4, and an interface to the Los Alamos HMCNP3B library based code is provided to analyze neutrons with kinetic energies less than 20 MeV. Excellent agreement is found between the test data and simulation, and results for 2.46 GeV/c protons and pions are illustrated in this article

  10. GOTHIC code evaluation of alternative passive containment cooling features

    International Nuclear Information System (INIS)

    Gavrilas, M.; Todreas, E.N.; Driscoll, M.J.

    1996-01-01

    Reliance on passive cooling has become an important objective in containment design. Several reactor concepts have been set forth which are equipped with entirely passively cooled containments. However, the problems that have to be overcome in rejecting the entire heat generated by a severe accident in a high-rating reactor (i.e. one with a rating greater than 1200 MW(e)) have been found to be substantial and without obvious solutions. The GOTHIC code was verified and modified for containment cooling applications; optimal mesh sizes, computational time steps and applicable heat transfer correlations were examined. The effect of the break location on the circulation patterns that develop inside the containment was also evaluated. The GOTHIC code was then employed to assess the effectiveness of several original heat rejection features that make it possible to cool high-rating containments. Two containment concepts were evaluated: one for a 1200 MW(e) new pressure tube light-water reactor, and one for a 1300 MW(e) pressurized-water reactor. The effectiveness of various containment configurations that include specific pressure-limiting features has been predicted. For the best-performing configurations, the worst-case accident scenarios examined yielded peak pressures of less than 0.30 MPa for the 1200 MW(e) pressure tube light-water reactor, and less than 0.45 MPa for the 1300 MW(e) pressurized-water reactor. (orig.)

  11. Thermal-Hydraulic Analysis of SWAMUP Facility Using ATHLET-SC Code

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zidi; Cao, Zhen; Liu, Xiaojing, E-mail: xiaojingliu@sjtu.edu.cn [School of Nuclear Science and Engineering, Shanghai Jiao Tong University, Shanghai (China)

    2015-03-16

    During a loss of coolant accident (LOCA) of a supercritical water-cooled reactor (SCWR), the pressure in the reactor system undergoes a rapid decrease from the supercritical pressure to subcritical conditions. This process is called a trans-critical transient, which is of crucial importance for the LOCA analysis of the SCWR. In order to simulate trans-critical transients, a number of system codes for the SCWR have been developed to date. However, validation work for the trans-critical models in these codes is still missing. The test facility Supercritical WAter MUltiPurpose loop (SWAMUP) with a 2 × 2 rod bundle at Shanghai Jiao Tong University (SJTU) will be applied to provide test data for code validation. Some pre-test calculations are important and necessary to show the feasibility of the experiment. In this study, a trans-critical transient analysis is performed for the SWAMUP facility with the system code ATHLET-SC, which was modified at SJTU for supercritical water systems. This paper presents the system behavior, e.g., system pressure, coolant mass flow and cladding temperature during the depressurization. The effects of some important parameters, such as heating power and depressurization rate, on the system characteristics are also investigated. Additionally, sensitivity studies of the code models, e.g., the heat transfer coefficient and the critical heat flux correlation, are analyzed and discussed. The results indicate that the revised system code ATHLET-SC is capable of simulating the thermal-hydraulic behavior during the trans-critical transient. According to the results, the cladding temperature during the transient is kept at a low value. However, the pressure difference across the heat exchanger after depressurization could reach 6 MPa, which should be considered in the experiment.

  12. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  13. Nuclear data libraries for Tripoli-3.5 code; Bibliotheques de donnees nucleaires pour le code tripoli-3.5

    Energy Technology Data Exchange (ETDEWEB)

    Vergnaud, Th

    2001-07-01

    The TRIPOLI-3 code uses multigroup nuclear data libraries generated using the NJOY-THEMIS suite of modules: for neutrons, they are produced from the ENDF/B-VI evaluations and cover the range between 20 MeV and 10⁻⁵ eV, either in 315 groups and for one temperature, or in 3209 groups and for five temperatures; for gamma-rays, they are from JEF2 and are processed in groups between 14 MeV and keV. The probability tables used for the neutron transport calculations have been derived from the ENDF/B-VI evaluations using the CALENDF code. Cross sections for gamma production by neutron interaction (fission, capture or inelastic scattering) have been derived from ENDF/B-VI in 315 neutron groups and 75 gamma groups. The code also uses two response function libraries: for neutrons, it is based on several sources, in particular the dosimetry libraries IRDF/85 and IRDF/90; for gamma-rays it is based on the JEF2 evaluation and contains the kerma factors for all the elements and cross sections for all interactions. (author)

  14. Integrated severe accident containment analysis with the CONTAIN computer code

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Williams, D.C.; Rexroth, P.E.; Tills, J.L.

    1985-12-01

    Analysis of physical and radiological conditions inside the containment building during a severe (core-melt) nuclear reactor accident requires quantitative evaluation of numerous highly disparate yet coupled phenomenologies. These include two-phase thermodynamics and thermal-hydraulics, aerosol physics, fission product phenomena, core-concrete interactions, the formation and combustion of flammable gases, and performance of engineered safety features. In the past, this complexity has meant that a complete containment analysis would require application of suites of separate computer codes, each of which would treat only a narrower subset of these phenomena, e.g., a thermal-hydraulics code, an aerosol code, a core-concrete interaction code, etc. In this paper, we describe the development and some recent applications of the CONTAIN code, which offers an integrated treatment of the dominant containment phenomena and the interactions among them. We describe the results of a series of containment phenomenology studies, based upon realistic accident sequence analyses in actual plants. These calculations highlight various phenomenological effects that have potentially important implications for source term and/or containment loading issues, and which are difficult or impossible to treat using a less integrated code suite.

  15. The evolution of the mitochondrial genetic code in arthropods revisited.

    Science.gov (United States)

    Abascal, Federico; Posada, David; Zardoya, Rafael

    2012-04-01

    A variant of the invertebrate mitochondrial genetic code was previously identified in arthropods (Abascal et al. 2006a, PLoS Biol 4:e127) in which, instead of translating the AGG codon as serine, as in other invertebrates, some arthropods translate AGG as lysine. Here, we revisit the evolution of the genetic code in arthropods taking into account that (1) the number of arthropod mitochondrial genomes sequenced has triplicated since the original findings were published; (2) the phylogeny of arthropods has been recently resolved with confidence for many groups; and (3) sophisticated probabilistic methods can be applied to analyze the evolution of the genetic code in arthropod mitochondria. According to our analyses, evolutionary shifts in the genetic code have been more common than previously inferred, with many taxonomic groups displaying two alternative codes. Ancestral character-state reconstruction using probabilistic methods confirmed that the arthropod ancestor most likely translated AGG as lysine. Point mutations at tRNA-Lys and tRNA-Ser correlated with the meaning of the AGG codon. In addition, we identified three variables (GC content, number of AGG codons, and taxonomic information) that best explain the use of each of the two alternative genetic codes.
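
    As a toy illustration of what such a codon reassignment means in practice, the snippet below translates a short made-up sequence under the usual invertebrate mitochondrial reading of AGG (serine) and under the arthropod variant discussed above (AGG read as lysine); only the AGG entry differs, and the table is abridged to the codons actually used.

      # Abridged codon table: only the codons appearing in the toy sequence below.
      BASE_TABLE = {"AUG": "M", "GCU": "A", "AGG": "S", "UUU": "F", "UAA": "*"}

      def translate(rna, table):
          """Translate an RNA string codon by codon until a stop codon is reached."""
          protein = []
          for i in range(0, len(rna) - 2, 3):
              aa = table[rna[i:i + 3]]
              if aa == "*":
                  break
              protein.append(aa)
          return "".join(protein)

      seq = "AUGGCUAGGUUUUAA"                          # made-up example sequence

      invertebrate_mito = dict(BASE_TABLE)              # AGG read as Ser
      arthropod_variant = dict(BASE_TABLE, AGG="K")     # AGG read as Lys

      print(translate(seq, invertebrate_mito))          # MASF
      print(translate(seq, arthropod_variant))          # MAKF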

  16. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  17. Multidimensional electron-photon transport with standard discrete ordinates codes

    International Nuclear Information System (INIS)

    Drumm, C.R.

    1995-01-01

    A method is described for generating electron cross sections that are compatible with standard discrete ordinates codes without modification. There are many advantages to using an established discrete ordinates solver, e.g. immediately available adjoint capability. Coupled electron-photon transport capability is needed for many applications, including the modeling of the response of electronics components to space and man-made radiation environments. The cross sections have been successfully used in the DORT, TWODANT and TORT discrete ordinates codes. The cross sections are shown to provide accurate and efficient solutions to certain multidimensional electron-photon transport problems.

  18. Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes

    Energy Technology Data Exchange (ETDEWEB)

    Langenbuch, S.; Austregesilo, H.; Velkov, K. [GRS, Garching (Germany)] [and others

    1997-07-01

    The present situation of thermalhydraulics codes and 3D neutronics codes is briefly described and general considerations for coupling of these codes are discussed. Two different basic approaches of coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system ATHLET is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes.

  19. Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes

    International Nuclear Information System (INIS)

    Langenbuch, S.; Austregesilo, H.; Velkov, K.

    1997-01-01

    The present situation of thermalhydraulics codes and 3D neutronics codes is briefly described and general considerations for coupling of these codes are discussed. Two different basic approaches of coupling are identified and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the system ATHLET is presented. Meanwhile, this interface is used for coupling three different 3D neutronics codes

  20. NIVRA's Verordening Gedragscode versus IFAC's Code of Ethics

    NARCIS (Netherlands)

    Moleveld, W.P.; Majoor, G.C.M. (Barbara)

    2008-01-01

    This article maps the points on which the Verordening Gedragscode (VGC) conceptually differs from the Code of Ethics for Professional Accountants (CoE) of the International Federation of Accountants (IFAC). We also discuss the significance of these differences. Although at first

  1. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  2. Report number codes

    International Nuclear Information System (INIS)

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name

  3. Interações analógico e digital móvel na mídia impressa: camadas informacionais na narrativa com QR Code, Aurasma e Realidade Aumentada

    Directory of Open Access Journals (Sweden)

    Fernando Firmino da Silva

    2013-08-01

    Full Text Available This article discusses the adoption of innovations in print journalism, such as QR Code, Aurasma, Augmented Reality and 3D, as processes that add complexity to narratives and to the strategies built as business models in journalistic organizations through the appropriation of mobile digital technologies. The informational layers (interactive, multimedia, metadata) arising from these resources share the notion defended by McLuhan (2005) that media reflect one another and reinvent themselves. The approach centers on the analysis of these analogue-digital interactions of a mobile nature, starting from the observation that print journalism seeks to introduce this perspective into its graphic design in order to modernize itself and to 'hook' or retain readers through the connection between platforms, adding to the narratives dynamic resources that go beyond the medium itself, as in the cases under discussion.

  4. Performance Analysis of CRC Codes for Systematic and Nonsystematic Polar Codes with List Decoding

    Directory of Open Access Journals (Sweden)

    Takumi Murata

    2018-01-01

    Full Text Available Successive cancellation list (SCL decoding of polar codes is an effective approach that can significantly outperform the original successive cancellation (SC decoding, provided that proper cyclic redundancy-check (CRC codes are employed at the stage of candidate selection. Previous studies on CRC-assisted polar codes mostly focus on improvement of the decoding algorithms as well as their implementation, and little attention has been paid to the CRC code structure itself. For the CRC-concatenated polar codes with CRC code as their outer code, the use of longer CRC code leads to reduction of information rate, whereas the use of shorter CRC code may reduce the error detection probability, thus degrading the frame error rate (FER performance. Therefore, CRC codes of proper length should be employed in order to optimize the FER performance for a given signal-to-noise ratio (SNR per information bit. In this paper, we investigate the effect of CRC codes on the FER performance of polar codes with list decoding in terms of the CRC code length as well as its generator polynomials. Both the original nonsystematic and systematic polar codes are considered, and we also demonstrate that different behaviors of CRC codes should be observed depending on whether the inner polar code is systematic or not.
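
    A rough back-of-the-envelope view of the trade-off described above: with list size L, a c-bit CRC lets roughly L/2^c wrong candidate paths slip through undetected, while the information rate drops from K/N to (K - c)/N. The sketch below merely tabulates these two quantities for a few hypothetical CRC lengths; it is an illustration of the trade-off, not the paper's analysis.

      # Illustrative trade-off between CRC length, rate loss and undetected wrong paths.
      N, K, L = 1024, 512, 32        # hypothetical code length, info block (incl. CRC), list size

      for c in (4, 8, 11, 16, 24):   # candidate CRC lengths
          info_rate = (K - c) / N                     # rate actually carrying data
          p_undetected = min(1.0, L * 2.0 ** (-c))    # crude estimate of accepting a wrong path
          print(f"CRC-{c:2d}: info rate = {info_rate:.3f}, "
                f"approx. fraction of undetected wrong paths <= {p_undetected:.2e}")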

  5. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, K [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Department of Medical Physics ' Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Takashina, M; Koizumi, M [Department of Medical Physics ' Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Das, I; Moskvin, V [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) of the GATE and PHITS codes have not been reported; they are studied here for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physical and transport models. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to the calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDD results obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physical model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health

  6. Z₂-double cyclic codes

    OpenAIRE

    Borges, J.

    2014-01-01

    A binary linear code C is a Z2-double cyclic code if the set of coordinates can be partitioned into two subsets such that any cyclic shift of the coordinates of both subsets leaves invariant the code. These codes can be identified as submodules of the Z2[x]-module Z2[x]/(x^r − 1) × Z2[x]/(x^s − 1). We determine the structure of Z2-double cyclic codes giving the generator polynomials of these codes. The related polynomial representation of Z2-double cyclic codes and its duals, and the relation...

  7. Does Kaniso activate CASINO?: input coding schemes and phonology in visual-word recognition.

    Science.gov (United States)

    Acha, Joana; Perea, Manuel

    2010-01-01

    Most recent input coding schemes in visual-word recognition assume that letter position coding is orthographic rather than phonological in nature (e.g., SOLAR, open-bigram, SERIOL, and overlap). This assumption has been drawn - in part - by the fact that the transposed-letter effect (e.g., caniso activates CASINO) seems to be (mostly) insensitive to phonological manipulations (e.g., Perea & Carreiras, 2006, 2008; Perea & Pérez, 2009). However, one could argue that the lack of a phonological effect in prior research was due to the fact that the manipulation always occurred in internal letter positions - note that phonological effects tend to be stronger for the initial syllable (Carreiras, Ferrand, Grainger, & Perea, 2005). To reexamine this issue, we conducted a masked priming lexical decision experiment in which we compared the priming effect for transposed-letter pairs (e.g., caniso-CASINO vs. caviro-CASINO) and for pseudohomophone transposed-letter pairs (kaniso-CASINO vs. kaviro-CASINO). Results showed a transposed-letter priming effect for the correctly spelled pairs, but not for the pseudohomophone pairs. This is consistent with the view that letter position coding is (primarily) orthographic in nature.

  8. Strict optical orthogonal codes for purely asynchronous code-division multiple-access applications

    Science.gov (United States)

    Zhang, Jian-Guo

    1996-12-01

    Strict optical orthogonal codes are presented for purely asynchronous optical code-division multiple-access (CDMA) applications. The proposed code can strictly guarantee the peaks of its cross-correlation functions and the sidelobes of any of its autocorrelation functions to have a value of 1 in purely asynchronous data communications. The basic theory of the proposed codes is given. An experiment on optical CDMA systems is also demonstrated to verify the characteristics of the proposed code.
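
    The code construction itself is not given in this record. The small checker below verifies the stated correlation constraints (off-peak autocorrelation values and all cross-correlation values at most 1) for candidate binary sequences using plain periodic correlations; the two length-13, weight-3 sequences are an illustration chosen here, not codewords taken from the paper.

      def periodic_corr(a, b, shift):
          """Periodic correlation of two equal-length 0/1 sequences at a given cyclic shift."""
          n = len(a)
          return sum(a[i] * b[(i + shift) % n] for i in range(n))

      def is_strict_ooc(codewords, lam=1):
          """Check off-peak autocorrelations and all cross-correlations against the bound lam."""
          n = len(codewords[0])
          for ci in codewords:
              if any(periodic_corr(ci, ci, s) > lam for s in range(1, n)):
                  return False
              for cj in codewords:
                  if cj is not ci and any(periodic_corr(ci, cj, s) > lam for s in range(n)):
                      return False
          return True

      c1 = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]   # marks at positions {0, 1, 4}
      c2 = [1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # marks at positions {0, 2, 7}
      print(is_strict_ooc([c1, c2]))                  # True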

  9. Verbal-spatial and visuospatial coding of power-space interactions.

    Science.gov (United States)

    Dai, Qiang; Zhu, Lei

    2018-05-10

    A power-space interaction, which denotes the phenomenon that people responded faster to powerful words when they are placed higher in a visual field and faster to powerless words when they are placed lower in a visual field, has been repeatedly found. The dominant explanation of this power-space interaction is that it results from a tight correspondence between the representation of power and visual space (i.e., a visuospatial coding account). In the present study, we demonstrated that the interaction between power and space could be also based on a verbal-spatial coding in absence of any vertical spatial information. Additionally, the verbal-spatial coding was dominant in driving the power-space interaction when verbal space was contrasted with the visual space. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  11. Quantum Codes From Negacyclic Codes over Group Ring (Fq + υFq)G

    International Nuclear Information System (INIS)

    Koroglu, Mehmet E.; Siap, Irfan

    2016-01-01

    In this paper, we determine self-dual and self-orthogonal codes arising from negacyclic codes over the group ring (Fq + υFq)G. By taking a suitable Gray image of these codes we obtain many quantum error-correcting codes with good parameters over Fq. (paper)

  12. Trellis and turbo coding iterative and graph-based error control coding

    CERN Document Server

    Schlegel, Christian B

    2015-01-01

    This new edition has been extensively revised to reflect the progress in error control coding over the past few years. Over 60% of the material has been completely reworked, and 30% of the material is original. Convolutional, turbo, and low density parity-check (LDPC) coding and polar codes in a unified framework. Advanced research-related developments such as spatial coupling. A focus on algorithmic and implementation aspects of error control coding.

  13. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
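
    As a tiny worked instance of the idea described above (treating the source sequence as an error pattern and keeping only its syndrome), the sketch below compresses a 7-bit block containing at most one 1 into its 3-bit Hamming(7,4) syndrome and reconstructs it exactly; practical syndrome-source-coding uses much longer codes matched to the source statistics.

      # Columns of the Hamming(7,4) parity-check matrix: column j is the binary expansion of j+1.
      H_COLS = [[(j + 1) >> k & 1 for k in (2, 1, 0)] for j in range(7)]

      def compress(block):
          """Map a 7-bit block with at most one 1 to its 3-bit syndrome."""
          syndrome = [0, 0, 0]
          for j, bit in enumerate(block):
              if bit:
                  syndrome = [s ^ c for s, c in zip(syndrome, H_COLS[j])]
          return syndrome

      def decompress(syndrome):
          """Invert the mapping: the syndrome is the binary index of the single 1 (or all zero)."""
          idx = syndrome[0] * 4 + syndrome[1] * 2 + syndrome[2]
          block = [0] * 7
          if idx:
              block[idx - 1] = 1
          return block

      block = [0, 0, 0, 0, 1, 0, 0]                   # sparse "error pattern" acting as the source
      print(compress(block), decompress(compress(block)) == block)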

  14. Two "dual" families of Nearly-Linear Codes over ℤ p , p odd

    NARCIS (Netherlands)

    Asch, van A.G.; Tilborg, van H.C.A.

    2001-01-01

    Since the paper by Hammons et al. [1], various authors have shown an enormous interest in linear codes over the ring Z4. A special weight function on Z4 was introduced, and by means of the so-called Gray map φ : Z4 → Z2^2 a relation was established between linear codes over Z4 and certain interesting

  15. ITER Dynamic Tritium Inventory Modeling Code

    International Nuclear Information System (INIS)

    Cristescu, Ioana-R.; Doerr, L.; Busigin, A.; Murdoch, D.

    2005-01-01

    A tool for evaluating the tritium inventory within each sub-system of the Fuel Cycle of ITER is vital, with respect both to the process of licensing ITER and to operation. It is very likely that measurements of total tritium inventories will not be possible for all sub-systems; however, tritium accounting may be achieved by modeling its hold-up within each sub-system and by validating these models in real time against the monitored flows and tritium streams between the systems. To get reliable results, accurate dynamic modeling of the tritium content in each sub-system is necessary. In order to optimize the configuration and operation of the ITER fuel cycle, a dynamic fuel cycle model was developed progressively in the decade up to 2000-2001. As the designs of some sub-systems of the fuel cycle (i.e. vacuum pumping, Neutral Beam Injectors (NBI)) have substantially progressed in the meantime, a new code has been developed under a different platform to incorporate these modifications. The new code takes over the models and algorithms for some sub-systems, such as the Isotope Separation System (ISS); where simplified models had previously been used, more detailed ones have been introduced, as for the Water Detritiation System (WDS). To reflect all these changes, the new code developed within the EU participating team was named TRIMO (Tritium Inventory Modeling), to emphasize the use of the code for assessing the tritium inventory within ITER.

  16. ComboCoding: Combined intra-/inter-flow network coding for TCP over disruptive MANETs

    Directory of Open Access Journals (Sweden)

    Chien-Chia Chen

    2011-07-01

    Full Text Available TCP over wireless networks is challenging due to random losses and ACK interference. Although network coding schemes have been proposed to improve TCP robustness against extreme random losses, a critical problem still remains of DATA–ACK interference. To address this issue, we use inter-flow coding between DATA and ACK to reduce the number of transmissions among nodes. In addition, we also utilize a "pipeline" random linear coding scheme with adaptive redundancy to overcome high packet loss over unreliable links. The resulting coding scheme, ComboCoding, combines intra-flow and inter-flow coding to provide robust TCP transmission in disruptive wireless networks. The main contributions of our scheme are twofold: the efficient combination of random linear coding and XOR coding on bi-directional streams (DATA and ACK), and the novel redundancy control scheme that adapts to time-varying and space-varying link loss. The adaptive ComboCoding was tested on a variable-hop string topology with unstable links and on a multipath MANET with dynamic topology. Simulation results show that TCP with ComboCoding delivers higher throughput than with other coding options in high-loss and mobile scenarios, while introducing minimal overhead in normal operation.
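
    As a minimal sketch of the inter-flow half of the idea (XOR-combining a DATA packet and an ACK packet at a relay so that each endpoint recovers the packet it is missing by XORing with the one it sent), the example below works on byte strings; the intra-flow random linear coding and the adaptive redundancy control are not shown.

      def xor_bytes(a, b):
          """XOR two byte strings, padding the shorter one with zero bytes."""
          n = max(len(a), len(b))
          a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
          return bytes(x ^ y for x, y in zip(a, b))

      # The relay holds one DATA packet heading one way and one ACK packet heading the other way.
      data_pkt = b"DATA: hello"
      ack_pkt = b"ACK: 42"
      coded = xor_bytes(data_pkt, ack_pkt)               # one broadcast replaces two unicasts

      # Each endpoint XORs the broadcast with the packet it originally sent.
      print(xor_bytes(coded, data_pkt)[:len(ack_pkt)])   # the ACK, recovered at the DATA sender
      print(xor_bytes(coded, ack_pkt)[:len(data_pkt)])   # the DATA, recovered at the ACK sender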

  17. Interferometric key readable security holograms with secrete-codes

    Indian Academy of Sciences (India)

    A new method is described to create secrete-codes in the security holograms for enhancing ... or falsification of the valuable products and documents.

  18. Performance analysis of WS-EWC coded optical CDMA networks with/without LDPC codes

    Science.gov (United States)

    Huang, Chun-Ming; Huang, Jen-Fa; Yang, Chao-Chin

    2010-10-01

    One extended Welch-Costas (EWC) code family for the wavelength-division-multiplexing/spectral-amplitude coding (WDM/SAC; WS) optical code-division multiple-access (OCDMA) networks is proposed. This system has a superior performance as compared to the previous modified quadratic congruence (MQC) coded OCDMA networks. However, since the performance of such a network is unsatisfactory when the data bit rate is higher, one class of quasi-cyclic low-density parity-check (QC-LDPC) code is adopted to improve that. Simulation results show that the performance of the high-speed WS-EWC coded OCDMA network can be greatly improved by using the LDPC codes.

  19. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
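
    The full semi-supervised objective (joint codebook, class labels and classifier) is not reproduced here. As a minimal sketch of the basic building block, the snippet below computes the sparse code of a single sample against a fixed dictionary by iterative soft thresholding (ISTA) on the l1-regularized reconstruction error; the dictionary, sample and regularization weight are illustrative.

      import numpy as np

      def soft_threshold(v, t):
          """Elementwise soft-thresholding operator."""
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def sparse_code(x, D, lam=0.1, n_iter=200):
          """ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1 with a fixed dictionary D."""
          step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1/L with L the gradient Lipschitz constant
          a = np.zeros(D.shape[1])
          for _ in range(n_iter):
              grad = D.T @ (D @ a - x)                 # gradient of the quadratic term
              a = soft_threshold(a - step * grad, step * lam)
          return a

      rng = np.random.default_rng(0)
      D = rng.standard_normal((20, 50))
      D /= np.linalg.norm(D, axis=0)                   # unit-norm codewords
      x = 1.5 * D[:, 3] - 0.8 * D[:, 17]               # sample built from two codewords
      print("non-zero coefficients:", np.nonzero(np.abs(sparse_code(x, D)) > 1e-3)[0])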

  20. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  1. Blind Signal Classification via Sparse Coding

    Science.gov (United States)

    2016-04-10

    Blind Signal Classification via Sparse Coding. Youngjune Gwon, MIT Lincoln Laboratory; Siamak Dastangoo, MIT Lincoln Laboratory. ... achieve blind signal classification with no prior knowledge about signals (e.g., MCS, pulse shaping) in an arbitrary RF channel. Since modulated RF ... classification method. Our results indicate that we can separate different classes of digitally modulated signals from blind sampling with 70.3% recall and 24.6

  2. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    Science.gov (United States)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for coded cooperation system with joint iterative decoding in the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles including both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation well combines cooperation gain and channel coding gain, and outperforms the coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.

  3. Bad-good constraints on a polarity correspondence account for the spatial-numerical association of response codes (SNARC) and markedness association of response codes (MARC) effects.

    Science.gov (United States)

    Leth-Steensen, Craig; Citta, Richie

    2016-01-01

    Performance in numerical classification tasks involving either parity or magnitude judgements is quicker when small numbers are mapped onto a left-sided response and large numbers onto a right-sided response than for the opposite mapping (i.e., the spatial-numerical association of response codes or SNARC effect). Recent research by Gevers et al. [Gevers, W., Santens, S., Dhooge, E., Chen, Q., Van den Bossche, L., Fias, W., & Verguts, T. (2010). Verbal-spatial and visuospatial coding of number-space interactions. Journal of Experimental Psychology: General, 139, 180-190] suggests that this effect also arises for vocal "left" and "right" responding, indicating that verbal-spatial coding has a role to play in determining it. Another presumably verbal-based, spatial-numerical mapping phenomenon is the linguistic markedness association of response codes (MARC) effect whereby responding in parity tasks is quicker when odd numbers are mapped onto left-sided responses and even numbers onto right-sided responses. A recent account of both the SNARC and MARC effects is based on the polarity correspondence principle [Proctor, R. W., & Cho, Y. S. (2006). Polarity correspondence: A general principle for performance of speeded binary classification tasks. Psychological Bulletin, 132, 416-442]. This account assumes that stimulus and response alternatives are coded along any number of dimensions in terms of - and + polarities with quicker responding when the polarity codes for the stimulus and the response correspond. In the present study, even-odd parity judgements were made using either "left" and "right" or "bad" and "good" vocal responses. Results indicated that a SNARC effect was indeed present for the former type of vocal responding, providing further evidence for the sufficiency of the verbal-spatial coding account for this effect. However, the decided lack of an analogous SNARC-like effect in the results for the latter type of vocal responding provides an important

  4. Codes of conduct in public schools: a legal perspective

    African Journals Online (AJOL)

    Erna Kinsey

    …cation change in South Africa, particularly the transformation of public schools … been granted legal personality to act as "juristic persons" (i.e. legal persons … cess, a decision is made to amend, or repeal, the code of conduct, depending on …

  5. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jin; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2004-07-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims in MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, MARS code has been coupled with a number of other specialized codes such as CONTEMPT for containment analysis and MASTER for 3-dimensional kinetics. In this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With SCDAP, the MARS code system has now acquired the capability to simulate such severe accident related phenomena as cladding oxidation and the melting and slumping of fuel and reactor structures.

  6. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    International Nuclear Information System (INIS)

    Lee, Young Jin; Chung, Bub Dong

    2004-01-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims in MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, MARS code has been coupled with a number of other specialized codes such as CONTEMPT for containment analysis and MASTER for 3-dimensional kinetics. In this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With SCDAP, the MARS code system has now acquired the capability to simulate such severe accident related phenomena as cladding oxidation and the melting and slumping of fuel and reactor structures.

  7. Review of the margins for ASME code fatigue design curve - effects of surface roughness and material variability

    International Nuclear Information System (INIS)

    Chopra, O. K.; Shack, W. J.

    2003-01-01

    The ASME Boiler and Pressure Vessel Code provides rules for the construction of nuclear power plant components. The Code specifies fatigue design curves for structural materials. However, the effects of light water reactor (LWR) coolant environments are not explicitly addressed by the Code design curves. Existing fatigue strain-vs.-life (ε-N) data illustrate potentially significant effects of LWR coolant environments on the fatigue resistance of pressure vessel and piping steels. This report provides an overview of the existing fatigue ε-N data for carbon and low-alloy steels and wrought and cast austenitic SSs to define the effects of key material, loading, and environmental parameters on the fatigue lives of the steels. Experimental data are presented on the effects of surface roughness on the fatigue life of these steels in air and LWR environments. Statistical models are presented for estimating the fatigue ε-N curves as a function of the material, loading, and environmental parameters. Two methods for incorporating environmental effects into the ASME Code fatigue evaluations are discussed. Data available in the literature have been reviewed to evaluate the conservatism in the existing ASME Code fatigue evaluations. A critical review of the margins for ASME Code fatigue design curves is presented.

  8. Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey.

    Science.gov (United States)

    Uzun, Vassilya; Bilgin, Sami

    2016-01-01

    For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must always possess the QR Code Identity bracelets within hospital grounds. These QR code bracelets link to the QR Code Identity website, where detailed information is stored; a smartphone or standalone QR code scanner can be used to scan the code. The design of this system allows authorized personnel (e.g., paramedics, firefighters, or police) to access more detailed patient information than the average smartphone user: emergency service professionals are authorized to access patient medical histories to improve the accuracy of medical treatment. In Istanbul, we tested the self-designed system with 174 participants. To analyze the QR Code Identity Tag system's usability, the participants completed the System Usability Scale questionnaire after using the system.
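
    As an illustration of the kind of tag the abstract describes, the sketch below generates a QR code encoding a hypothetical per-patient URL with the third-party Python package qrcode; the URL scheme, patient identifier and file name are assumptions for illustration, not details of the Turkish system described above:

        import qrcode  # third-party package: pip install qrcode[pil]

        def make_identity_tag(patient_id: str, base_url: str = "https://example.org/qr-id/") -> None:
            """Render a high-error-correction QR code pointing at a hypothetical patient record page."""
            qr = qrcode.QRCode(
                error_correction=qrcode.constants.ERROR_CORRECT_H,  # tolerate wear on bracelets/cards
                box_size=10,
                border=4,
            )
            qr.add_data(base_url + patient_id)
            qr.make(fit=True)
            qr.make_image(fill_color="black", back_color="white").save(f"tag_{patient_id}.png")

        make_identity_tag("patient-0001")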

  9. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frame independently, performing relatively simple operations. Therefore......, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application...... of the to-be-decoded frame. Another key element is the Residual estimation, indicating the reliability of the SI, which is used to calculate the parameters of the correlation noise model between SI and original frame. In this thesis new methods for Inter-camera SI generation are analyzed in the Stereo...

  10. Tokamak Simulation Code modeling of NSTX

    International Nuclear Information System (INIS)

    Jardin, S.C.; Kaye, S.; Menard, J.; Kessel, C.; Glasser, A.H.

    2000-01-01

    The Tokamak Simulation Code [TSC] is widely used for the design of new axisymmetric toroidal experiments. In particular, TSC was used extensively in the design of the National Spherical Torus eXperiment [NSTX]. The authors have now benchmarked TSC with initial NSTX results and find excellent agreement for plasma and vessel currents and magnetic flux loops when the experimental coil currents are used in the simulations. TSC has also been coupled with a ballooning stability code and with DCON to provide stability predictions for NSTX operation. TSC has also been used to model initial CHI experiments where a large poloidal voltage is applied to the NSTX vacuum vessel, causing a force-free current to appear in the plasma. This is a phenomenon that is similar to the plasma halo current that sometimes develops during a plasma disruption

  11. Diagnostic Coding for Epilepsy.

    Science.gov (United States)

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  12. Coding of Neuroinfectious Diseases.

    Science.gov (United States)

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  13. Development of the next generation reactor analysis code system, MARBLE

    International Nuclear Information System (INIS)

    Yokoyama, Kenji; Hazama, Taira; Nagaya, Yasunobu; Chiba, Go; Kugo, Teruhiko; Ishikawa, Makoto; Tatsumi, Masahiro; Hirai, Yasushi; Hyoudou, Hideaki; Numata, Kazuyuki; Iwai, Takehiko; Jin, Tomoyuki

    2011-03-01

    A next generation reactor analysis code system, MARBLE, has been developed. MARBLE is a successor of the fast reactor neutronics analysis code systems JOINT-FR and SAGEP-FR (the conventional systems), which were developed for the so-called JUPITER standard analysis methods. MARBLE has analysis capability equivalent to the conventional systems because it can utilize the sub-codes included in them without any change. On the other hand, burnup analysis functionality for power reactors is improved compared with the conventional systems by introducing models for fuel exchange treatment, control rod operation and so on. In addition, MARBLE has newly developed solvers and some new features, such as burnup calculation by the Krylov subspace method and nuclear design accuracy evaluation by the extended bias factor method. In the development of MARBLE, object-oriented technology was adopted to improve software quality in terms of flexibility, expansibility, ease of verification through modularization, and support for co-development. In addition, a software structure called the two-layer system, consisting of a scripting language and a system development language, was applied. As a result, MARBLE is not an independent analysis code system which simply receives input and returns output, but an assembly of components for building an analysis code system (i.e. a framework). Furthermore, MARBLE provides some pre-built analysis code systems, such as the fast reactor neutronics analysis code system SCHEME, which corresponds to the conventional code, and the fast reactor burnup analysis code system ORPHEUS. (author)

  14. Mathematical fundamentals for the noise immunity of the genetic code.

    Science.gov (United States)

    Fimmel, Elena; Strüngmann, Lutz

    2018-02-01

    degeneracy of the genetic code are essential and give evidence of substantial advantages of the natural code over other possible ones. In the present chapter we present a recent approach to explaining the degeneracy of the genetic code by algorithmic methods from bioinformatics, and discuss its biological consequences. Biologists recognised this problem immediately after the detection of the non-overlapping structure of the genetic code, i.e., that coding sequences are to be read in a unique way determined by their reading frame. But how does the reading head of the ribosome recognise an error in the grouping of codons, caused by, e.g., the insertion or deletion of a base, which can be fatal during the translation process and may result in nonfunctional proteins? In this chapter we discuss possible solutions to the frameshift problem with a focus on the theory of so-called circular codes, which were discovered in large gene populations of prokaryotes and eukaryotes in the early 1990s. Circular codes allow the detection of a frameshift of one or two positions, and recently a beautiful theory of such codes has been developed using statistics, group theory and graph theory. Copyright © 2017 Elsevier B.V. All rights reserved.
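
    One way to make the notion of a circular code concrete is the graph criterion used in this line of work: a trinucleotide code is circular exactly when the directed graph obtained by splitting each codon into a prefix and a suffix is acyclic. The sketch below, in Python, applies that criterion to small, hypothetical codon sets; it illustrates the idea and is not code from the chapter:

        def is_circular(codons):
            """Check the graph-theoretic circularity criterion for a set of trinucleotides.

            For each codon b1 b2 b3 add the edges b1 -> b2b3 and b1b2 -> b3;
            the code is circular iff this directed graph has no cycle.
            """
            edges = {}
            for c in codons:
                edges.setdefault(c[0], set()).add(c[1:])   # 1-letter prefix -> 2-letter suffix
                edges.setdefault(c[:2], set()).add(c[2])   # 2-letter prefix -> 1-letter suffix

            WHITE, GREY, BLACK = 0, 1, 2
            colour = {}

            def has_cycle(node):
                colour[node] = GREY
                for nxt in edges.get(node, ()):
                    state = colour.get(nxt, WHITE)
                    if state == GREY or (state == WHITE and has_cycle(nxt)):
                        return True
                colour[node] = BLACK
                return False

            return not any(colour.get(n, WHITE) == WHITE and has_cycle(n) for n in edges)

        # Hypothetical examples: the second set contains two circular permutations of the same codon.
        print(is_circular({"AAC", "GTC", "TAC"}))   # expected: True
        print(is_circular({"ACG", "CGA"}))          # expected: False (A -> CG -> A forms a cycle)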

  15. Code comparison for accelerator design and analysis

    International Nuclear Information System (INIS)

    Parsa, Z.

    1988-01-01

    We present a comparison of results obtained from standard accelerator physics codes used for the design and analysis of synchrotrons and storage rings, with the programs SYNCH, MAD, HARMON, PATRICIA, PATPET, BETA, DIMAD, MARYLIE and RACE-TRACK. In our analysis we have considered 5 lattices of various sizes with large and small angles, including the AGS Booster (10° bend), RHIC (2.24°), SXLS, XLS (XUV ring with 45° bend) and X-RAY rings. The differences in the integration methods used and in the treatment of the fringe fields in these codes could lead to different results. The inclusion of nonlinear (e.g., dipole) terms may be necessary in these calculations, especially for a small ring. 12 refs., 6 figs., 10 tabs

  16. A PC version of the Monte Carlo criticality code OMEGA

    International Nuclear Information System (INIS)

    Seifert, E.

    1996-05-01

    A description of the PC version of the Monte Carlo criticality code OMEGA is given. The report contains a general description of the code together with a detailed input description. Furthermore, some examples are given illustrating the generation of an input file. The main field of application is the calculation of the criticality of arrangements of fissionable material. Geometrically complicated arrangements that often appear inside and outside a reactor, e.g. in a fuel storage or transport container, can be considered essentially without geometrical approximations. For example, the real geometry of assemblies containing hexagonal or square lattice structures can be described in full detail. Moreover, the code can be used for special investigations in the field of reactor physics and neutron transport. Many years of practical experience and comparison with reference cases have shown that the code together with the built-in data libraries gives reliable results. OMEGA is completely independent of other widely used criticality codes (KENO, MCNP, etc.) with regard to both programming and the data base. It is good practice to run difficult criticality safety problems with different independent codes in order to mutually verify the results. In this way, OMEGA can be used as a redundant code within the family of criticality codes. An advantage of OMEGA is the short calculation time: a typical criticality safety application takes only a few minutes on a Pentium PC. Therefore, the influence of parameter variations can easily be investigated by running many variants of a problem. (orig.)
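
    To give a flavour of what a Monte Carlo criticality estimate involves (this is a toy illustration, not OMEGA's algorithm or data), the sketch below follows neutrons in a one-group, infinite homogeneous medium and estimates k-infinity as the average number of fission neutrons produced per source neutron; the cross sections and nu are invented numbers:

        import random

        def k_infinity_estimate(n_histories=100_000, sigma_a=0.12, sigma_s=0.30,
                                sigma_f=0.05, nu=2.43, seed=1):
            """Toy one-group, infinite-medium Monte Carlo estimate of k-infinity.

            Each history is followed collision by collision until absorption;
            an absorption is a fission with probability sigma_f / sigma_a,
            producing nu neutrons on average. Analytically k_inf = nu * sigma_f / sigma_a.
            """
            rng = random.Random(seed)
            sigma_t = sigma_a + sigma_s
            produced = 0.0
            for _ in range(n_histories):
                while True:
                    if rng.random() < sigma_a / sigma_t:      # collision is an absorption
                        if rng.random() < sigma_f / sigma_a:  # absorption is a fission
                            produced += nu
                        break                                  # history ends at absorption
                    # otherwise the neutron scatters and collides again
            return produced / n_histories

        print(k_infinity_estimate())   # Monte Carlo estimate
        print(2.43 * 0.05 / 0.12)      # analytic k_inf for comparison (~1.0125)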

  17. THYDE-NEU: Nuclear reactor system analysis code

    International Nuclear Information System (INIS)

    Asahi, Yoshiro

    2002-03-01

    THYDE-NEU is applicable not only to transient analyses, but also to steady state analyses of nuclear reactor systems (NRSs). In a steady state analysis, the code generates a solution satisfying the transient equations without external disturbances. In a transient analysis, the code calculates temporal NRS behaviors in response to various external disturbances in such a way that the mass and energy of the coolant as well as the number of neutrons are conserved. The first half of the report is the description of the methods and models for use in the THYDE-NEU code, i.e., (1) the thermal-hydraulic network model, (2) the spatial kinetics model, (3) the heat sources in fuel, (4) the heat transfer correlations, (5) the mechanical behavior of clad and fuel, and (6) the steady state adjustment. The second half of the report is the users' manual, containing the following items: (1) the program control, (2) the input requirements, (3) the execution of THYDE-NEU jobs, (4) the output specifications and (5) the sample calculation. (author)

  18. Refactoring test code

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    textabstractTwo key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  19. Gauge color codes

    DEFF Research Database (Denmark)

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...

  20. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...

  1. A Novel Code System for Revealing Sources of Students' Difficulties with Stoichiometry

    Science.gov (United States)

    Gulacar, Ozcan; Overton, Tina L.; Bowman, Charles R.; Fynewever, Herb

    2013-01-01

    A coding scheme is presented and used to evaluate solutions of seventeen students working on twenty five stoichiometry problems in a think-aloud protocol. The stoichiometry problems are evaluated as a series of sub-problems (e.g., empirical formulas, mass percent, or balancing chemical equations), and the coding scheme was used to categorize each…

  2. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN......, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important...... to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  3. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings and, since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
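
    As a minimal, generic example of waveform-type speech coding (not drawn from this article), the following sketch applies standard mu-law companding followed by 8-bit uniform quantisation, the kind of non-uniform quantisation used in classic 64 kbit/s PCM telephony:

        import numpy as np

        MU = 255.0  # mu-law parameter used in North American/Japanese PCM telephony

        def mu_law_encode(x, n_bits=8):
            """Compress samples in [-1, 1] with mu-law, then quantise uniformly to n_bits."""
            compressed = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
            levels = 2 ** n_bits - 1
            return np.round((compressed + 1.0) / 2.0 * levels).astype(int)

        def mu_law_decode(codes, n_bits=8):
            """Invert the quantisation and the mu-law compression."""
            levels = 2 ** n_bits - 1
            compressed = codes / levels * 2.0 - 1.0
            return np.sign(compressed) * ((1.0 + MU) ** np.abs(compressed) - 1.0) / MU

        # A made-up "speech" signal: a decaying 200 Hz tone sampled at 8 kHz.
        t = np.arange(0, 0.02, 1 / 8000.0)
        speech = 0.8 * np.sin(2 * np.pi * 200 * t) * np.exp(-20 * t)
        codes = mu_law_encode(speech)
        reconstructed = mu_law_decode(codes)
        print(np.max(np.abs(speech - reconstructed)))  # small quantisation error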

  4. Study and application of Dot 3.5 computer code in radiation shielding problems

    International Nuclear Information System (INIS)

    Otto, A.C.; Mendonca, A.G.; Maiorino, J.R.

    1983-01-01

    The application of the S_N nuclear transport code DOT 3.5 to radiation shielding problems is reviewed. Aiming to identify the best available options (convergence scheme, calculation mode) of the DOT 3.5 computer code for radiation shielding problems, a standard model from the Argonne Code Center was selected, and several combinations of calculation options were used to evaluate the accuracy of the results and the computational time, in order to select the most efficient option. To illustrate the versatility and efficacy of the code for typical shielding problems, the calculation of neutron streaming along a sodium coolant channel is presented. (E.G.) [pt

  5. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC): gap analysis for high fidelity and performance assessment code development

    International Nuclear Information System (INIS)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-01-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  6. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  7. Tri-code inductance control rod position indicator with several multi-coding-bars

    International Nuclear Information System (INIS)

    Shi Jibin; Jiang Yueyuan; Wang Wenran

    2004-01-01

    A control rod position indicator, named the tri-code inductance control rod position indicator with multiple coding bars, which possesses a simple structure, reliable operation and high precision, has been developed. The detector of the indicator is composed of K coils, a compensatory coil and K coding bars. Each coding bar consists of several sections of strong magnetic cores, several sections of weak magnetic cores and several non-magnetic sections. As the control rod is withdrawn, the coding bars move within the centers of their respective coils, while a constant alternating current passes through the coils and makes them generate alternating inductance voltage signals. The outputs of the coils are picked up and processed, and the tri-codes indicating the rod position are obtained. Moreover, the coding principle of the detector and its related structure are introduced. The analysis shows that the indicator has advantages over the coil-coded rod position indicator, so it can meet the demands of rod position indication in the nuclear heating reactor (NHR). (authors)
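
    Purely as an illustration of how a set of three-level (ternary) coil readings can resolve a rod position, the sketch below classifies each coil signal as non-magnetic, weak-magnetic or strong-magnetic and interprets the K states as a base-3 position index; the thresholds and layout are entirely hypothetical and not taken from the indicator described above:

        # Hypothetical tri-code decoding: each of K coils reports an inductance-derived
        # voltage that is classified into one of three states (0 = non-magnetic core,
        # 1 = weak magnetic core, 2 = strong magnetic core); the K states form a
        # base-3 number that indexes the rod position.

        THRESHOLDS = (1.0, 2.5)  # made-up voltage boundaries between the three states

        def classify(voltage: float) -> int:
            if voltage < THRESHOLDS[0]:
                return 0
            return 1 if voltage < THRESHOLDS[1] else 2

        def rod_position_index(coil_voltages) -> int:
            """Interpret the per-coil states as a base-3 code (most significant coil first)."""
            index = 0
            for v in coil_voltages:
                index = 3 * index + classify(v)
            return index

        # Three coils reading (strong, non-magnetic, weak) -> code 2,0,1 -> position 2*9 + 0*3 + 1 = 19
        print(rod_position_index([3.1, 0.4, 1.7]))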

  8. Fracture flow code

    International Nuclear Information System (INIS)

    Dershowitz, W; Herbert, A.; Long, J.

    1989-03-01

    The hydrology of the SCV site will be modelled utilizing discrete fracture flow models. These models are complex, and cannot be fully verified by comparison to analytical solutions. The best approach for verification of these codes is therefore cross-verification between different codes. This is complicated by the variation in assumptions and solution techniques utilized in different codes. Cross-verification procedures are defined which allow comparison of the codes developed by Harwell Laboratory, Lawrence Berkeley Laboratory, and Golder Associates Inc. Six cross-verification datasets are defined for deterministic and stochastic verification of geometric and flow features of the codes. Additional datasets for verification of transport features will be documented in a future report. (13 figs., 7 tabs., 10 refs.) (authors)

  9. NAGRADATA. Code key. Geology

    International Nuclear Information System (INIS)

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data are retrieved, the translation of stored coded information into plain language is done automatically by computer. Three keys each list the complete set of currently defined codes for the NAGRADATA system, namely codes with appropriate definitions, arranged: 1. according to subject matter (thematically), 2. with the codes listed alphabetically, and 3. with the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval, economies of storage memory requirements, and the standardisation of terminology. The nature of this thesaurus-type 'key to codes' makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes for NAGRADATA must be considered to represent the current state of progress of a living system, and future editions will be issued in a loose-leaf ring-book system which can be updated by an organised (updating) service. (author)

  10. Obituary: Arthur Dodd Code (1923-2009)

    Science.gov (United States)

    Marché, Jordan D., II

    2009-12-01

    future course of stellar astronomy," a prediction strongly borne out in the decades that followed. In 1959, Code founded the Space Astronomy Laboratory (SAL) within the UW Department of Astronomy. Early photometric and spectrographic equipment was test-flown aboard NASA's X-15 rocket plane and Aerobee sounding rockets. Along with other SAL personnel, including Theodore E. Houck, Robert C. Bless, and John F. McNall, Code (as principal investigator) was responsible for the design of the Wisconsin Experiment Package (WEP) as one of two suites of instruments to be flown aboard the Orbiting Astronomical Observatory (OAO), which represented a milestone in the advent of space astronomy. With its seven reflecting telescopes feeding five filter photometers and two scanning spectrometers, WEP permitted the first extended observations in the UV portion of the spectrum. After the complete failure of the OAO-1 spacecraft (launched in 1966), OAO-2 was successfully launched on 7 December 1968 and gathered data on over a thousand celestial objects during the next 50 months, including stars, nebulae, galaxies, planets, and comets. These results appeared in a series of more than 40 research papers, chiefly in the Ap.J., along with the 1972 monograph, The Scientific Results from the Orbiting Astronomical Observatory (OAO-2), edited by Code. Between the OAO launches, other SAL colleagues of Code developed the Wisconsin Automatic Photoelectric Telescope (or APT), the first computer-controlled (or "robotic") telescope. Driven by a PDP-8 mini-computer, it routinely collected atmospheric extinction data. Code was also chosen principal investigator for the Wisconsin Ultraviolet Photo-Polarimeter Experiment (or WUPPE). This used a UV-sensitive polarimeter designed by Kenneth Nordsieck that was flown twice aboard the space shuttles in 1990 and 1995. Among other findings, WUPPE observations demonstrated that interstellar dust does not appreciably change the direction of polarization of starlight

  11. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
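
    The following toy sketch illustrates the general idea of generating data-access code from an abstract model description, in the spirit of the approach summarised above; the model format and the generated class are invented for illustration and are not the real MEMOPS framework or its data model:

        # Hypothetical, minimal illustration of metadata-driven code generation:
        # an abstract attribute description is turned into a Python class with
        # type-checked accessors, so no hand-written parsing/validation code is needed.

        MODEL = {
            "class": "NmrPeak",
            "attributes": {"position": float, "height": float, "assignment": str},
        }

        def generate_class(model):
            lines = [f"class {model['class']}:"]
            attrs = model["attributes"]
            args = ", ".join(f"{name}=None" for name in attrs)
            lines.append(f"    def __init__(self, {args}):")
            for name in attrs:
                lines.append(f"        self.{name} = {name}")
            for name, typ in attrs.items():
                lines += [
                    f"    def set_{name}(self, value):",
                    f"        if not isinstance(value, {typ.__name__}):",
                    f"            raise TypeError('{name} must be {typ.__name__}')",
                    f"        self.{name} = value",
                ]
            return "\n".join(lines)

        source = generate_class(MODEL)
        namespace = {}
        exec(source, namespace)          # compile the generated API
        peak = namespace["NmrPeak"]()
        peak.set_height(1.5)             # type-checked accessor produced from the model
        print(peak.height)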

  12. Validation of the ATHLET-code 2.1A by calculation of the ECTHOR experiment; Validierung des ATHLET-Codes 2.1A anhand des Einzeleffekt-Tests ECTHOR

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Andreas; Sarkadi, Peter; Schaffrath, Andreas [TUEV NORD SysTec GmbH und Co. KG, Hamburg (Germany)

    2010-05-15

    Before a numerical code (e.g. ATHLET) is used for the simulation of physical phenomena that are new or unknown to the code and/or the user, the user ensures the applicability of the code and his own experience in handling it by means of a so-called validation. Parametric studies with the code are executed for that purpose and the results have to be compared with verified experimental data. Corresponding reference values are available in terms of so-called single-effect tests (e.g. ECTHOR). In this work the system code ATHLET Mod. 2.1 Cycle A is validated by post-test calculation of the ECTHOR experiment with respect to the above-named aspects. With the ECTHOR tests the clearing of a water-filled model of a loop seal by means of an air stream was investigated, including momentum exchange at the phase interface under adiabatic and atmospheric conditions. The post-test calculations show that the analytical results meet the experimental data within the reproducibility of the experiments. Further findings of the parametric studies are: - The experimental results obtained with the water-air system (ECTHOR) can be assigned to a water-steam system if the densities of the phases are equal in both cases. - The initial water level in the loop seal has no influence on the results as long as the gas mass flow is increased moderately. - The loop seal is appropriately nodalized if the mean length of the control volumes corresponds to approximately 1.5 times the hydraulic pipe diameter. (orig.)

  13. Validation of the ATHLET-code 2.1A by calculation of the ECTHOR experiment; Validierung des ATHLET-Codes 2.1A anhand des Einzeleffekt-Tests ECTHOR

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Andreas; Sarkadi, Peter; Schaffrath, Andreas [TUEV NORD SysTec GmbH und Co. KG, Hamburg (Germany)

    2010-06-15

    Before a numerical code (e.g. ATHLET) is used for the simulation of physical phenomena that are new or unknown to the code and/or the user, the user ensures the applicability of the code and his own experience in handling it by means of a so-called validation. Parametric studies with the code are executed for that purpose and the results have to be compared with verified experimental data. Corresponding reference values are available in terms of so-called single-effect tests (e.g. ECTHOR). In this work the system code ATHLET Mod. 2.1 Cycle A is validated by post-test calculation of the ECTHOR experiment with respect to the above-named aspects. With the ECTHOR tests the clearing of a water-filled model of a loop seal by means of an air stream was investigated, including momentum exchange at the phase interface under adiabatic and atmospheric conditions. The post-test calculations show that the analytical results meet the experimental data within the reproducibility of the experiments. Further findings of the parametric studies are: - The experimental results obtained with the water-air system (ECTHOR) can be assigned to a water-steam system if the densities of the phases are equal in both cases. - The initial water level in the loop seal has no influence on the results as long as the gas mass flow is increased moderately. - The loop seal is appropriately nodalized if the mean length of the control volumes corresponds to approximately 1.5 times the hydraulic pipe diameter. (orig.)

  14. RFQ simulation code

    International Nuclear Information System (INIS)

    Lysenko, W.P.

    1984-04-01

    We have developed the RFQLIB simulation system to provide a means to systematically generate the new versions of radio-frequency quadrupole (RFQ) linac simulation codes that are required by the constantly changing needs of a research environment. This integrated system simplifies keeping track of the various versions of the simulation code and makes it practical to maintain complete and up-to-date documentation. In this scheme, there is a certain standard version of the simulation code that forms a library upon which new versions are built. To generate a new version of the simulation code, the routines to be modified or added are appended to a standard command file, which contains the commands to compile the new routines and link them to the routines in the library. The library itself is rarely changed. Whenever the library is modified, however, this modification is seen by all versions of the simulation code, which actually exist as different versions of the command file. All code is written according to the rules of structured programming. Modularity is enforced by not using COMMON statements, simplifying the relation of the data flow to a hierarchy diagram. Simulation results are similar to those of the PARMTEQ code, as expected, because of the similar physical model. Different capabilities, such as those for generating beams matched in detail to the structure, are available in the new code for help in testing new ideas in designing RFQ linacs

  15. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a

  16. An upper bound for codes for the noisy two-access binary adder channel

    NARCIS (Netherlands)

    Tilborg, van H.C.A.

    1986-01-01

    Using earlier methods, a combinatorial upper bound is derived for $|C| \cdot |D|$, where $(C, D)$ is a $\delta$-decodable code pair for the noisy two-access binary adder channel. Asymptotically, this bound reduces to $R_1 = R_2 \leq \frac{3}{2} + e \log_2 e - \left(\frac{1}{2} + e\right) \log_2 (1 + 2e) = \frac{1}{2} - e + \dots$

  17. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs $(m_k, t_k)$, where $m_k$ is a message generated by the source and $t_k$ is a time instant

  18. Vectorization, parallelization and porting of nuclear codes. Vectorization and parallelization. Progress report fiscal 1999

    Energy Technology Data Exchange (ETDEWEB)

    Adachi, Masaaki; Ogasawara, Shinobu; Kume, Etsuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishizuki, Shigeru; Nemoto, Toshiyuki; Kawasaki, Nobuo; Kawai, Wataru [Fujitsu Ltd., Tokyo (Japan); Yatake, Yo-ichi [Hitachi Ltd., Tokyo (Japan)

    2001-02-01

    Several computer codes in the nuclear field have been vectorized, parallelized and ported to the FUJITSU VPP500 system, the AP3000 system, the SX-4 system and the Paragon system at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. We dealt with 18 codes in fiscal 1999. These results are reported in 3 parts, i.e., the vectorization and parallelization part on vector processors, the parallelization part on scalar processors and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this part, the vectorization of the Relativistic Molecular Orbital Calculation code RSCAT, the microscopic transport code for high-energy nuclear collisions JAM, the three-dimensional non-steady thermal-fluid analysis code STREAM, the Relativistic Density Functional Theory code RDFT and the High Speed Three-Dimensional Nodal Diffusion code MOSRA-Light on the VPP500 system and the SX-4 system is described. (author)

  19. E-MONEY BANYUWANGI TOURISM : QR CODE SEBAGAI ALAT TRANSAKSI DI WISATA PULAU MERAH

    Directory of Open Access Journals (Sweden)

    Yashinta Setyowati

    2017-11-01

    Full Text Available This study aims to design a sales system that uses QR codes as a means of transaction at the Red Island tourist site. One of the tourist attractions in Banyuwangi often visited by foreign and domestic tourists is Red Island. The advantages of Red Island are waves that are always stable for surfing, a clean and pleasant environment, and amazing scenery. However, the large number of visitors is not matched by the number of sellers and service providers, and low educational backgrounds make it difficult for local residents to communicate, especially with foreign tourists. This research uses qualitative exploratory and action research methods conducted on Pulau Merah. Data collection was done through interviews and in-depth observation. Data analysis was performed to derive the requirements for the system to be designed. The result of this research is the design of a sales accounting information system that uses QR codes as a transaction tool. The contribution of this research is to increase the income of the people around Red Island, as well as of the local government through tax revenue, and to improve the competitiveness of Red Island as a technology-enabled natural tourist area.

  20. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged.

  1. Comparison of accelerator codes for a RHIC [Relativistic Heavy Ion Collider] lattice

    International Nuclear Information System (INIS)

    Milutinovic, J.; Ruggiero, A.G.

    1989-01-01

    We present the results of a comparison of the performance of several tracking and/or analysis codes. The basic purpose of this program was to assess the reliability and accuracy of these codes, i.e., at a minimum to determine the so-called "error bars" for the predicted values of the tunes and other lattice functions and, if possible, to discover potential difficulties with the underlying physical models in these codes, inadequate algorithms, residual bugs and the like. Not only have we been able to determine the error bars, which for instance for the tunes at dp/p = +1% are Δν_ξ = 0.0027 and Δν_y = 0.0010, but our program has also brought about improvements to several codes. 8 refs., 3 figs., 2 tabs

  2. Teaching Billing and Coding to Medical Students: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Jiaxin Tran

    2013-08-01

    Full Text Available Complex billing practices cost the US healthcare system billions of dollars annually. Coding for outpatient office visits [known as Evaluation & Management (E&M) services] is particularly fraught with errors. The best way to ensure proper billing and coding by practicing physicians is to teach this as part of the medical school curriculum. Here, in a pilot study, we show that medical students can learn the basic principles well from lectures. This approach is easy to implement in a medical school curriculum.

  3. [INVITED] Luminescent QR codes for smart labelling and sensing

    Science.gov (United States)

    Ramalho, João F. C. B.; António, L. C. F.; Correia, S. F. H.; Fu, L. S.; Pinho, A. S.; Brites, C. D. S.; Carlos, L. D.; André, P. S.; Ferreira, R. A. S.

    2018-05-01

    QR (Quick Response) codes are two-dimensional barcodes composed of special geometric patterns of black modules in a white square background that can encode different types of information with high density and robustness, correct errors and physical damages, thus keeping the stored information protected. Recently, these codes have gained increased attention as they offer a simple physical tool for quick access to Web sites for advertising and social interaction. Challenges encompass the increase of the storage capacity limit, even though they can store approximately 350 times more information than common barcodes, and encode different types of characters (e.g., numeric, alphanumeric, kanji and kana). In this work, we fabricate luminescent QR codes based on a poly(methyl methacrylate) substrate coated with organic-inorganic hybrid materials doped with trivalent terbium (Tb3+) and europium (Eu3+) ions, demonstrating the increase of storage capacity per unit area by a factor of two by using the colour multiplexing, when compared to conventional QR codes. A novel methodology to decode the multiplexed QR codes is developed based on a colour separation threshold where a decision level is calculated through a maximum-likelihood criteria to minimize the error probability of the demultiplexed modules, maximizing the foreseen total storage capacity. Moreover, the thermal dependence of the emission colour coordinates of the Eu3+/Tb3+-based hybrids enables the simultaneously QR code colour-multiplexing and may be used to sense temperature (reproducibility higher than 93%), opening new fields of applications for QR codes as smart labels for sensing.
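
    To illustrate the colour-demultiplexing idea (two QR codes stored in one image via two emission colours), the sketch below separates a synthetic red/green image into two binary module matrices with a simple per-channel threshold; the fixed threshold stands in for the maximum-likelihood decision level described in the paper, and all values are invented:

        import numpy as np

        def demultiplex_colour_qr(rgb, threshold=0.5):
            """Split a colour-multiplexed QR image into two binary module matrices.

            A module is 'on' in code 1 if its red channel exceeds the threshold,
            and 'on' in code 2 if its green channel does; a real system would pick
            the threshold by a maximum-likelihood criterion rather than a constant.
            """
            code_red = (rgb[..., 0] > threshold).astype(int)
            code_green = (rgb[..., 1] > threshold).astype(int)
            return code_red, code_green

        # Synthetic 4x4-module example: each module carries one bit per colour channel.
        rng = np.random.default_rng(0)
        bits_red = rng.integers(0, 2, size=(4, 4))
        bits_green = rng.integers(0, 2, size=(4, 4))
        image = np.stack([bits_red, bits_green, np.zeros_like(bits_red)], axis=-1).astype(float)

        recovered_red, recovered_green = demultiplex_colour_qr(image)
        assert (recovered_red == bits_red).all() and (recovered_green == bits_green).all()
        print("two codes recovered from one image")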

  4. Development of a general coupling interface for the fuel performance code transuranus tested with the reactor dynamic code DYN3D

    International Nuclear Information System (INIS)

    Holt, L.; Rohde, U.; Seidl, M.; Schubert, A.; Van Uffelen, P.

    2013-01-01

    Several institutions plan to couple the fuel performance code TRANSURANUS developed by the European Institute for Transuranium Elements with their own codes. One of these codes is the reactor dynamic code DYN3D maintained by the Helmholtz-Zentrum Dresden - Rossendorf. DYN3D was developed originally for VVER type reactors and was extended later to western type reactors. Usually, the fuel rod behavior is modeled in thermal hydraulics and neutronic codes in a simplified manner. The main idea of this coupling is to describe the fuel rod behavior in the frame of core safety analysis in a more detailed way, e.g. including the influence of the high burn-up structure, geometry changes and fission gas release. It allows one to benefit from the improvements in computational power and software achieved over the last two decades. The coupling interface was developed in a general way from the beginning. Hence it can also easily be used by other codes for a coupling with TRANSURANUS. The user can choose between a one-way and a two-way online coupling option. For a one-way online coupling, DYN3D provides only the time-dependent rod power and thermal hydraulics conditions to TRANSURANUS, and the fuel performance code does not transfer any variable back to DYN3D. In a two-way online coupling, TRANSURANUS in addition transfers parameters like the fuel temperature and cladding temperature back to DYN3D. This list of variables can easily be extended with geometric and further variables of interest. First results of the code system DYN3D-TRANSURANUS will be presented for a control rod ejection transient in a modern western type reactor. Pre-analyses already show that detailed fuel rod behavior modeling will influence the thermal hydraulics and hence also the neutronics, due to the Doppler reactivity effect of the fuel temperature. The coupled code system therefore has the potential to improve the assessment of safety criteria. The developed code system DYN3D-TRANSURANUS can be used also
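
    The sketch below mimics the two-way data exchange described above, purely as an illustration of the interface structure: in each time step the core simulator hands rod power and coolant conditions to a fuel-performance model, which returns fuel and cladding temperatures. All function names and the toy physics are hypothetical; this is not the actual DYN3D or TRANSURANUS API:

        from dataclasses import dataclass

        @dataclass
        class RodBoundaryConditions:      # sent from the core simulator to the fuel code
            linear_power_w_per_m: float
            coolant_temperature_k: float

        @dataclass
        class FuelFeedback:               # sent back from the fuel code (two-way coupling)
            fuel_temperature_k: float
            clad_temperature_k: float

        def toy_fuel_performance_step(bc: RodBoundaryConditions) -> FuelFeedback:
            """Placeholder for the fuel-performance solver: crude temperature rise per watt."""
            clad = bc.coolant_temperature_k + 0.002 * bc.linear_power_w_per_m
            fuel = clad + 0.03 * bc.linear_power_w_per_m
            return FuelFeedback(fuel_temperature_k=fuel, clad_temperature_k=clad)

        def toy_core_step(fuel_temperature_k: float) -> RodBoundaryConditions:
            """Placeholder for the core simulator: Doppler feedback lowers power as fuel heats up."""
            power = 20000.0 * (1.0 - 2.0e-5 * (fuel_temperature_k - 900.0))
            return RodBoundaryConditions(linear_power_w_per_m=power, coolant_temperature_k=560.0)

        feedback = FuelFeedback(fuel_temperature_k=900.0, clad_temperature_k=600.0)
        for step in range(5):                       # one exchange per time step
            bc = toy_core_step(feedback.fuel_temperature_k)
            feedback = toy_fuel_performance_step(bc)
            print(step, round(bc.linear_power_w_per_m), round(feedback.fuel_temperature_k, 1))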

  5. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function...... for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding...... strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function....
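
    For context, the sketch below implements the classical Blahut-Arimoto iteration for an ordinary rate-distortion function (no actions and no side information), i.e. the building block that the algorithm described above extends; the binary source, Hamming distortion and Lagrange parameter are illustrative choices:

        import numpy as np

        def blahut_arimoto_rd(p_x, distortion, s, n_iter=200):
            """Classical Blahut-Arimoto for R(D) at Lagrange parameter s > 0.

            p_x        : source distribution, shape (nx,)
            distortion : distortion matrix d(x, xhat), shape (nx, nxhat)
            Returns (rate_bits, expected_distortion).
            """
            nx, nxhat = distortion.shape
            q = np.full(nxhat, 1.0 / nxhat)                  # reproduction marginal q(xhat)
            for _ in range(n_iter):
                w = q * np.exp(-s * distortion)              # unnormalised q(xhat|x)
                cond = w / w.sum(axis=1, keepdims=True)      # q(xhat|x)
                q = p_x @ cond                               # updated marginal
            rate = np.sum(p_x[:, None] * cond * np.log2(cond / q))
            dist = np.sum(p_x[:, None] * cond * distortion)
            return rate, dist

        # Bernoulli(0.5) source with Hamming distortion: R(D) = 1 - h(D) in bits.
        p_x = np.array([0.5, 0.5])
        d = np.array([[0.0, 1.0], [1.0, 0.0]])
        rate, dist = blahut_arimoto_rd(p_x, d, s=2.0)
        print(round(rate, 3), round(dist, 3))  # should lie on the R(D) = 1 - h(D) curve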

  6. EG and G and NASA face seal codes comparison

    Science.gov (United States)

    Basu, Prit

    1994-01-01

    This viewgraph presentation presents the following results for the example comparison: EG&G code with face deformations suppressed and SPIRALG agree well with each other as well as with the experimental data; 0 rpm stiffness data calculated by EG&G code are about 70-100 percent lower than that by SPIRALG; there is no appreciable difference between 0 rpm and 16,000 rpm stiffness and damping coefficients calculated by SPIRALG; and the film damping above 500 psig calculated by SPIRALG is much higher than the O-Ring secondary seal damping (e.g. 50 lbf.s/in).

  7. SAFIRE: A systems analysis code for ICF [inertial confinement fusion] reactor economics

    International Nuclear Information System (INIS)

    McCarville, T.J.; Meier, W.R.; Carson, C.F.; Glasgow, B.B.

    1987-01-01

    The SAFIRE (Systems Analysis for ICF Reactor Economics) code incorporates analytical models for scaling the cost and performance of several inertial confinement fusion reactor concepts for electric power. The code allows us to vary design parameters (e.g., driver energy, chamber pulse rate, net electric power) and evaluate the resulting change in capital cost of power plant and the busbar cost of electricity. The SAFIRE code can be used to identify the most attractive operating space and to identify those design parameters with the greatest leverage for improving the economics of inertial confinement fusion electric power plants

  8. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding while enabling robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient...

  9. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project1. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, Københavns Kommune, Vejle Kommune, Styrelsen for IT- og Læring (STIL) and the volunteer association...... Coding Pirates2. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator for the research and development environment Digitalisering i Skolen (DiS), from Institut for Skole og Læring at Professionshøjskolen Metropol; and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design......, design thinking and design pedagogy, from Forskningslab: It og Læringsdesign (ILD-LAB) at Institut for kommunikation og psykologi, Aalborg Universitet in Copenhagen. We followed and carried out the evaluation and documentation of the Coding Class project in the period November 2016 to May 2017...

  10. Verification test calculations for the Source Term Code Package

    International Nuclear Information System (INIS)

    Denning, R.S.; Wooton, R.O.; Alexander, C.A.; Curtis, L.A.; Cybulskis, P.; Gieseke, J.A.; Jordan, H.; Lee, K.W.; Nicolosi, S.L.

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs

  11. First steps towards a validation of the new burnup and depletion code TNT

    Energy Technology Data Exchange (ETDEWEB)

    Herber, S.C.; Allelein, H.J. [RWTH Aachen (Germany). Inst. for Reactor Safety and Reactor Technology; Research Center Juelich (Germany). Inst. for Energy and Climate Research - Nuclear Waste Disposal and Reactor Safety (IEK-6); Friege, N. [RWTH Aachen (Germany). Inst. for Reactor Safety and Reactor Technology; Kasselmann, S. [Research Center Juelich (Germany). Inst. for Energy and Climate Research - Nuclear Waste Disposal and Reactor Safety (IEK-6)

    2012-11-01

    In the frame of merging the core design calculation capabilities, represented by V.S.O.P., with the accident calculation capabilities, represented by MGT(-3D), the successor of the TINTE code, difficulties were observed in defining an interface between a program backbone and the ORIGEN code or the ORIGENJUEL code, respectively. An estimate of the effort of refactoring the ORIGEN code or writing a new burnup code from scratch led to the decision that it would be more efficient to write a new code, which could benefit from existing programming and software engineering tools on the computer code side and which can use the latest knowledge of nuclear reactions, e.g. consider all documented reaction channels. Therefore a new code with an object-oriented approach was developed at IEK-6. Object-oriented programming is currently state of the art and generally provides improved extensibility and maintainability. The new code was named TNT, which stands for Topological Nuclide Transformation, since the code makes use of the real topology of the nuclear reactions. Here we present some first validation results from code-to-code benchmarks with the codes ORIGEN V2.2 and FISPACT2005; whenever possible, analytical results are also used for the comparison. The 2 reference codes were chosen due to their high reputation in the fields of fission reactor analysis (ORIGEN) and fusion facilities (FISPACT). (orig.)

  12. KENO-V code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16-group Hansen-Roach cross sections and P1 scattering, was one of the first multigroup Monte Carlo codes, and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218-group set is distributed with the code) and has a general P_N scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system k_eff, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes
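
    As a rough illustration of what a Monte Carlo criticality estimate means, the following one-group, infinite-medium sketch scores k as the ratio of fission neutrons produced to source neutrons followed. It deliberately omits everything that makes KENO-V useful (geometry, multigroup data, the supergroup feature); the cross sections and nu below are assumed values.

```python
import random

# One-group, infinite-medium toy problem (all values are assumed for illustration).
SIGMA_A = 0.12      # macroscopic absorption cross section (1/cm)
SIGMA_F = 0.05      # macroscopic fission cross section (1/cm), part of absorption
NU = 2.43           # mean neutrons released per fission

def keff_estimate(histories=100_000, seed=1):
    """Estimate k as (new fission neutrons produced) / (source neutrons followed)."""
    rng = random.Random(seed)
    produced = 0.0
    for _ in range(histories):
        # In an infinite medium every source neutron is eventually absorbed;
        # with probability SIGMA_F / SIGMA_A that absorption is a fission.
        if rng.random() < SIGMA_F / SIGMA_A:
            produced += NU
    return produced / histories

print(f"k_inf estimate: {keff_estimate():.4f}")   # analytic value: NU*SIGMA_F/SIGMA_A
```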

  13. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  14. Manually operated coded switch

    International Nuclear Information System (INIS)

    Barnette, J.H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made

  15. Review of SKB's Code Documentation and Testing

    International Nuclear Information System (INIS)

    Hicks, T.W.

    2005-01-01

    safety assessment. The projects studied require that software is managed under a rigorous graded approach based on a software life-cycle methodology, with documentation requirements that include user's manuals and verification and validation documents. These requirements also include procedures for the use of external codes. Under the graded approach, reduced versions of the software life-cycle are adopted for simple codes, such as those that can be independently verified by inspection or hand calculation. SKB should provide details of its software QA procedures covering different categories of software (e.g., internal, commercial, academic, and simple codes). In order to gain greater understanding and confidence in, and become more familiar with SKB's codes, SKI could consider testing some of SKB's codes against its own codes. This would also serve as a useful background to any future sensitivity analyses that SKI might conduct with these codes. Further, SKI could review its own software QA procedures and the required extent of documentation and testing of its own codes

  16. Development of Probabilistic Internal Dosimetry Computer Code

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Siwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kwon, Tae-Eun [Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of); Lee, Jai-Ki [Korean Association for Radiation Protection, Seoul (Korea, Republic of)

    2017-02-15

    Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated in the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was established. Based on the developed system, we developed a probabilistic internal-dose-assessment code by using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g. the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various

  17. Development of Probabilistic Internal Dosimetry Computer Code

    International Nuclear Information System (INIS)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki

    2017-01-01

    Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated in the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was established. Based on the developed system, we developed a probabilistic internal-dose-assessment code by using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g. the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations. In cases
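
    The general idea described in the two records above, propagating measurement and dose-coefficient uncertainties by Monte Carlo and reporting percentiles of the committed dose, can be sketched in a few lines. This is not the MATLAB code developed in the study; the lognormal distributions, their parameters and the measurement-per-unit-intake factor below are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000   # number of Monte Carlo trials

# Assumed (illustrative) uncertainty model:
#  - bioassay measurement: lognormal around 500 Bq with GSD 1.3
#  - measurement-per-unit-intake factor m(t): lognormal around 0.02 with GSD 1.5
#  - dose coefficient e(50): lognormal around 2.0e-8 Sv/Bq with GSD 2.0
measurement = rng.lognormal(np.log(500.0), np.log(1.3), n)
m_t         = rng.lognormal(np.log(0.02),  np.log(1.5), n)
dose_coeff  = rng.lognormal(np.log(2.0e-8), np.log(2.0), n)

intake = measurement / m_t            # estimated intake, Bq
dose = intake * dose_coeff            # committed effective dose, Sv

for p in (2.5, 5, 50, 95, 97.5):
    print(f"{p:>5}th percentile: {np.percentile(dose, p):.2e} Sv")
```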

  18. How to Crack the Sugar Code.

    Science.gov (United States)

    Gabius, H-J

    2017-01-01

    The known ubiquitous presence of glycans fulfils an essential prerequisite for fundamental roles in cell sociology. Since carbohydrates are chemically predestined to form biochemical messages of a maximum of structural diversity in a minimum of space, coding of biological information by sugars is the reason for the broad occurrence of cellular glycoconjugates. Their glycans originate from sophisticated enzymatic assembly and dynamically adaptable remodelling. These signals are read and translated into effects by receptors (lectins). The functional pairing between lectins and their counterreceptor(s) is highly specific, often orchestrated by intimate co-regulation of the receptor, the cognate glycan and the bioactive scaffold (e.g., an integrin). Bottom-up approaches, teaming up synthetic and supramolecular chemistry to prepare fully programmable nanoparticles as binding partners with systematic network analysis of lectins and rational design of variants, enable us to delineate the rules of the sugar code.

  19. a model for quantity estimation for multi-coded team events

    African Journals Online (AJOL)

    Participation in multi-coded sports events often involves travel to international ... Medication use by Team south africa during the XXVIIIth olympiad: a model .... individual sports included in the programme (e.g. athletes involved in contact sports ...

  20. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    Science.gov (United States)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
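
    The unit-memory convolutional and Reed-Solomon codes of the paper are too involved to reproduce here, but the concatenation principle itself, an inner code that cleans up channel errors and an outer code that catches what slips through, can be shown with deliberately simple stand-ins: a 3-fold repetition inner code and a single parity-check outer code, both chosen only for illustration.

```python
import random

def outer_encode(bits):
    """Single parity-check outer code: append one even-parity bit."""
    return bits + [sum(bits) % 2]

def inner_encode(bits):
    """3-fold repetition inner code applied to each outer-coded bit."""
    return [b for b in bits for _ in range(3)]

def inner_decode(chips):
    """Majority vote over each group of three received chips."""
    return [int(sum(chips[i:i + 3]) >= 2) for i in range(0, len(chips), 3)]

def outer_check(bits):
    """Outer decoder: report whether the parity check is satisfied."""
    return sum(bits) % 2 == 0

rng = random.Random(0)
message = [1, 0, 1, 1]
tx = inner_encode(outer_encode(message))

# Binary symmetric channel with an assumed 5% chip-flip probability.
rx = [b ^ (rng.random() < 0.05) for b in tx]

decoded = inner_decode(rx)
print("decoded message:", decoded[:-1], "parity OK:", outer_check(decoded))
```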

  1. Benchmark studies of BOUT++ code and TPSMBI code on neutral transport during SMBI

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y.H. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); University of Science and Technology of China, Hefei 230026 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Z.H., E-mail: zhwang@swip.ac.cn [Southwestern Institute of Physics, Chengdu 610041 (China); Guo, W., E-mail: wfguo@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China); Ren, Q.L. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Sun, A.P.; Xu, M.; Wang, A.K. [Southwestern Institute of Physics, Chengdu 610041 (China); Xiang, N. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China)

    2017-06-09

    SMBI (supersonic molecule beam injection) plays an important role in tokamak plasma fuelling, density control and ELM mitigation in magnetically confined plasmas, and has been widely used in many tokamaks. The trans-neut module of the BOUT++ code is the only large-scale parallel 3D fluid code used to simulate the SMBI fuelling process, while the TPSMBI (transport of supersonic molecule beam injection) code is a recently developed 1D fluid code for SMBI. In order to find a way to increase SMBI fuelling efficiency in H-mode plasmas, especially for ITER, it is important to verify the codes first. A benchmark study between the trans-neut module of the BOUT++ code and the TPSMBI code on the radial transport dynamics of neutrals during SMBI has been successfully completed for the first time, in both slab and cylindrical coordinates. The simulation results from the two codes agree very well with each other. Different upwind schemes have been compared for handling the sharp gradient front region during the inward propagation of SMBI, for the sake of code stability. The influence of the WENO3 (weighted essentially non-oscillatory) and third-order upwind schemes on the benchmark results is also discussed. - Highlights: • A 1D model of SMBI has been developed. • Benchmarks of the BOUT++ and TPSMBI codes have been completed for the first time. • The influence of the WENO3 and third-order upwind schemes on the benchmark results is discussed.
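
    For readers unfamiliar with the upwind differencing mentioned in the benchmark, the sketch below advects a sharp neutral-density front with a first-order upwind scheme on an assumed 1-D grid. It is not the trans-neut or TPSMBI implementation and not WENO3; it only shows how the direction of propagation selects the difference stencil.

```python
import numpy as np

# Assumed 1-D grid and a constant inward (negative-x) propagation speed.
nx, dx, dt = 200, 0.01, 5.0e-4
v = -1.0                           # front propagates toward smaller x
n = np.zeros(nx)
n[-20:] = 1.0                      # sharp neutral-density front near the edge

def upwind_step(n, v, dx, dt):
    """First-order upwind update for dn/dt + v*dn/dx = 0."""
    dn = np.zeros_like(n)
    if v >= 0.0:
        dn[1:] = n[1:] - n[:-1]    # backward difference when the flow is in +x
    else:
        dn[:-1] = n[1:] - n[:-1]   # forward difference when the flow is in -x
    return n - v * dt / dx * dn

for _ in range(500):
    n = upwind_step(n, v, dx, dt)

print("front position (first cell with n > 0.5):", np.argmax(n > 0.5))
```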

  2. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
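
    As a concrete instance of the linear codes the book introduces, the following sketch encodes and syndrome-decodes the binary (7,4) Hamming code with one conventional systematic generator matrix (the book's notation and matrix conventions may differ).

```python
import numpy as np

# Systematic generator and parity-check matrices of the (7,4) Hamming code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):
    return msg @ G % 2

def correct(word):
    """Syndrome decoding: flip the single bit whose H-column matches the syndrome."""
    syndrome = H @ word % 2
    if syndrome.any():
        col = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        word = word.copy()
        word[col] ^= 1
    return word

msg = np.array([1, 0, 1, 1])
codeword = encode(msg)
received = codeword.copy()
received[2] ^= 1                      # inject a single-bit error
print("corrected:", correct(received), "original:", codeword)
```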

  3. Optical coding theory with Prime

    CERN Document Server

    Kwong, Wing C

    2013-01-01

    Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory-until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct

  4. On the Performance of a Multi-Edge Type LDPC Code for Coded Modulation

    NARCIS (Netherlands)

    Cronie, H.S.

    2005-01-01

    We present a method to combine error-correction coding and spectral-efficient modulation for transmission over the Additive White Gaussian Noise (AWGN) channel. The code employs signal shaping which can provide a so-called shaping gain. The code belongs to the family of sparse graph codes for which

  5. Deciphering Neural Codes of Memory during Sleep

    Science.gov (United States)

    Chen, Zhe; Wilson, Matthew A.

    2017-01-01

    Memories of experiences are stored in the cerebral cortex. Sleep is critical for consolidating hippocampal memory of wake experiences into the neocortex. Understanding representations of neural codes of hippocampal-neocortical networks during sleep would reveal important circuit mechanisms on memory consolidation, and provide novel insights into memory and dreams. Although sleep-associated ensemble spike activity has been investigated, identifying the content of memory in sleep remains challenging. Here, we revisit important experimental findings on sleep-associated memory (i.e., neural activity patterns in sleep that reflect memory processing) and review computational approaches for analyzing sleep-associated neural codes (SANC). We focus on two analysis paradigms for sleep-associated memory, and propose a new unsupervised learning framework (“memory first, meaning later”) for unbiased assessment of SANC. PMID:28390699

  6. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  7. Insurance billing and coding.

    Science.gov (United States)

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  8. Multiplicative Structure and Hecke Rings of Generator Matrices for Codes over Quotient Rings of Euclidean Domains

    Directory of Open Access Journals (Sweden)

    Hajime Matsui

    2017-12-01

    Full Text Available In this study, we consider codes over Euclidean domains modulo their ideals. In the first half of the study, we deal with arbitrary Euclidean domains. We show that the product of generator matrices of codes over the rings mod a and mod b produces generator matrices of all codes over the ring mod ab, i.e., this correspondence is onto. Moreover, we show that if a and b are coprime, then this correspondence is one-to-one, i.e., there exist unique codes over the rings mod a and mod b that produce any given code over the ring mod ab through the product of their generator matrices. In the second half of the study, we focus on the typical Euclidean domains such as the rational integer ring, one-variable polynomial rings, rings of Gaussian and Eisenstein integers, p-adic integer rings and rings of one-variable formal power series. We define the reduced generator matrices of codes over Euclidean domains modulo their ideals and show their uniqueness. Finally, we apply our theory of reduced generator matrices to the Hecke rings of matrices over these Euclidean domains.

  9. Suture Coding: A Novel Educational Guide for Suture Patterns.

    Science.gov (United States)

    Gaber, Mohamed; Abdel-Wahed, Ramadan

    2015-01-01

    This study aims to provide a helpful guide to perform tissue suturing successfully using suture coding, a method for identification of suture patterns and techniques by giving full information about the method of application of each pattern using numbers and symbols. Suture coding helps construct an infrastructure for surgical suture science. It facilitates the easy understanding and learning of suturing techniques and patterns as well as detects the relationship between the different patterns. Guide points are fixed on both edges of the wound to act as a guideline to help practice suture pattern techniques. The arrangement is fixed as 1-3-5-7 and a-c-e-g on one side (whether right or left) and as 2-4-6-8 and b-d-f-h on the other side. Needle placement must start from number 1 or letter "a" and continue to follow the code till the end of the stitching. Some rules are created to be adopted for the application of suture coding. A suture trainer containing guide points that simulate the coding process is used to facilitate the learning of the coding method. (120) is the code of the simple interrupted suture pattern; (ab210) is the code of the vertical mattress suture pattern, and (013465)²/3 is the code of the Cushing suture pattern. (0A1) is suggested as a surgical suture language that gives the name and type of the suture pattern used to facilitate its identification. All suture patterns known in the world should start with (0), (A), or (1). There is a relationship between 2 or more surgical patterns according to their codes. It can be concluded that every suture pattern has its own code that helps in the identification of its type, structure, and method of application. The combination of numbers and symbols helps in the easy understanding of suture techniques without complication. There are specific relationships that can be identified between different suture patterns. Coding methods facilitate the suture pattern learning process. The use of suture coding can be a good

  10. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  11. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64-bit Mac, Linux and Windows.

  12. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Among the earliest discovered codes that approach the Shannon limit of the channel were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding.
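
    To make the role of the parity-check matrix concrete, the sketch below enumerates the codewords of a tiny binary code: a word belongs to the code exactly when every parity check is satisfied, i.e. when its syndrome is zero. The matrix is an assumed toy example, far too small to illustrate the "low density" property itself.

```python
import numpy as np
from itertools import product

# A small illustrative parity-check matrix H (assumed for the example).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def is_codeword(c):
    """c is a codeword of the code defined by H iff every parity check is satisfied."""
    return not (H @ c % 2).any()

# Enumerate the code: 3 independent checks on 6 bits give 2^(6-3) = 8 codewords.
codewords = [c for c in product((0, 1), repeat=6) if is_codeword(np.array(c))]
print(len(codewords), "codewords:", codewords)
```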

  13. Modifications to the Monte Carlo neutronics code MONK

    International Nuclear Information System (INIS)

    Hutton, J.L.

    1979-09-01

    The Monte Carlo neutronics code MONK has been widely used for criticality calculations, and is one of the standard methods for assessing the safety of transport flasks and fuel storage facilities in the UK. Recently, attempts have been made to extend the range of applications of this calculational technique. In particular, studies have been carried out using Monte Carlo to analyse reactor physics experiments. In these applications various shortcomings of the standard version MONK5 became apparent. The basic data library was found to be inadequate, and additional estimates of parameters (e.g. power distribution) not normally included in criticality studies were required. These features which required improvement, primarily in the context of using the code for reactor physics calculations, are enumerated. To facilitate the use of the code as a reactor physics calculational tool a series of modifications have been carried out. The code has been modified so that the user can use group data tabulations of the cross sections instead of the present 'point' data values. The code can now interface with a number of reactor physics group data preparation schemes but in particular it can use WIMS-E interfaces as a source of group data. Details of the changes are outlined and a new version of MONK incorporating these modifications has been created. This version is called MONK5W. This paper provides a guide to the use of this version. The data input is described along with other details required to use this code on the Harwell IBM 3033. To aid the user, examples of calculations using the new facilities incorporated in MONK5W are given. (UK)

  14. Code-To-Code Benchmarking Of The Porflow And GoldSim Contaminant Transport Models Using A Simple 1-D Domain - 11191

    International Nuclear Information System (INIS)

    Hiergesell, R.; Taylor, G.

    2010-01-01

    An investigation was conducted to compare and evaluate contaminant transport results of two model codes, GoldSim and Porflow, using a simple 1-D string of elements in each code. Model domains were constructed to be identical with respect to cell numbers and dimensions, matrix material, flow boundary and saturation conditions. One of the codes, GoldSim, does not simulate advective movement of water; therefore the water flux term was specified as a boundary condition. In the other code, Porflow, a steady-state flow field was computed and contaminant transport was simulated within that flow field. The comparisons were made solely in terms of the ability of each code to perform contaminant transport. The purpose of the investigation was to establish a basis for, and to validate, follow-on work in which a 1-D GoldSim model was developed by abstracting information from Porflow 2-D and 3-D unsaturated and saturated zone models and then benchmarked to produce equivalent contaminant transport results. A handful of contaminants were selected for the code-to-code comparison simulations, including a non-sorbing tracer and several long- and short-lived radionuclides exhibiting non-sorbing to strongly sorbing characteristics with respect to the matrix material, including several requiring the simulation of in-growth of daughter radionuclides. The same diffusion and partitioning coefficients associated with each contaminant and the half-lives associated with each radionuclide were incorporated into each model. A string of 10 elements, having identical spatial dimensions and properties, was constructed within each code. GoldSim's basic contaminant transport elements, Mixing cells, were utilized in this construction. Sand was established as the matrix material and was assigned identical properties (e.g. bulk density, porosity, saturated hydraulic conductivity) in both codes. Boundary conditions applied included an influx of water at the rate of 40 cm/yr at one
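
    The mixing-cell idea underlying the GoldSim side of the comparison can be sketched as a chain of well-mixed compartments exchanging mass by advection, as below. This is not either code's implementation, and every parameter value (cell geometry, porosity, Darcy flux, sorption, half-life) is an assumption for a non-sorbing stable tracer; only the 40 cm/yr boundary influx echoes the record above.

```python
import numpy as np

# Ten identical mixing cells in series (all parameter values are assumed).
n_cells   = 10
length    = 10.0          # cm, cell length
area      = 100.0         # cm^2, cross-sectional area
porosity  = 0.35
bulk_rho  = 1.6           # g/cm^3
kd        = 0.0           # mL/g, non-sorbing tracer
half_life = np.inf        # stable tracer
flux      = 40.0 / 365.25 # cm/day Darcy flux (40 cm/yr boundary inflow)

water_vol = length * area * porosity                       # mL of water per cell
retard    = 1.0 + bulk_rho * kd / porosity                 # retardation factor
q         = flux * area                                    # mL/day through the string
lam       = np.log(2) / half_life if np.isfinite(half_life) else 0.0

mass = np.zeros(n_cells)
mass[0] = 1.0                                              # unit mass released in cell 1
dt, steps = 1.0, 20000                                     # days

for _ in range(steps):
    conc = mass / (water_vol * retard)                     # mobile concentration per cell
    advect = q * conc * dt                                 # mass leaving each cell this step
    mass -= advect + lam * mass * dt
    mass[1:] += advect[:-1]                                # mass entering the next cell downstream

print("mass remaining per cell:", np.round(mass, 4))
```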

  15. An Automatic Instruction-Level Parallelization of Machine Code

    Directory of Open Access Journals (Sweden)

    MARINKOVIC, V.

    2018-02-01

    Full Text Available Prevailing multicore and emerging manycore processors have created a great challenge of the modern day: parallelization of embedded software that is still written as sequential code. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level as well as on the validation of this approach. A novel instruction-level parallelization algorithm for assembly code is developed; it uses the register names after SSA conversion to find independent blocks of code and then schedules the independent blocks using METIS to achieve good load balance. The sequential consistency is verified and the validation is done by measuring the program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g. MIPS, MicroBlaze, etc.). In particular, for 16 cores, the average speedup is 7.92x, while in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as the basis for further optimizations, as the back-end of a compiler, or as the code parallelization tool for an embedded system.

  16. Standardized Semantic Markup for Reference Terminologies, Thesauri and Coding Systems: Benefits for distributed E-Health Applications.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim

    2005-01-01

    With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the Extensible Markup Language (XML) that facilitates the integration into current information systems or coding software, taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.

  17. Sub-Transport Layer Coding

    DEFF Research Database (Denmark)

    Hansen, Jonas; Krigslund, Jeppe; Roetter, Daniel Enrique Lucani

    2014-01-01

    Packet losses in wireless networks dramatically curb the performance of TCP. This paper introduces a simple coding shim that aids IP-layer traffic in lossy environments while being transparent to transport layer protocols. The proposed coding approach enables erasure correction while being oblivious to the congestion control algorithms of the utilised transport layer protocol. Although our coding shim is indifferent towards the transport layer protocol, we focus on the performance of TCP when run on top of our proposed coding mechanism due to its widespread use. The coding shim provides gains...

  18. Comparative analysis of SLB for OPR1000 by using MEDUSA and CESEC-III codes

    International Nuclear Information System (INIS)

    Park, Jong Cheol; Park, Chan Eok; Kim, Shin Whan

    2005-01-01

    MEDUSA is a system thermal-hydraulics code developed by Korea Power Engineering Company (KOPEC) for non-LOCA and LOCA analysis, using two-fluid, three-field governing equations for two-phase flow. Detailed descriptions of the MEDUSA code are given in the Reference. A considerable effort is now being made to investigate the applicability of the MEDUSA code, especially to non-LOCA analysis, by comparing the analysis results with those from the current licensing code, CESEC-III. The comparative simulations of Pressurizer Level Control System (PLCS) Malfunction and Feedwater Line Break (FLB), carried out by C.E. Park and M.T. Oh, respectively, have already shown that the MEDUSA code is applicable to the analysis of non-LOCA events. In this paper, detailed thermal-hydraulic analyses of a Steam Line Break (SLB) without loss of off-site power were performed using the MEDUSA code. The calculation results were also compared with those of the CESEC-III code for the OPR1000, for the purpose of code verification

  19. Utility experience in code updating of equipment built to 1974 code, Section 3, Subsection NF

    International Nuclear Information System (INIS)

    Rao, K.R.; Deshpande, N.

    1990-01-01

    This paper addresses changes to ASME Code Subsection NF and reconciles the differences between the updated codes and the as-built construction code, ASME Section III, 1974, to which several nuclear plants have been built. Since Section III is revised every three years and replacement parts complying with the construction code are invariably not available from the plant stock inventory, parts must be procured from vendors who comply with the requirements of the latest codes. Aspects of the ASME Code which reflect Subsection NF are identified and compared with the later Code editions and addenda, especially up to and including the 1974 ASME Code used as the basis for the plant qualification. The concern of the regulatory agencies is that if later code allowables and provisions are adopted, it is possible to reduce the safety margins of the construction code. Areas of concern are highlighted and the specific changes of later codes are discerned, adoption of which would not sacrifice the intended safety margins of the codes to which plants are licensed

  20. Theory of epigenetic coding.

    Science.gov (United States)

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  1. GOTHIC code evaluation of alternative passive containment cooling features

    International Nuclear Information System (INIS)

    Gavrilas, M.; Hejzlar, P.; Todreas, N.E.; Driscoll, M.J.

    1994-01-01

    The GOTHIC code was employed to assess the effectiveness of several original heat rejection features that make it possible to cool large rating containments. The code was first verified and modified for specific containment cooling applications; optimal mesh sizes, computational time steps, and applicable heat transfer correlations were examined. The effect of the break location on circulation patterns that develop inside the containment was also evaluated. GOTHIC was then used to obtain performance predictions for two containment concepts: a 1200 MW(e) new pressure tube light water reactor, and a 1300 MW(e) pressurized water reactor. The effectiveness of various containment configurations that include specific pressure-limiting features has been predicted. For the 1200 MW(e) pressure tube light water reactor, the evaluated pressure-limiting features are: a large water pool connected to the calandria, large containment free volume and an air-convection annulus. For the 1300 MW(e) pressurized water reactor, an external moat, an internal water pool, and an air-convection annulus were evaluated. The performance of the proposed containment configurations is dependent on the extent of thermal stratification inside the containment. The best-performance configurations/worst-case-accident scenarios that were examined yielded peak pressures of less than 0.30 MPa for the 1200 MW(e) pressure tube light water reactor, and less than 0.45 MPa for the 1300 MW(e) pressurized water reactor. The low peak pressure predicted for the 1200 MW(e) pressure tube light water reactor can be in part attributed to its relatively large free volume, while the relatively high peak pressure predicted for the 1300 MW(e) pressurized water reactor can be attributed to its relatively small free volume (i.e., the size used was that of a pressurized water reactor containment designed with active heat removal features). (author)

  2. Ombuds' corner: Code of Conduct and e-mails

    CERN Multimedia

    Vincent Vuillemin

    2011-01-01

    In this series, the Bulletin aims to explain the role of the Ombuds at CERN by presenting practical examples of misunderstandings that could have been resolved by the Ombuds if he had been contacted earlier. Please note that, in all the situations we present, the names are fictitious and used only to improve clarity.   Luke* holds a key position in the coordination of a large project. He is also a recognized expert in modeling complicated structures. Because of his expertise in the field, he receives a considerable number of e-mails every day which he has trouble responding to in addition to his responsibilities of management and development. Constantly interrupted, he tends to answer his emails quickly, sometimes even in an instinctive way, which leads to somewhat laconic messages. One day he receives an e-mail from Dave* challenging some of the decisions taken by the project’s management. Luke agrees with Dave’s remarks, which seem justified given his own expertise of the su...

  3. Linking CATHENA with other computer codes through a remote process

    Energy Technology Data Exchange (ETDEWEB)

    Vasic, A.; Hanna, B.N.; Waddington, G.M. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Sabourin, G. [Atomic Energy of Canada Limited, Montreal, Quebec (Canada); Girard, R. [Hydro-Quebec, Montreal, Quebec (Canada)

    2005-07-01

    'Full text:' CATHENA (Canadian Algorithm for THErmalhydraulic Network Analysis) is a computer code developed by Atomic Energy of Canada Limited (AECL). The code uses a transient, one-dimensional, two-fluid representation of two-phase flow in piping networks. CATHENA is used primarily for the analysis of postulated upset conditions in CANDU reactors; however, the code has found a wider range of applications. In the past, the CATHENA thermalhydraulics code included other specialized codes, i.e. ELOCA and the Point LEPreau CONtrol system (LEPCON) as callable subroutine libraries. The combined program was compiled and linked as a separately named code. This code organizational process is not suitable for independent development, maintenance, validation and version tracking of separate computer codes. The alternative solution to provide code development independence is to link CATHENA to other computer codes through a Parallel Virtual Machine (PVM) interface process. PVM is a public domain software package, developed by Oak Ridge National Laboratory and enables a heterogeneous collection of computers connected by a network to be used as a single large parallel machine. The PVM approach has been well accepted by the global computing community and has been used successfully for solving large-scale problems in science, industry, and business. Once development of the appropriate interface for linking independent codes through PVM is completed, future versions of component codes can be developed, distributed separately and coupled as needed by the user. This paper describes the coupling of CATHENA to the ELOCA-IST and the TROLG2 codes through a PVM remote process as an illustration of possible code connections. ELOCA (Element Loss Of Cooling Analysis) is the Industry Standard Toolset (IST) code developed by AECL to simulate the thermo-mechanical response of CANDU fuel elements to transient thermalhydraulics boundary conditions. A separate ELOCA driver program

  4. Linking CATHENA with other computer codes through a remote process

    International Nuclear Information System (INIS)

    Vasic, A.; Hanna, B.N.; Waddington, G.M.; Sabourin, G.; Girard, R.

    2005-01-01

    'Full text:' CATHENA (Canadian Algorithm for THErmalhydraulic Network Analysis) is a computer code developed by Atomic Energy of Canada Limited (AECL). The code uses a transient, one-dimensional, two-fluid representation of two-phase flow in piping networks. CATHENA is used primarily for the analysis of postulated upset conditions in CANDU reactors; however, the code has found a wider range of applications. In the past, the CATHENA thermalhydraulics code included other specialized codes, i.e. ELOCA and the Point LEPreau CONtrol system (LEPCON) as callable subroutine libraries. The combined program was compiled and linked as a separately named code. This code organizational process is not suitable for independent development, maintenance, validation and version tracking of separate computer codes. The alternative solution to provide code development independence is to link CATHENA to other computer codes through a Parallel Virtual Machine (PVM) interface process. PVM is a public domain software package, developed by Oak Ridge National Laboratory and enables a heterogeneous collection of computers connected by a network to be used as a single large parallel machine. The PVM approach has been well accepted by the global computing community and has been used successfully for solving large-scale problems in science, industry, and business. Once development of the appropriate interface for linking independent codes through PVM is completed, future versions of component codes can be developed, distributed separately and coupled as needed by the user. This paper describes the coupling of CATHENA to the ELOCA-IST and the TROLG2 codes through a PVM remote process as an illustration of possible code connections. ELOCA (Element Loss Of Cooling Analysis) is the Industry Standard Toolset (IST) code developed by AECL to simulate the thermo-mechanical response of CANDU fuel elements to transient thermalhydraulics boundary conditions. A separate ELOCA driver program starts, ends

  5. ETR/ITER systems code

    Energy Technology Data Exchange (ETDEWEB)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L. (ed.)

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  6. ETR/ITER systems code

    International Nuclear Information System (INIS)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs
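
    The optimizer/driver idea described in the two records above, iterating prescribed design variables until prescribed constraints are satisfied while minimizing some figure of merit, is sketched below on a toy problem. The cost surrogate, the single constraint and the variable names are invented; TETRA's actual modules and constraint set are not represented.

```python
from scipy.optimize import minimize

# Toy stand-ins for two design variables (e.g., a major radius R and a field B).
def cost(x):
    R, B = x
    return R**2 + 0.5 * B**2          # assumed "capital cost" surrogate

def performance_constraint(x):
    R, B = x
    return R * B**2 - 40.0            # assumed constraint: must equal zero

result = minimize(cost,
                  x0=[3.0, 4.0],
                  constraints=[{"type": "eq", "fun": performance_constraint}],
                  method="SLSQP")

print("optimum R, B:", result.x, "cost:", result.fun)
```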

  7. Proton therapy Monte Carlo SRNA-VOX code

    Directory of Open Access Journals (Sweden)

    Ilić Radovan D.

    2012-01-01

    Full Text Available The most powerful feature of the Monte Carlo method is the possibility of simulating all individual particle interactions in three dimensions and performing numerical experiments with a preset error. These facts were the motivation behind the development of a general-purpose Monte Carlo SRNA program for proton transport simulation in technical systems described by standard geometrical forms (plane, sphere, cone, cylinder, cube). Some of the possible applications of the SRNA program are: (a) a general code for proton transport modeling, (b) design of accelerator-driven systems, (c) simulation of proton scattering and degrading shapes and composition, (d) research on proton detectors; and (e) radiation protection at accelerator installations. This wide range of possible applications of the program demands the development of various versions of SRNA-VOX codes for proton transport modeling in voxelized geometries and has, finally, resulted in the ISTAR package for the calculation of deposited energy distribution in patients on the basis of CT data in radiotherapy. All of the said codes are capable of using 3-D proton sources with an arbitrary energy spectrum in an interval of 100 keV to 250 MeV.

  8. Final Technical Report for GO17004 Regulatory Logic: Codes and Standards for the Hydrogen Economy

    Energy Technology Data Exchange (ETDEWEB)

    Nakarado, Gary L. [Regulatory Logic LLC, Golden, CO (United States)

    2017-02-22

    The objectives of this project are to: develop a robust supporting research and development program to provide critical hydrogen behavior data and a detailed understanding of hydrogen combustion and safety across a range of scenarios, needed to establish setback distances in building codes and minimize the overall data gaps in code development; support and facilitate the completion of technical specifications by the International Organization for Standardization (ISO) for gaseous hydrogen refueling (TS 20012) and standards for on-board liquid (ISO 13985) and gaseous or gaseous blend (ISO 15869) hydrogen storage by 2007; support and facilitate the effort, led by the NFPA, to complete the draft Hydrogen Technologies Code (NFPA 2) by 2008; with experimental data and input from Technology Validation Program element activities, support and facilitate the completion of standards for bulk hydrogen storage (e.g., NFPA 55) by 2008; facilitate the adoption of the most recently available model codes (e.g., from the International Code Council [ICC]) in key regions; complete preliminary research and development on hydrogen release scenarios to support the establishment of setback distances in building codes and provide a sound basis for model code development and adoption; support and facilitate the development of Global Technical Regulations (GTRs) by 2010 for hydrogen vehicle systems under the United Nations Economic Commission for Europe, World Forum for Harmonization of Vehicle Regulations and Working Party on Pollution and Energy Program (ECE-WP29/GRPE); and to Support and facilitate the completion by 2012 of necessary codes and standards needed for the early commercialization and market entry of hydrogen energy technologies.

  9. Calculation codes in radioprotection, radio-physics and dosimetry

    International Nuclear Information System (INIS)

    Jan, S.; Laedermann, J.P.; Bochud, F.; Ferragut, A.; Bordy, J.M.; Parisi, L.L.; Abou-Khalil, R.; Longeot, M.; Kitsos, S.; Groetz, J.E.; Villagrasa, C.; Daures, J.; Martin, E.; Henriet, J.; Tsilanizara, A.; Farah, J.; Uyttenhove, W.; Perrot, Y.; De Carlan, L.; Vivier, A.; Kodeli, I.; Sayah, R.; Hadid, L.; Courageot, E.; Fritsch, P.; Davesne, E.; Michel, X.

    2010-01-01

    This document gathers the slides of the available presentations given during these conference days. Twenty-seven presentations are assembled in the document and deal with: 1 - GATE: calculation code for medical imaging, radiotherapy and dosimetry (S. Jan); 2 - estimation of conversion factors for the measurement of the ambient dose equivalent rate by in-situ spectroscopy (J.P. Laedermann); 3 - geometry specific calibration factors for nuclear medicine activity meters (F. Bochud); 4 - Monte Carlo simulation of a rare gases measurement system - calculation and validation, ASGA/VGM system (A. Ferragut); 5 - design of a realistic radiation field for the calibration of the dosemeters used in interventional radiology/cardiology (medical personnel dosimetry) (J.M. Bordy); 6 - determination of the position and height of the KALINA facility chimney at CEA Cadarache (L.L. Parisi); 7 - MERCURAD(TM) - 3D simulation software for dose rates calculation (R. Abou-Khalil); 8 - PANTHERE - 3D software for gamma dose rates simulation of complex nuclear facilities (M. Longeot); 9 - radioprotection, from the design to the exploitation of radioactive materials transportation containers (S. Kitsos); 10 - post-simulation processing of MCNPX responses in neutron spectroscopy (J.E. Groetz); 11 - latest developments of the Geant4 Monte Carlo code for track simulation in liquid water at the molecular scale (C. Villagrasa); 12 - calculation of Hp(3)/Kair conversion coefficients using the PENELOPE Monte Carlo code and comparison with MCNP calculation results (J. Daures); 13 - artificial neural networks, a new alternative to Monte Carlo calculations for radiotherapy (E. Martin); 14 - use of case-based reasoning for the reconstruction and handling of voxelized phantoms (J. Henriet); 15 - resolution of the radioactive decay inverse problem for dose calculation in radioprotection (A. Tsilanizara); 16 - use of NURBS-type phantoms for the study of the morphological factors influencing the pulmonary

  10. G4-STORK: A Geant4-based Monte Carlo reactor kinetics simulation code

    International Nuclear Information System (INIS)

    Russell, Liam; Buijs, Adriaan; Jonkmans, Guy

    2014-01-01

    Highlights: • G4-STORK is a new, time-dependent, Monte Carlo code for reactor physics applications. • G4-STORK was built by adapting and expanding on the Geant4 Monte Carlo toolkit. • G4-STORK was designed to simulate short-term fluctuations in reactor cores. • G4-STORK is well suited for simulating sub- and supercritical assemblies. • G4-STORK was verified through comparisons with DRAGON and MCNP. - Abstract: In this paper we introduce G4-STORK (Geant4 STOchastic Reactor Kinetics), a new, time-dependent, Monte Carlo particle tracking code for reactor physics applications. G4-STORK was built by adapting and expanding on the Geant4 Monte Carlo toolkit. The toolkit provides the fundamental physics models and particle tracking algorithms that track each particle in space and time. It is a framework for further development (e.g. for projects such as G4-STORK). G4-STORK derives reactor physics parameters (e.g. k_eff) from the continuous evolution of a population of neutrons in space and time in the given simulation geometry. In this paper we detail the major additions to the Geant4 toolkit that were necessary to create G4-STORK. These include a renormalization process that maintains a manageable number of neutrons in the simulation even in very sub- or supercritical systems, scoring processes (e.g. recording fission locations, total neutrons produced and lost, etc.) that allow G4-STORK to calculate the reactor physics parameters, and dynamic simulation geometries that can change over the course of the simulation to elicit reactor kinetics responses (e.g. fuel temperature reactivity feedback). The additions are verified through simple simulations and code-to-code comparisons with established reactor physics codes such as DRAGON and MCNP. Additionally, G4-STORK was developed to run a single simulation in parallel over many processors using MPI (Message Passing Interface) pipes
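
    The renormalization step described above can be illustrated by a weight-conserving comb that resamples a grown (or shrunken) neutron population back to a target size. The sketch below is a generic systematic-resampling scheme with equal starting weights, not G4-STORK's actual algorithm.

```python
import random

def renormalize(neutrons, target, seed=0):
    """Comb a list of (weight, state) neutrons back to ~target entries,
    conserving the total statistical weight."""
    if not neutrons or target <= 0:
        return []
    rng = random.Random(seed)
    total_weight = sum(w for w, _ in neutrons)
    step = total_weight / target            # comb tooth spacing in weight space
    offset = rng.uniform(0.0, step)         # random start so no neutron is favoured
    survivors, cumulative, i = [], 0.0, 0
    for w, state in neutrons:
        cumulative += w
        while i < target and offset + i * step < cumulative:
            # each surviving copy carries an equal share of the total weight
            survivors.append((total_weight / target, state))
            i += 1
    return survivors

# A supercritical interval tripled the population; comb it back to 1000 neutrons.
population = [(1.0, k) for k in range(3000)]     # `k` stands in for the phase-space state
population = renormalize(population, target=1000)
print(len(population), "neutrons, total weight =", sum(w for w, _ in population))
```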

  11. Coding and decoding for code division multiple user communication systems

    Science.gov (United States)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.
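
    The paper's algorithm exploits the distinctive frequency-hop pattern of each user; as a generic illustration of the same idea, separating users from a composite signal by correlating against each user's own code, the sketch below uses assumed ±1 spreading sequences in a direct-sequence setting rather than frequency hopping.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed +/-1 spreading codes, one 16-chip pattern per user.
codes = rng.choice([-1, 1], size=(3, 16))
bits  = np.array([1, -1, 1])                 # one data bit per user (+1/-1)

# Composite received signal: superposition of all users plus noise.
received = bits @ codes + 0.4 * rng.standard_normal(16)

# Despread: correlate with each user's own code and take the sign.
estimates = np.sign(received @ codes.T)
print("transmitted:", bits, "recovered:", estimates.astype(int))
```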

  12. Hermitian self-dual quasi-abelian codes

    Directory of Open Access Journals (Sweden)

    Herbert S. Palines

    2017-12-01

    Full Text Available Quasi-abelian codes constitute an important class of linear codes containing theoretically and practically interesting codes such as quasi-cyclic codes, abelian codes, and cyclic codes. In particular, the sub-class consisting of 1-generator quasi-abelian codes contains large families of good codes. Based on the well-known decomposition of quasi-abelian codes, the characterization and enumeration of Hermitian self-dual quasi-abelian codes are given. In the case of 1-generator quasi-abelian codes, we offer necessary and sufficient conditions for such codes to be Hermitian self-dual and give a formula for the number of these codes. In the case where the underlying groups are some $p$-groups, the actual number of resulting Hermitian self-dual quasi-abelian codes are determined.

  13. Code Shift: Grid Specifications and Dynamic Wind Turbine Models

    DEFF Research Database (Denmark)

    Ackermann, Thomas; Ellis, Abraham; Fortmann, Jens

    2013-01-01

    Grid codes (GCs) and dynamic wind turbine (WT) models are key tools to allow increasing renewable energy penetration without challenging security of supply. In this article, the state of the art and the further development of both tools are discussed, focusing on the European and North American e...

  14. Hominoid-specific de novo protein-coding genes originating from long non-coding RNAs.

    Directory of Open Access Journals (Sweden)

    Chen Xie

    2012-09-01

    Full Text Available Tinkering with pre-existing genes has long been known as a major way to create new genes. Recently, however, motherless protein-coding genes have been found to have emerged de novo from ancestral non-coding DNAs. How these genes originated is not well addressed to date. Here we identified 24 hominoid-specific de novo protein-coding genes with precise origination timing in vertebrate phylogeny. Strand-specific RNA-Seq analyses were performed in five rhesus macaque tissues (liver, prefrontal cortex, skeletal muscle, adipose, and testis), which were then integrated with public transcriptome data from human, chimpanzee, and rhesus macaque. On the basis of comparing the RNA expression profiles in the three species, we found that most of the hominoid-specific de novo protein-coding genes encoded polyadenylated non-coding RNAs in rhesus macaque or chimpanzee with a similar transcript structure and correlated tissue expression profile. According to the rule of parsimony, the majority of these hominoid-specific de novo protein-coding genes appear to have acquired a regulated transcript structure and expression profile before acquiring coding potential. Interestingly, although the expression profile was largely correlated, the coding genes in human often showed higher transcriptional abundance than their non-coding counterparts in rhesus macaque. The major findings we report in this manuscript are robust and insensitive to the parameters used in the identification and analysis of de novo genes. Our results suggest that at least a portion of long non-coding RNAs, especially those with active and regulated transcription, may serve as a birth pool for protein-coding genes, which are then further optimized at the transcriptional level.

  15. An algebraic approach to graph codes

    DEFF Research Database (Denmark)

    Pinero, Fernando

    This thesis consists of six chapters. The first chapter, contains a short introduction to coding theory in which we explain the coding theory concepts we use. In the second chapter, we present the required theory for evaluation codes and also give an example of some fundamental codes in coding...... theory as evaluation codes. Chapter three consists of the introduction to graph based codes, such as Tanner codes and graph codes. In Chapter four, we compute the dimension of some graph based codes with a result combining graph based codes and subfield subcodes. Moreover, some codes in chapter four...

  16. Some new ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2017-07-01

    Full Text Available Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
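
    As a small illustration of the $[n,k,d]_q$ notation used in this record (and not one of the 22 new codes reported there), the following sketch enumerates the codewords generated by a hypothetical ternary generator matrix and measures the minimum Hamming distance of the resulting code.

```python
import itertools

import numpy as np

# Hypothetical generator matrix of a small ternary code (illustration only,
# not one of the codes constructed in the paper).
G = np.array([[1, 0, 1, 2],
              [0, 1, 2, 1]], dtype=int)

q, k, n = 3, G.shape[0], G.shape[1]

# Enumerate all q^k codewords c = m * G (mod q).
codewords = [tuple(np.mod(np.asarray(m) @ G, q))
             for m in itertools.product(range(q), repeat=k)]

# For a linear code, the minimum distance equals the minimum Hamming weight
# of a non-zero codeword.
d = min(sum(1 for s in c if s != 0) for c in codewords if any(c))
print(f"[{n},{k},{d}]_{q} code with {len(codewords)} codewords")
```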

  17. CITOPP, CITMOD, CITWI, Processing codes for CITATION Code

    International Nuclear Information System (INIS)

    Albarhoum, M.

    2008-01-01

    Description of program or function: CITOPP processes the output file of the CITATION 3-D diffusion code. The program can plot axial, radial and circumferential flux distributions (in cylindrical geometry) in addition to the multiplication factor convergence. The flux distributions can be drawn for each group specified by the program and visualized on the screen. CITMOD processes both the output and the input files of the CITATION 3-D diffusion code. CITMOD can visualize both axial and radial-angular models of the reactor described by CITATION input/output files. CITWI processes the input file (CIT.INP) of the CITATION 3-D diffusion code. CIT.INP is processed to deduce the dimensions of the cell whose cross sections can be representative of the reactor component of the same name in section 008 of CIT.INP

  18. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  19. MILSTAMP TACs: Military Standard Transportation and Movement Procedures Transportation Account Codes. Volume 2

    Science.gov (United States)

    1987-02-15

    [OCR fragments from a table of Transportation Account Codes: entries for Coast Guard patrol boats (WPB) PT VERDE, PT SWIFT, PT THATCHER, PT HERRON and PT ROBERTS with their code numbers, followed by notes identifying the DOT/FAA Logistics Center, Oklahoma City, as the organization to be billed and describing a 4th-position code assigned by DOT/FAA to identify the appropriation.]

  20. Radioactive releases of nuclear power plants: the code ASTEC

    International Nuclear Information System (INIS)

    Sdouz, G.; Pachole, M.

    1999-11-01

    In order to adopt potential countermeasures to protect the population during the course of an accident in a nuclear power plant, a fast prediction of the radiation exposure is necessary. The basic input value for such a dispersion calculation is the source term, which is the description of the physical and chemical behavior of the released radioactive nuclides. Based on a source term data base, a pilot system has been developed to determine a relevant source term and to generate the input file for the dispersion code TAMOS of the Zentralanstalt fuer Meteorologie und Geodynamik (ZAMG). This file can be sent directly as an e-mail attachment to the TAMOS user for further processing. The source terms for 56 European nuclear power plant units are included in the pilot version of the code ASTEC (Austrian Source Term Estimation Code). The use of the system is demonstrated in an example based on an accident in the unit TEMELIN-1. In order to calculate typical core inventories for the data bank, the international computer code ORIGEN 2.1 was installed and applied. The report is completed with a discussion of optimal data transfer. (author)

  1. Channel coding techniques for wireless communications

    CERN Document Server

    Deergha Rao, K

    2015-01-01

    The book discusses modern channel coding techniques for wireless communications such as turbo codes, low-density parity check (LDPC) codes, space–time (ST) coding, RS (or Reed–Solomon) codes and convolutional codes. Many illustrative examples are included in each chapter for easy understanding of the coding techniques. The text is integrated with MATLAB-based programs to enhance the understanding of the subject’s underlying theories. It includes current topics of increasing importance such as turbo codes, LDPC codes, Luby transform (LT) codes, Raptor codes, and ST coding in detail, in addition to the traditional codes such as cyclic codes, BCH (or Bose–Chaudhuri–Hocquenghem) and RS codes and convolutional codes. Multiple-input and multiple-output (MIMO) communications is a multiple antenna technology, which is an effective method for high-speed or high-reliability wireless communications. PC-based MATLAB m-files for the illustrative examples are provided on the book page on Springer.com for free dow...

  2. It takes two—coincidence coding within the dual olfactory pathway of the honeybee

    OpenAIRE

    Brill, Martin F.; Meyer, Anneke; Rössler, Wolfgang

    2015-01-01

    To rapidly process biologically relevant stimuli, sensory systems have developed a broad variety of coding mechanisms like parallel processing and coincidence detection. Parallel processing (e.g., in the visual system), increases both computational capacity and processing speed by simultaneously coding different aspects of the same stimulus. Coincidence detection is an efficient way to integrate information from different sources. Coincidence has been shown to promote associative learning and...

  3. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    2001-01-01

    The description of reactor lattice codes is carried out on the example of the WIMSD-5B code. The WIMS code, in its various versions, is the most widely recognised lattice code. It is used in all parts of the world for calculations of research and power reactors. The version WIMSD-5B is distributed free of charge by the NEA Data Bank. The description of its main features given in the present lecture follows the aspects defined previously for lattice calculations in the lecture on Reactor Lattice Transport Calculations. The spatial models are described, and the approach to the energy treatment is given. Finally, the specific algorithm applied in fuel depletion calculations is outlined. (author)

  4. Exploring the concept of QR Code and the benefits of using QR Code for companies

    OpenAIRE

    Ji, Qianyu

    2014-01-01

    This research work concentrates on the concept of the QR Code and the benefits of using QR Codes for companies. The first objective of this research work is to study general information about the QR Code in order to guide people to understand the QR Code in detail. The second objective is to explore and analyze the essential and feasible technologies of the QR Code in order to clarify how these technologies work. Additionally, this research work through QR Code best practices t...

  5. Coding training for medical students: How good is diagnoses coding with ICD-10 by novices?

    Directory of Open Access Journals (Sweden)

    Stausberg, Jürgen

    2005-04-01

    Full Text Available Teaching of knowledge and competence in documentation and coding is an essential part of medical education. Therefore, coding training has been placed within the course on epidemiology, medical biometry, and medical informatics. From this, we can draw conclusions about the quality of coding by novices. One hundred and eighteen students coded diagnoses from 15 nephrological cases as homework. In addition to interrater reliability, validity was calculated by comparison with a reference coding. On the level of terminal codes, 59.3% of the students' results were correct. The completeness was calculated as 58.0%. The results on the chapter level increased to 91.5% and 87.7%, respectively. For the calculation of reliability, a new, simple measure was developed that leads to values of 0.46 on the level of terminal codes and 0.87 on the chapter level for interrater reliability. The figures of concordance with the reference coding are quite similar. In contrast, routine data show considerably lower results, with 0.34 and 0.63, respectively. Interrater reliability and validity of coding by novices are as good as coding by experts. The lack of an advantage for experts could be explained by the workload of documentation and a negative attitude to coding on the one hand. On the other hand, coding in a DRG-system is handicapped by a large number of detailed coding rules, which do not end in uniform results but rather lead to wrong and random codes. In any case, students left the course well prepared for coding.

  6. The Use of Code-Mixing among Pamonanese in Parata Ndaya Closed-Group Facebook

    Directory of Open Access Journals (Sweden)

    Joice Yulinda Luke

    2015-05-01

    Full Text Available This research intended to figure out why Pamonanese speakers use code-mixing in Parata Ndaya, a closed Facebook group. The research applied a qualitative method to identify the types of code-mixing and the reasons for code-mixing, while the analysis used Hoffman's theory. Data were taken from the comments of three active members of Parata Ndaya. The selected comments were mainly focused on political issues that arose during the Regional House of Representatives election in 2014. Data analysis reveals that code-mixing is mostly found in jokes and in some comments about political leaders. Thus, the results can provide insights for Parata Ndaya members to build awareness of preserving their local language (i.e., the Pamona language) as well as to enhance solidarity among members of the group site.

  7. Turbo coding, turbo equalisation and space-time coding for transmission over fading channels

    CERN Document Server

    Hanzo, L; Yeap, B

    2002-01-01

    Against the backdrop of the emerging 3G wireless personal communications standards and broadband access network standard proposals, this volume covers a range of coding and transmission aspects for transmission over fading wireless channels. It presents the most important classic channel coding issues and also the exciting advances of the last decade, such as turbo coding, turbo equalisation and space-time coding. It endeavours to be the first book with explicit emphasis on channel coding for transmission over wireless channels. Divided into 4 parts: Part 1 - explains the necessary background for novices. It aims to be both an easy reading text book and a deep research monograph. Part 2 - provides detailed coverage of turbo conventional and turbo block coding considering the known decoding algorithms and their performance over Gaussian as well as narrowband and wideband fading channels. Part 3 - comprehensively discusses both space-time block and space-time trellis coding for the first time in literature. Par...

  8. Enrichment of Circular Code Motifs in the Genes of the Yeast Saccharomyces cerevisiae

    Directory of Open Access Journals (Sweden)

    Christian J. Michel

    2017-12-01

    Full Text Available A set X of 20 trinucleotides has been found to have the highest average occurrence in the reading frame, compared to the two shifted frames, of genes of bacteria, archaea, eukaryotes, plasmids and viruses. This set X has an interesting mathematical property, since X is a maximal C3 self-complementary trinucleotide circular code. Furthermore, any motif obtained from this circular code X has the capacity to retrieve, maintain and synchronize the original (reading) frame. Since 1996, the theory of circular codes in genes has mainly been developed by analysing the properties of the 20 trinucleotides of X, using combinatorics and statistical approaches. For the first time, we test this theory by analysing the X motifs, i.e., motifs from the circular code X, in the complete genome of the yeast Saccharomyces cerevisiae. Several properties of X motifs are identified by basic statistics (at the frequency level), and evaluated by comparison to R motifs, i.e., random motifs generated from 30 different random codes R. We first show that the frequency of X motifs is significantly greater than that of R motifs in the genome of S. cerevisiae. We then verify that no significant difference is observed between the frequencies of X and R motifs in the non-coding regions of S. cerevisiae, but that the occurrence number of X motifs is significantly higher than R motifs in the genes (protein-coding regions). This property is true for all cardinalities of X motifs (from 4 to 20) and for all 16 chromosomes. We further investigate the distribution of X motifs in the three frames of S. cerevisiae genes and show that they occur more frequently in the reading frame, regardless of their cardinality or their length. Finally, the ratio of X genes, i.e., genes with at least one X motif, to non-X genes, in the set of verified genes is significantly different to that observed in the set of putative or dubious genes with no experimental evidence. These results, taken together

  9. Enrichment of Circular Code Motifs in the Genes of the Yeast Saccharomyces cerevisiae.

    Science.gov (United States)

    Michel, Christian J; Ngoune, Viviane Nguefack; Poch, Olivier; Ripp, Raymond; Thompson, Julie D

    2017-12-03

    A set X of 20 trinucleotides has been found to have the highest average occurrence in the reading frame, compared to the two shifted frames, of genes of bacteria, archaea, eukaryotes, plasmids and viruses. This set X has an interesting mathematical property, since X is a maximal C3 self-complementary trinucleotide circular code. Furthermore, any motif obtained from this circular code X has the capacity to retrieve, maintain and synchronize the original (reading) frame. Since 1996, the theory of circular codes in genes has mainly been developed by analysing the properties of the 20 trinucleotides of X, using combinatorics and statistical approaches. For the first time, we test this theory by analysing the X motifs, i.e., motifs from the circular code X, in the complete genome of the yeast Saccharomyces cerevisiae. Several properties of X motifs are identified by basic statistics (at the frequency level), and evaluated by comparison to R motifs, i.e., random motifs generated from 30 different random codes R. We first show that the frequency of X motifs is significantly greater than that of R motifs in the genome of S. cerevisiae. We then verify that no significant difference is observed between the frequencies of X and R motifs in the non-coding regions of S. cerevisiae, but that the occurrence number of X motifs is significantly higher than R motifs in the genes (protein-coding regions). This property is true for all cardinalities of X motifs (from 4 to 20) and for all 16 chromosomes. We further investigate the distribution of X motifs in the three frames of S. cerevisiae genes and show that they occur more frequently in the reading frame, regardless of their cardinality or their length. Finally, the ratio of X genes, i.e., genes with at least one X motif, to non-X genes, in the set of verified genes is significantly different to that observed in the set of putative or dubious genes with no experimental evidence. These results, taken together, represent the first
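
    The frame-retrieval property described in the two records above can be pictured with a short counting sketch. The trinucleotide set below is a deliberately small placeholder standing in for the circular code X (the actual 20 trinucleotides are listed in the cited papers), and the sequence is synthetic, so the numbers only illustrate the kind of frame bias being measured.

```python
# Sketch of the frame analysis described above: count, for each of the three
# frames, how many trinucleotides of a given set occur in a DNA sequence.
# X_SET is a small illustrative placeholder, NOT the actual 20-trinucleotide
# circular code X discussed in the record; the sequence is synthetic.
X_SET = {"AAC", "GAG", "GTC", "CTG", "TTC"}   # hypothetical stand-in

def frame_counts(sequence: str, trinucleotides: set) -> list:
    """Return the number of hits of `trinucleotides` in frames 0, 1 and 2."""
    counts = []
    for frame in range(3):
        codons = [sequence[i:i + 3] for i in range(frame, len(sequence) - 2, 3)]
        counts.append(sum(codon in trinucleotides for codon in codons))
    return counts

if __name__ == "__main__":
    seq = "ATGGAGAACGTCCTGTTCGAGAACTGA"  # synthetic coding-like sequence
    # A reading-frame bias shows up as frame 0 having more hits than frames 1 and 2.
    print(frame_counts(seq, X_SET))
```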

  10. Decoding Xing-Ling codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2002-01-01

    This paper describes an efficient decoding method for a recent construction of good linear codes as well as an extension to the construction. Furthermore, asymptotic properties and list decoding of the codes are discussed.

  11. Proposing a Web-Based Tutorial System to Teach Malay Language Braille Code to the Sighted

    Science.gov (United States)

    Wah, Lee Lay; Keong, Foo Kok

    2010-01-01

    The "e-KodBrailleBM Tutorial System" is a web-based tutorial system which is specially designed to teach, facilitate and support the learning of Malay Language Braille Code to individuals who are sighted. The targeted group includes special education teachers, pre-service teachers, and parents. Learning Braille code involves memorisation…

  12. Code development and analysis program. RELAP4/MOD7 (Version 2): user's manual

    International Nuclear Information System (INIS)

    1978-08-01

    This manual describes RELAP4/MOD7 (Version 2), which is the latest version of the RELAP4 LPWR blowdown code. Version 2 is a precursor to the final version of RELAP4/MOD7, which will address LPWR LOCA analysis in integral fashion (i.e., blowdown, refill, and reflood in continuous fashion). This manual describes the new code models and provides application information required to utilize the code. It must be used in conjunction with the RELAP4/MOD5 User's Manual (ANCR-NUREG-1335, dated September 1976), and the RELAP4/MOD6 User's Manual

  13. 75 FR 19944 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2010-04-16

    ... documents from ICC's Chicago District Office: International Code Council, 4051 W Flossmoor Road, Country... Energy Conservation Code. International Existing Building Code. International Fire Code. International...

  14. Performance Measures of Diagnostic Codes for Detecting Opioid Overdose in the Emergency Department.

    Science.gov (United States)

    Rowe, Christopher; Vittinghoff, Eric; Santos, Glenn-Milo; Behar, Emily; Turner, Caitlin; Coffin, Phillip O

    2017-04-01

    Opioid overdose mortality has tripled in the United States since 2000 and opioids are responsible for more than half of all drug overdose deaths, which reached an all-time high in 2014. Opioid overdoses resulting in death, however, represent only a small fraction of all opioid overdose events and efforts to improve surveillance of this public health problem should include tracking nonfatal overdose events. International Classification of Diseases (ICD) diagnosis codes, increasingly used for the surveillance of nonfatal drug overdose events, have not been rigorously assessed for validity in capturing overdose events. The present study aimed to validate the use of ICD, 9th revision, Clinical Modification (ICD-9-CM) codes in identifying opioid overdose events in the emergency department (ED) by examining multiple performance measures, including sensitivity and specificity. Data on ED visits from January 1, 2012, to December 31, 2014, including clinical determination of whether the visit constituted an opioid overdose event, were abstracted from electronic medical records for patients prescribed long-term opioids for pain from any of six safety net primary care clinics in San Francisco, California. Combinations of ICD-9-CM codes were validated in the detection of overdose events as determined by medical chart review. Both sensitivity and specificity of different combinations of ICD-9-CM codes were calculated. Unadjusted logistic regression models with robust standard errors and accounting for clustering by patient were used to explore whether overdose ED visits with certain characteristics were more or less likely to be assigned an opioid poisoning ICD-9-CM code by the documenting physician. Forty-four (1.4%) of 3,203 ED visits among 804 patients were determined to be opioid overdose events. Opioid-poisoning ICD-9-CM codes (E850.0-E850.2, 965.00-965.09) identified overdose ED visits with a sensitivity of 25.0% (95% confidence interval [CI] = 13.6% to 37.8%) and
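
    The performance measures discussed in this record follow from a simple 2x2 cross-tabulation of the code-based flag against the chart-review standard. In the sketch below, the 44 chart-confirmed events and the 25% sensitivity are taken from the abstract, while the split of the remaining visits into false positives and true negatives is invented purely to make the arithmetic runnable.

```python
# Sensitivity and specificity of a diagnosis-code flag against chart review.
# TP + FN = 44 events and TP/(TP+FN) = 25% follow the abstract; the FP and TN
# counts below are invented for illustration and are not the study's data.
true_positives  = 11    # code present, chart review confirms overdose
false_negatives = 33    # code absent, chart review confirms overdose
false_positives = 5     # code present, chart review finds no overdose (invented)
true_negatives  = 3154  # code absent, chart review finds no overdose (invented)

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
ppv = true_positives / (true_positives + false_positives)

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}, PPV = {ppv:.1%}")
```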

  15. Code portability and data management considerations in the SAS3D LMFBR accident-analysis code

    International Nuclear Information System (INIS)

    Dunn, F.E.

    1981-01-01

    The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source code processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long running code that only runs well if sufficient main memory is available

  16. Nuclear data libraries for Tripoli-3.5 code

    International Nuclear Information System (INIS)

    Vergnaud, Th.

    2001-01-01

    The TRIPOLI-3 code uses multigroup nuclear data libraries generated using the NJOY-THEMIS suite of modules: for neutrons, they are produced from the ENDF/B-VI evaluations and cover the range between 20 MeV and 10^-5 eV, either in 315 groups for one temperature, or in 3209 groups for five temperatures; for gamma-rays, they are from JEF2 and are processed in groups between 14 MeV and keV. The probability tables used for the neutron transport calculations have been derived from the ENDF/B-VI evaluations using the CALENDF code. Cross sections for gamma production by neutron interaction (fission, capture or inelastic scattering) have been derived from ENDF/B-VI in 315 neutron groups and 75 gamma groups. The code also uses two response function libraries: the neutron library is based on several sources, in particular the dosimetry libraries IRDF/85 and IRDF/90; the gamma-ray library is based on the JEF2 evaluation and contains the kerma factors for all the elements and cross sections for all interactions. (author)

  17. Error-correction coding for digital communications

    Science.gov (United States)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  18. Code Team Training: Demonstrating Adherence to AHA Guidelines During Pediatric Code Blue Activations.

    Science.gov (United States)

    Stewart, Claire; Shoemaker, Jamie; Keller-Smith, Rachel; Edmunds, Katherine; Davis, Andrew; Tegtmeyer, Ken

    2017-10-16

    Pediatric code blue activations are infrequent events with a high mortality rate despite the best effort of code teams. The best method for training these code teams is debatable; however, it is clear that training is needed to assure adherence to American Heart Association (AHA) Resuscitation Guidelines and to prevent the decay that invariably occurs after Pediatric Advanced Life Support training. The objectives of this project were to train a multidisciplinary, multidepartmental code team and to measure this team's adherence to AHA guidelines during code simulation. Multidisciplinary code team training sessions were held using high-fidelity, in situ simulation. Sessions were held several times per month. Each session was filmed and reviewed for adherence to 5 AHA guidelines: chest compression rate, ventilation rate, chest compression fraction, use of a backboard, and use of a team leader. After the first study period, modifications were made to the code team including implementation of just-in-time training and alteration of the compression team. Thirty-eight sessions were completed, with 31 eligible for video analysis. During the first study period, 1 session adhered to all AHA guidelines. During the second study period, after alteration of the code team and implementation of just-in-time training, no sessions adhered to all AHA guidelines; however, there was an improvement in percentage of sessions adhering to ventilation rate and chest compression rate and an improvement in median ventilation rate. We present a method for training a large code team drawn from multiple hospital departments and a method of assessing code team performance. Despite subjective improvement in code team positioning, communication, and role completion and some improvement in ventilation rate and chest compression rate, we failed to consistently demonstrate improvement in adherence to all guidelines.

  19. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
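
    A minimal sketch of the block-level decision described above, under the assumption of a per-block mean-absolute-difference test against the modeled background; the thresholds, block size and prediction labels are illustrative placeholders and this is not the actual BMAP or AVC implementation.

```python
import numpy as np

BLOCK = 16                    # block size in pixels (placeholder)
T_BG, T_HYBRID = 2.0, 12.0    # arbitrary illustrative thresholds

def classify_and_predict(frame: np.ndarray, background: np.ndarray):
    """Toy version of background-modeling-based adaptive prediction:
    classify each block and pick a prediction source accordingly."""
    h, w = frame.shape
    decisions = []
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            block = frame[y:y + BLOCK, x:x + BLOCK].astype(float)
            bg = background[y:y + BLOCK, x:x + BLOCK].astype(float)
            mad = np.abs(block - bg).mean()  # mean absolute difference to the modeled background
            if mad < T_BG:
                decisions.append((y, x, "background -> background reference prediction (BRP)"))
            elif mad < T_HYBRID:
                # hybrid block: code the residual in the background-difference domain
                decisions.append((y, x, "hybrid -> background difference prediction (BDP)"))
            else:
                decisions.append((y, x, "foreground -> conventional inter/intra prediction"))
    return decisions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bg = rng.integers(0, 256, size=(64, 64))
    frame = bg.copy()
    frame[16:32, 16:32] = 255  # a moving object covering exactly one block
    for y, x, mode in classify_and_predict(frame, bg):
        print(y, x, mode)
```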

  20. MARS code manual volume I: code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl

    2010-02-01

    Korea Advanced Energy Research Institute (KAERI) conceived and started the development of MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF codes. The method of integration of the two codes is based on the dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for the light water was unified by replacing the EOS of COBRA-TF by that of the RELAP5. This theory manual provides a complete list of overall information of code structure and major function of MARS including code architecture, hydrodynamic model, heat structure, trip / control system and point reactor kinetics model. Therefore, this report would be very useful for the code users. The overall structure of the manual is modeled on the structure of the RELAP5 and as such the layout of the manual is very similar to that of the RELAP. This similitude to RELAP5 input is intentional as this input scheme will allow minimum modification between the inputs of RELAP5 and MARS3.1. MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  1. Deciphering the genetic regulatory code using an inverse error control coding framework.

    Energy Technology Data Exchange (ETDEWEB)

    Rintoul, Mark Daniel; May, Elebeoba Eni; Brown, William Michael; Johnston, Anna Marie; Watson, Jean-Paul

    2005-03-01

    We have found that developing a computational framework for reconstructing error control codes for engineered data and ultimately for deciphering genetic regulatory coding sequences is a challenging and uncharted area that will require advances in computational technology for exact solutions. Although exact solutions are desired, computational approaches that yield plausible solutions would be considered sufficient as a proof of concept to the feasibility of reverse engineering error control codes and the possibility of developing a quantitative model for understanding and engineering genetic regulation. Such evidence would help move the idea of reconstructing error control codes for engineered and biological systems from the high risk high payoff realm into the highly probable high payoff domain. Additionally this work will impact biological sensor development and the ability to model and ultimately develop defense mechanisms against bioagents that can be engineered to cause catastrophic damage. Understanding how biological organisms are able to communicate their genetic message efficiently in the presence of noise can improve our current communication protocols, a continuing research interest. Towards this end, project goals include: (1) Develop parameter estimation methods for n for block codes and for n, k, and m for convolutional codes. Use methods to determine error control (EC) code parameters for gene regulatory sequence. (2) Develop an evolutionary computing computational framework for near-optimal solutions to the algebraic code reconstruction problem. Method will be tested on engineered and biological sequences.

  2. Nuclear code abstracts (1975 edition)

    International Nuclear Information System (INIS)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-02-01

    Nuclear Code Abstracts is compiled in the Nuclear Code Committee to exchange information of the nuclear code developments among members of the committee. Enlarging the collection, the present one includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those in the library. (auth.)

  3. Supervised Transfer Sparse Coding

    KAUST Repository

    Al-Shedivat, Maruan

    2014-07-27

    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such a case is that, in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real-world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore this possibility and show how a small number of labeled data in the target domain can significantly leverage the classification accuracy of state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.
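
    The idea of combining a shared sparse representation with a handful of labeled target-domain samples can be sketched as follows. This is a simplified stand-in, not the STSC algorithm from the record: the dictionary here is random rather than learned, the domain-transfer term is omitted, and a nearest-centroid classifier replaces the joint optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Sparse code of x w.r.t. dictionary D: min_a 0.5||x - Da||^2 + lam||a||_1 (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-12       # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L           # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Synthetic two-domain toy data: the target domain is a shifted version of the source.
d, n_src, n_tgt_lab = 20, 100, 5
D = rng.normal(size=(d, 40))
D /= np.linalg.norm(D, axis=0)                  # shared dictionary (random here, learned in STSC)
X_src = rng.normal(size=(n_src, d))
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(size=(60, d)) + 0.5
y_tgt = (X_tgt[:, 0] > 0.5).astype(int)

# Sparse codes for source samples and for a handful of *labeled* target samples.
A_src = np.array([ista_sparse_code(D, x) for x in X_src])
A_tgt_lab = np.array([ista_sparse_code(D, x) for x in X_tgt[:n_tgt_lab]])

# Nearest-centroid classifier trained on source codes plus the few labeled target codes.
A_train = np.vstack([A_src, A_tgt_lab])
y_train = np.concatenate([y_src, y_tgt[:n_tgt_lab]])
centroids = np.array([A_train[y_train == c].mean(axis=0) for c in (0, 1)])

A_tgt = np.array([ista_sparse_code(D, x) for x in X_tgt[n_tgt_lab:]])
pred = np.argmin(((A_tgt[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
print("target accuracy:", (pred == y_tgt[n_tgt_lab:]).mean())
```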

  4. Development of the three dimensional flow model in the SPACE code

    International Nuclear Information System (INIS)

    Oh, Myung Taek; Park, Chan Eok; Kim, Shin Whan

    2014-01-01

    SPACE (Safety and Performance Analysis CodE) is a nuclear plant safety analysis code, which has been developed in the Republic of Korea through a joint research between the Korean nuclear industry and research institutes. The SPACE code has been developed with multi-dimensional capabilities as a requirement of the next generation safety code. It allows users to more accurately model the multi-dimensional flow behavior that can be exhibited in components such as the core, lower plenum, upper plenum and downcomer region. Based on generalized models, the code can model any configuration or type of fluid system. All the geometric quantities of mesh are described in terms of cell volume, centroid, face area, and face center, so that it can naturally represent not only the one dimensional (1D) or three dimensional (3D) Cartesian system, but also the cylindrical mesh system. It is possible to simulate large and complex domains by modelling the complex parts with a 3D approach and the rest of the system with a 1D approach. By 1D/3D co-simulation, more realistic conditions and component models can be obtained, providing a deeper understanding of complex systems, and it is expected to overcome the shortcomings of 1D system codes. (author)
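
    The generalized mesh description mentioned above (cell volume, centroid, face area, face center) can be pictured with a small data-structure sketch. The class and field names are illustrative assumptions, not the SPACE code's internal layout; the point is that a 1D pipe cell and a 3D Cartesian cell can share one representation.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Face:
    """A cell face described only by its area and center, so the same structure
    can represent 1D pipe junctions, Cartesian 3D faces or cylindrical faces."""
    area: float
    center: tuple[float, float, float]
    neighbor: int | None = None   # index of the cell on the other side (None = boundary)


@dataclass
class Cell:
    """A generalized control volume, as sketched in the abstract above."""
    volume: float
    centroid: tuple[float, float, float]
    faces: list[Face] = field(default_factory=list)


# A 1D pipe cell and a 3D Cartesian cell share the same representation,
# which is what makes 1D/3D co-simulation of the same fluid system possible.
pipe_cell = Cell(volume=0.01, centroid=(0.5, 0.0, 0.0),
                 faces=[Face(area=0.01, center=(0.0, 0.0, 0.0), neighbor=None),
                        Face(area=0.01, center=(1.0, 0.0, 0.0), neighbor=1)])
cart_cell = Cell(volume=0.125, centroid=(0.25, 0.25, 0.25),
                 faces=[Face(area=0.25, center=(0.0, 0.25, 0.25), neighbor=2)])
print(pipe_cell.volume, len(pipe_cell.faces), cart_cell.volume)
```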

  5. Automated delivery of codes for charge in radiotherapy

    International Nuclear Information System (INIS)

    Sauer, Michael; Volz, Steffen; Hall, Markus; Roehner, Fred; Frommhold, Hermann; Grosu, Anca-Ligia; Heinemann, Felix

    2010-01-01

    Background and purpose: for the medical billing of radiotherapy, every fraction has to be encoded, including the date and time of all administered treatments. With an average of 30 fractions per patient and about 2,500 new patients every year, the number of radiotherapy codes reaches 70,000 and more. Therefore, an automated procedure for transferring and processing therapy codes has been developed at the Department of Radiotherapy Freiburg, Germany. This is a joint project of the Department of Radiotherapy, the Administration Department, and the Central IT Department of the University Hospital of Freiburg. Material and methods: the project consists of several modules whose collaboration makes the projected automated transfer of treatment codes possible. The first step is to extract the data from the department's Clinical Information System (MOSAIQ). These data are transmitted to the Central IT Department via an HL7 interface, where a check for corresponding hospitalization data is performed. In the further processing of the data, a matching table plays an important role, allowing the transformation of a treatment code into a valid medical billing code. In a last step, the data are transferred to the medical billing system. Results and conclusion: after assembling and implementing the individual modules successfully, a first beta test was launched. In order to test the modules separately as well as the interaction of the components, extensive tests were performed during March 2006. Soon it became clear that the tested procedure worked efficiently and accurately. In April 2006, a pilot project with a few types of treatment (e.g., computed tomography, simulation) was put into practice. Since October 2006, nearly all Radiation Therapy codes (≈ 75,000) are being transferred to the comprehensive Hospital Information System (HIS) automatically in a daily routine. (orig.)
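
    The matching-table step described in this record amounts to a lookup that turns a department treatment code into a billing code before export, with unmatched codes set aside for review. The sketch below is only illustrative; the code names and record fields are invented and do not correspond to the actual MOSAIQ or HIS identifiers.

```python
# Toy sketch of the matching-table step described above: department treatment
# codes are translated into billing codes before transfer to the billing system.
# All codes and field names are invented placeholders.
MATCHING_TABLE = {
    "RT_FRACTION_LINAC": "BILL_7720",
    "CT_PLANNING":       "BILL_7510",
    "SIMULATION":        "BILL_7600",
}

def to_billing_records(treatment_events):
    """Map exported treatment events to billing records; unknown codes are
    collected separately so they can be reviewed instead of silently dropped."""
    billed, unmatched = [], []
    for event in treatment_events:
        billing_code = MATCHING_TABLE.get(event["code"])
        if billing_code is None:
            unmatched.append(event)
        else:
            billed.append({"patient_id": event["patient_id"],
                           "date": event["date"],
                           "billing_code": billing_code})
    return billed, unmatched

events = [{"patient_id": 42, "date": "2006-10-02", "code": "RT_FRACTION_LINAC"},
          {"patient_id": 42, "date": "2006-10-02", "code": "UNKNOWN_PROC"}]
print(to_billing_records(events))
```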

  6. Geochemical computer codes. A review

    International Nuclear Information System (INIS)

    Andersson, K.

    1987-01-01

    In this report a review of available codes is performed and some code intercomparisons are also discussed. The number of codes treating natural waters (groundwater, lake water, sea water) is large. Most geochemical computer codes treat equilibrium conditions, although some codes with kinetic capability are available. A geochemical equilibrium model consists of a computer code, solving a set of equations by some numerical method, and a data base consisting of the thermodynamic data required for the calculations. There are some codes which treat coupled geochemical and transport modeling. Some of these codes solve the equilibrium and transport equations simultaneously, while others solve the equations separately from each other. The coupled codes require a large computer capacity and have thus as yet had limited use. Three code intercomparisons have been found in the literature. It may be concluded that there are many codes available for geochemical calculations, but most of them require a user that is quite familiar with the code. The user also has to know the geochemical system in order to judge the reliability of the results. A high quality data base is necessary to obtain a reliable result. The best results may be expected for the major species of natural waters. For more complicated problems, including trace elements, precipitation/dissolution, adsorption, etc., the results seem to be less reliable. (With 44 refs.) (author)

  7. Ion implantation range and energy deposition codes COREL, RASE4, and DAMG2

    International Nuclear Information System (INIS)

    Brice, D.K.

    1977-07-01

    The FORTRAN codes COREL, RASE4 and DAMG2 can be used to calculate quantities associated with ion implantation range and energy deposition distributions within an amorphous target, or for ions incident far from low index directions and planes in crystalline targets. RASE4 calculates the projected range, R_p, the root mean square spread in the projected range, ΔR_p, and the root mean square spread of the distribution perpendicular to the projected range, ΔR_⊥. These parameters are calculated as a function of incident ion energy, E, and the instantaneous energy of the ion, E'. They are sufficient to determine the three dimensional spatial distribution of the ions in the target in the Gaussian approximation when the depth distribution is independent of the lateral distribution. RASE4 can perform these calculations for targets having up to four different component atomic species. The code COREL is a short, economical version of RASE4 which calculates the range and straggling variables for E' = 0. Its primary use in the present package is to provide the average range and straggling variables for recoiling target atoms which are created by the incident ion. This information is used by RASE4 in calculating the redistribution of deposited energy by the target atom recoils. The code DAMG2 uses the output from RASE4 to calculate the depth distribution of energy deposition into either atomic processes or electronic processes. With other input DAMG2 can be used to calculate the depth distribution of any energy dependent interaction between the incident ions and target atoms. This report documents the basic theory behind COREL, RASE4 and DAMG2, including a description of codes, listings, and complete instructions for using the codes, and their limitations
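
    In the Gaussian approximation mentioned above, the depth profile of the implanted ions is fixed by R_p and ΔR_p alone (ΔR_⊥ governs the lateral spread). The short sketch below evaluates that approximation for placeholder parameter values; the numbers are not RASE4 output.

```python
import numpy as np

def gaussian_depth_profile(z, dose, Rp, dRp):
    """Implanted-ion concentration vs. depth z in the Gaussian approximation:
    N(z) = dose / (sqrt(2*pi)*dRp) * exp(-(z - Rp)^2 / (2*dRp^2))."""
    return dose / (np.sqrt(2.0 * np.pi) * dRp) * np.exp(-((z - Rp) ** 2) / (2.0 * dRp ** 2))

# Placeholder values, not RASE4 results: 1e15 ions/cm^2, Rp = 50 nm, dRp = 20 nm.
dose, Rp, dRp = 1e15, 50e-7, 20e-7        # cm^-2 and cm
z = np.linspace(0.0, 150e-7, 7)           # depth grid in cm
for depth, conc in zip(z, gaussian_depth_profile(z, dose, Rp, dRp)):
    print(f"z = {depth * 1e7:6.1f} nm  N = {conc:.3e} cm^-3")
```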

  8. Independent validation testing of the FLAME computer code, Version 1.0

    International Nuclear Information System (INIS)

    Martian, P.; Chung, J.N.

    1992-07-01

    Independent testing of the FLAME computer code, Version 1.0, was conducted to determine if the code is ready for use in hydrological and environmental studies at Department of Energy sites. This report describes the technical basis, approach, and results of this testing. Validation tests (i.e., tests which compare field data to the computer-generated solutions) were used to determine the operational status of the FLAME computer code and were done on a qualitative basis through graphical comparisons of the experimental and numerical data. These tests were specifically designed to check: (1) correctness of the FORTRAN coding, (2) computational accuracy, and (3) suitability for simulating actual hydrologic conditions. This testing was performed using a structured evaluation protocol which consisted of: (1) independent applications, and (2) graduated difficulty of test cases. Three tests, ranging in complexity from simple one-dimensional steady-state flow field problems under near-saturated conditions to two-dimensional transient flow problems with very dry initial conditions, were used.

  9. Development of a parallelization strategy for the VARIANT code

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Khalil, H.S.; Palmiotti, G.; Tatsumi, M.

    1996-01-01

    The VARIANT code solves the multigroup steady-state neutron diffusion and transport equation in three-dimensional Cartesian and hexagonal geometries using the variational nodal method. VARIANT consists of four major parts that must be executed sequentially: input handling, calculation of response matrices, solution algorithm (i.e. inner-outer iteration), and output of results. The objective of the parallelization effort was to reduce the overall computing time by distributing the work of the two computationally intensive (sequential) tasks, the coupling coefficient calculation and the iterative solver, equally among a group of processors. This report describes the code's calculations and gives performance results on one of the benchmark problems used to test the code. The performance analysis in the IBM SPx system shows good efficiency for well-load-balanced programs. Even for relatively small problem sizes, respectable efficiencies are seen for the SPx. An extension to achieve a higher degree of parallelism will be addressed in future work. 7 refs., 1 tab
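
    The work distribution described above, computing the per-node response matrices independently before the iterative solve, can be sketched with a process pool. This is only a schematic of the parallelization strategy, not VARIANT's SPx implementation, and the per-node calculation below is a dummy stand-in for the variational nodal response-matrix evaluation.

```python
from multiprocessing import Pool

import numpy as np

def response_matrix(node_id: int) -> np.ndarray:
    """Stand-in for the expensive per-node coupling-coefficient calculation;
    the real code evaluates variational nodal response matrices instead."""
    rng = np.random.default_rng(node_id)
    A = rng.normal(size=(50, 50))
    return A @ A.T          # some symmetric dummy result per node

if __name__ == "__main__":
    nodes = list(range(16))
    # Distribute the independent per-node calculations evenly over the workers,
    # then gather the results for the subsequent inner-outer iteration.
    with Pool(processes=4) as pool:
        matrices = pool.map(response_matrix, nodes)
    print(len(matrices), matrices[0].shape)
```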

  10. Changes in the Coding and Non-coding Transcriptome and DNA Methylome that Define the Schwann Cell Repair Phenotype after Nerve Injury.

    Science.gov (United States)

    Arthur-Farraj, Peter J; Morgan, Claire C; Adamowicz, Martyna; Gomez-Sanchez, Jose A; Fazal, Shaline V; Beucher, Anthony; Razzaghi, Bonnie; Mirsky, Rhona; Jessen, Kristjan R; Aitman, Timothy J

    2017-09-12

    Repair Schwann cells play a critical role in orchestrating nerve repair after injury, but the cellular and molecular processes that generate them are poorly understood. Here, we perform a combined whole-genome, coding and non-coding RNA and CpG methylation study following nerve injury. We show that genes involved in the epithelial-mesenchymal transition are enriched in repair cells, and we identify several long non-coding RNAs in Schwann cells. We demonstrate that the AP-1 transcription factor C-JUN regulates the expression of certain micro RNAs in repair Schwann cells, in particular miR-21 and miR-34. Surprisingly, unlike during development, changes in CpG methylation are limited in injury, restricted to specific locations, such as enhancer regions of Schwann cell-specific genes (e.g., Nedd4l), and close to local enrichment of AP-1 motifs. These genetic and epigenomic changes broaden our mechanistic understanding of the formation of repair Schwann cell during peripheral nervous system tissue repair. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  12. Proceedings of the OECD/CSNI workshop on transient thermal-hydraulic and neutronic codes requirements

    Energy Technology Data Exchange (ETDEWEB)

    Ebert, D.

    1997-07-01

    This is a report on the CSNI Workshop on Transient Thermal-Hydraulic and Neutronic Codes Requirements held at Annapolis, Maryland, USA, November 5-8, 1996. This experts' meeting consisted of 140 participants from 21 countries; 65 invited papers were presented. The meeting was divided into five areas: (1) current and prospective plans of thermal hydraulic codes development; (2) current and anticipated uses of thermal-hydraulic codes; (3) advances in modeling of thermal-hydraulic phenomena and associated additional experimental needs; (4) numerical methods in multi-phase flows; and (5) programming language, code architectures and user interfaces. The workshop consensus identified the following important action items to be addressed by the international community in order to maintain and improve the calculational capability: (a) preserve current code expertise and institutional memory, (b) preserve the ability to use the existing investment in plant transient analysis codes, (c) maintain essential experimental capabilities, (d) develop advanced measurement capabilities to support future code validation work, (e) integrate existing analytical capabilities so as to improve performance and reduce operating costs, (f) exploit the proven advances in code architecture, numerics, graphical user interfaces, and modularization in order to improve code performance and scrutability, and (g) more effectively utilize user experience in modifying and improving the codes.

  13. Proceedings of the OECD/CSNI workshop on transient thermal-hydraulic and neutronic codes requirements

    International Nuclear Information System (INIS)

    Ebert, D.

    1997-07-01

    This is a report on the CSNI Workshop on Transient Thermal-Hydraulic and Neutronic Codes Requirements held at Annapolis, Maryland, USA, November 5-8, 1996. This experts' meeting consisted of 140 participants from 21 countries; 65 invited papers were presented. The meeting was divided into five areas: (1) current and prospective plans of thermal hydraulic codes development; (2) current and anticipated uses of thermal-hydraulic codes; (3) advances in modeling of thermal-hydraulic phenomena and associated additional experimental needs; (4) numerical methods in multi-phase flows; and (5) programming language, code architectures and user interfaces. The workshop consensus identified the following important action items to be addressed by the international community in order to maintain and improve the calculational capability: (a) preserve current code expertise and institutional memory, (b) preserve the ability to use the existing investment in plant transient analysis codes, (c) maintain essential experimental capabilities, (d) develop advanced measurement capabilities to support future code validation work, (e) integrate existing analytical capabilities so as to improve performance and reduce operating costs, (f) exploit the proven advances in code architecture, numerics, graphical user interfaces, and modularization in order to improve code performance and scrutability, and (g) more effectively utilize user experience in modifying and improving the codes.

  14. Coding for urologic office procedures.

    Science.gov (United States)

    Dowling, Robert A; Painter, Mark

    2013-11-01

    This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. DIANA Code: Design and implementation of an analytic core calculus code by two group, two zone diffusion

    International Nuclear Information System (INIS)

    Mochi, Ignacio

    2005-01-01

    The principal parameters of nuclear reactors are determined in the conceptual design stage. For that purpose, it is necessary to have flexible calculation tools that represent the principal dependencies of such parameters. This capability is of critical importance in the design of innovative nuclear reactors. In order to have a proper tool that could assist the conceptual design of innovative nuclear reactors, we developed and implemented a neutronic core calculation code: DIANA (Diffusion Integral Analytic Neutron Analysis). To calculate the required parameters, this code generates its own cross sections using an analytic two-group, two-zone diffusion scheme based only on a minimal set of data (i.e., 2200 m/s and fission-averaged microscopic cross sections, Westcott factors and effective resonance integrals). Both to calculate cross sections and core parameters, DIANA takes into account heterogeneity effects, which are included when it evaluates each zone. Among them lies the disadvantage factor of each energy group. DIANA was implemented entirely through object-oriented programming using the C++ language. This eases source code understanding and would allow a quick expansion of its capabilities if needed. The final product is a versatile and easy-to-use code that allows core calculations with a minimal amount of data. It also contains the tools needed to perform many variational calculations, such as the parameterisation of effective multiplication factors for different core radii. The simplicity of the diffusion scheme allows an easy following of the involved phenomena, making DIANA the most suitable tool to design reactors whose physics lies beyond the parameters of present reactors. All these reasons make DIANA a good candidate for future innovative reactor analysis
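
    For orientation, the two-group balance that such a scheme builds on can be written down in a few lines: in an infinite homogeneous medium, the multiplication factor follows directly from the group constants, and a geometric buckling turns it into a bare-core k_eff. The cross sections below are invented placeholders, and the sketch is not DIANA's two-zone analytic solution, which additionally treats zone-wise disadvantage factors.

```python
# Two-group, infinite-medium multiplication factor from a set of group constants:
#   k_inf = (nuSf1 + nuSf2 * S12 / Sa2) / (Sa1 + S12)
# Cross sections below are invented for illustration (units: cm^-1).
nuSf1, nuSf2 = 0.005, 0.135   # nu * fission cross sections, fast / thermal
Sa1, Sa2     = 0.010, 0.100   # absorption cross sections, fast / thermal
S12          = 0.025          # fast-to-thermal slowing-down (removal) cross section

k_inf = (nuSf1 + nuSf2 * S12 / Sa2) / (Sa1 + S12)

# With a simple geometric buckling B^2, the bare-core effective multiplication
# factor in two-group diffusion theory becomes:
D1, D2, B2 = 1.3, 0.4, 0.003  # diffusion coefficients (cm) and buckling (cm^-2)
k_eff = (nuSf1 + nuSf2 * S12 / (Sa2 + D2 * B2)) / (Sa1 + S12 + D1 * B2)

print(f"k_inf = {k_inf:.4f}, k_eff = {k_eff:.4f}")
```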

  16. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which is formed by the mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  17. Critical Care Coding for Neurologists.

    Science.gov (United States)

    Nuwer, Marc R; Vespa, Paul M

    2015-10-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  18. FERRET data analysis code

    International Nuclear Information System (INIS)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples
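
    The combination of calculations and correlated measurements described in this record can be illustrated with a generic generalized-least-squares update. The prior values, sensitivities and covariances below are invented, and the sketch is not FERRET's actual algorithm, input format or data.

```python
import numpy as np

# Generic generalized-least-squares adjustment: prior parameters p0 with covariance
# Cp are combined with measurements m (covariance Cm) that relate to the parameters
# through a sensitivity matrix S, m ~ S p.  All numbers are invented placeholders.
p0 = np.array([1.00, 2.00])                      # prior (calculated) parameters
Cp = np.array([[0.04, 0.01],
               [0.01, 0.09]])                    # prior covariance (correlated)
S = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [2.0, 1.0]])                       # sensitivities of measurements to parameters
m = np.array([2.10, 1.85, 4.10])                 # measured values
Cm = np.diag([0.02, 0.05, 0.08])                 # measurement covariance

# GLS update: p = p0 + K (m - S p0),  with gain K = Cp S^T (S Cp S^T + Cm)^-1
K = Cp @ S.T @ np.linalg.inv(S @ Cp @ S.T + Cm)
p_adj = p0 + K @ (m - S @ p0)
Cp_adj = Cp - K @ S @ Cp                         # reduced (adjusted) covariance

print("adjusted parameters:", p_adj)
print("adjusted std devs:  ", np.sqrt(np.diag(Cp_adj)))
```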

  19. Enhancing QR Code Security

    OpenAIRE

    Zhang, Linfan; Zheng, Shuang

    2015-01-01

    Quick Response code opens possibility to convey data in a unique way yet insufficient prevention and protection might lead into QR code being exploited on behalf of attackers. This thesis starts by presenting a general introduction of background and stating two problems regarding QR code security, which followed by a comprehensive research on both QR code itself and related issues. From the research a solution taking advantages of cloud and cryptography together with an implementation come af...

  20. Vectorization, parallelization and porting of nuclear codes (vectorization and parallelization). Progress report fiscal 1998

    International Nuclear Information System (INIS)

    Ishizuki, Shigeru; Kawai, Wataru; Nemoto, Toshiyuki; Ogasawara, Shinobu; Kume, Etsuo; Adachi, Masaaki; Kawasaki, Nobuo; Yatake, Yo-ichi

    2000-03-01

    Several computer codes in the nuclear field have been vectorized, parallelized and transported on the FUJITSU VPP500 system, the AP3000 system and the Paragon system at Center for Promotion of Computational Science and Engineering in Japan Atomic Energy Research Institute. We dealt with 12 codes in fiscal 1998. These results are reported in 3 parts, i.e., the vectorization and parallelization on vector processors part, the parallelization on scalar processors part and the porting part. In this report, we describe the vectorization and parallelization on vector processors. In this vectorization and parallelization on vector processors part, the vectorization of General Tokamak Circuit Simulation Program code GTCSP, the vectorization and parallelization of Molecular Dynamics NTV (n-particle, Temperature and Velocity) Simulation code MSP2, Eddy Current Analysis code EDDYCAL, Thermal Analysis Code for Test of Passive Cooling System by HENDEL T2 code THANPACST2 and MHD Equilibrium code SELENEJ on the VPP500 are described. In the parallelization on scalar processors part, the parallelization of Monte Carlo N-Particle Transport code MCNP4B2, Plasma Hydrodynamics code using Cubic Interpolated Propagation Method PHCIP and Vectorized Monte Carlo code (continuous energy model / multi-group model) MVP/GMVP on the Paragon are described. In the porting part, the porting of Monte Carlo N-Particle Transport code MCNP4B2 and Reactor Safety Analysis code RELAP5 on the AP3000 are described. (author)

  1. Coronal mass ejection hits Mercury: A.I.K.E.F. hybrid-code results compared to MESSENGER data

    Science.gov (United States)

    Exner, W.; Heyner, D.; Liuzzo, L.; Motschmann, U.; Shiota, D.; Kusano, K.; Shibayama, T.

    2018-04-01

    Mercury is the planet orbiting closest to the Sun and is therefore embedded in an intense and highly variable solar wind. In-situ MESSENGER data on the plasma environment near Mercury indicate that a coronal mass ejection (CME) passed the planet on 23 November 2011 over the span of the 12 h MESSENGER orbit. Slavin et al. (2014) derived the upstream solar wind parameters at the time of that orbit and were able to explain the observed MESSENGER data in the cusp and magnetopause segments of MESSENGER's trajectory. These upstream parameters are used for our first simulation run. We use the hybrid code A.I.K.E.F., which treats ions as individual particles and electrons as a massless fluid, to simulate Mercury's magnetospheric response to the impact of the CME on ion gyro time scales. Results from the simulation agree with magnetic field measurements from the inner day-side magnetosphere and the bow-shock region. At the planet's nightside, however, Mercury's plasma environment appears to be governed by different solar wind conditions; we conclude that Mercury's interaction with the CME cannot be adequately described by a single set of upstream parameters. Therefore, to simulate the magnetospheric response while MESSENGER was located in the tail region, we use parameters obtained from the MHD solar wind simulation code SUSANOO (Shiota et al. (2014)) for our second simulation run. The SUSANOO parameters yield good agreement with the data for the plasma tail crossing and the night-side approach to Mercury. However, neither set of upstream parameters describes the polar and closest approaches well; in particular, neither dataset reproduces the MESSENGER crossing of Mercury's magnetospheric cusp. We conclude that this CME was too variable on the timescale of the MESSENGER orbit to be described by only two sets of upstream conditions. Our results suggest locally strong
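
    The record describes the hybrid approach (kinetic ions, massless fluid electrons) only in words. Purely as an illustration of the kind of ion macro-particle update such hybrid codes perform, the sketch below implements the standard Boris push. It is not taken from A.I.K.E.F., and the field values, particle properties, and time step are invented.

        import numpy as np

        def boris_push(x, v, E, B, q, m, dt):
            """Advance one ion macro-particle by dt with the standard Boris scheme.

            x, v : position and velocity vectors (m, m/s)
            E, B : electric and magnetic fields at the particle (V/m, T)
            q, m : charge (C) and mass (kg)
            """
            qmdt2 = q * dt / (2.0 * m)
            v_minus = v + qmdt2 * E                     # first half electric kick
            t = qmdt2 * B                               # rotation vector
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)    # magnetic rotation
            v_plus = v_minus + np.cross(v_prime, s)
            v_new = v_plus + qmdt2 * E                  # second half electric kick
            x_new = x + v_new * dt
            return x_new, v_new

        # Invented example: a solar-wind proton in a weak planetary field.
        x0 = np.zeros(3)
        v0 = np.array([4.0e5, 0.0, 0.0])                # ~400 km/s
        E0 = np.array([0.0, 1.0e-3, 0.0])               # V/m
        B0 = np.array([0.0, 0.0, 2.0e-8])               # 20 nT
        x1, v1 = boris_push(x0, v0, E0, B0, q=1.602e-19, m=1.673e-27, dt=0.05)
        print(x1, v1)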

  2. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination of code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation to the computer’s materiality. Cramer is thus the voice of a new ‘code avant-garde’. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation...

  3. Modification in the CITATION computer code: change of microscopic cross sections by zone

    International Nuclear Information System (INIS)

    Yamaguchi, M.; Kosaka, N.

    1983-01-01

    Some modifications made to the CITATION computer code are presented. They aim to calculate the accumulated burnup for each reactor zone in each burnup step and to allow changing the microscopic cross sections for each zone according to the burnup accumulated after each step. Some input data were added to the code. The alterations were tested and the results obtained with and without the modifications were compared. (E.G.) [pt

  4. Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability based code calibration. First basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values. Thereafter the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore suggested values for acceptable annual failure probabilities are given for ultimate and serviceability limit states. Finally the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes.
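
    As an illustrative companion to the abstract, the sketch below computes a reliability index, the FORM design point, and the implied partial safety factors for the simplest possible limit state g = R - S with independent normal variables. The distributions, characteristic fractiles, and numbers are assumptions made for the example; they are not values from the paper or from the CodeCal procedure.

        from math import hypot
        from statistics import NormalDist

        N = NormalDist()  # standard normal distribution

        # Assumed independent normal resistance R and load effect S (illustrative numbers).
        mu_R, sig_R = 350.0, 35.0
        mu_S, sig_S = 200.0, 40.0

        # Reliability index and failure probability for the linear limit state g = R - S.
        beta = (mu_R - mu_S) / hypot(sig_R, sig_S)
        pf = N.cdf(-beta)

        # FORM sensitivity factors and design point (exact for this linear, normal case).
        alpha_R = -sig_R / hypot(sig_R, sig_S)
        alpha_S = sig_S / hypot(sig_R, sig_S)
        R_d = mu_R + alpha_R * beta * sig_R      # design-point resistance
        S_d = mu_S + alpha_S * beta * sig_S      # design-point load effect (R_d == S_d on g = 0)

        # Characteristic values: assumed 5% fractile for R and 98% fractile for S.
        R_k = mu_R + N.inv_cdf(0.05) * sig_R
        S_k = mu_S + N.inv_cdf(0.98) * sig_S

        # Partial safety factors implied by the design point.
        gamma_M = R_k / R_d      # resistance (material) factor
        gamma_S = S_d / S_k      # load factor

        print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
        print(f"gamma_M = {gamma_M:.2f}, gamma_S = {gamma_S:.2f}")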

  5. The EGS5 Code System

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, Hideo; Namito, Yoshihito; /KEK, Tsukuba; Bielajew, Alex F.; Wilderman, Scott J.; U., Michigan; Nelson, Walter R.; /SLAC

    2005-12-20

    In the nineteen years since EGS4 was released, it has been used in a wide variety of applications, particularly in medical physics, radiation measurement studies, and industrial development. Every new user and every new application brings new challenges for Monte Carlo code designers, and code refinements and bug fixes eventually result in a code that becomes difficult to maintain. Several of the code modifications represented significant advances in electron and photon transport physics, and required a more substantial overhaul than code patching. Moreover, the arcane MORTRAN3[48] computer language of EGS4 was highest on the complaint list of the users of EGS4. The size of the EGS4 user base is difficult to measure, as there never existed a formal user registration process. However, some idea of the numbers may be gleaned from the number of EGS4 manuals that were produced and distributed at SLAC: almost three thousand. Consequently, the EGS5 project was undertaken. It was decided to employ the FORTRAN 77 compiler, yet include, as much as possible, the structural beauty and power of MORTRAN3. This report consists of four chapters and several appendices. Chapter 1 is an introduction to EGS5 and to this report in general. We suggest that you read it. Chapter 2 is a major update of similar chapters in the old EGS4 report[126] (SLAC-265) and the old EGS3 report[61] (SLAC-210), in which all the details of the old physics (i.e., models which were carried over from EGS4) and the new physics are gathered together. The descriptions of the new physics are extensive, and not for the faint of heart. Detailed knowledge of the contents of Chapter 2 is not essential in order to use EGS, but sophisticated users should be aware of its contents. In particular, details of the restrictions on the range of applicability of EGS are dispersed throughout the chapter. First-time users of EGS should skip Chapter 2 and come back to it later if necessary. With the release of the EGS4 version

  6. A Reaction-Diffusion-Based Coding Rate Control Mechanism for Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Naoki Wakamiya

    2010-08-01

    Full Text Available A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., a reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.

  7. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    Science.gov (United States)

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., a reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.
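
    The two records above describe the mechanism qualitatively. To illustrate the underlying idea of letting a reaction-diffusion pattern determine a per-node video coding rate, the sketch below runs a one-dimensional FitzHugh-Nagumo-style activator-inhibitor system over a line of camera nodes and maps the activator level to a bit rate. The model, its constants, the stimulus, and the rate mapping are all invented stand-ins, not the authors' actual scheme.

        import numpy as np

        # One camera node per grid cell; a target object "stimulates" the nodes near it.
        n_nodes, steps, dt = 50, 2000, 0.01
        Du, Dv = 1.0, 10.0                 # diffusion coefficients (inhibitor spreads faster)
        u = np.zeros(n_nodes)              # activator: high where the coding rate should be high
        v = np.zeros(n_nodes)              # inhibitor: suppresses activation far from the target
        stimulus = np.zeros(n_nodes)
        stimulus[23:27] = 1.0              # invented target location

        def laplacian(a):
            # 1-D Laplacian with simple reflecting boundaries.
            return np.concatenate(([a[1] - a[0]],
                                   a[2:] - 2 * a[1:-1] + a[:-2],
                                   [a[-2] - a[-1]]))

        for _ in range(steps):
            du = Du * laplacian(u) + (u - u**3 - v + stimulus)   # FitzHugh-Nagumo-style reaction
            dv = Dv * laplacian(v) + 0.1 * (u - v)
            u += dt * du
            v += dt * dv

        # Map the activator concentration to a per-node video coding rate (kbit/s), invented scaling.
        rate_kbps = 100 + 900 * np.clip(u, 0.0, 1.0)
        print(np.round(rate_kbps).astype(int))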

  8. Introduction of SCIENCE code package

    International Nuclear Information System (INIS)

    Lu Haoliang; Li Jinggang; Zhu Ya'nan; Bai Ning

    2012-01-01

    The SCIENCE code package is a set of neutronics tools based on 2D assembly calculations and 3D core calculations. It is made up of APOLLO2-F, SMART and SQUALE and is used to perform the nuclear design and loading pattern analysis for the reactors in operation or under construction by China Guangdong Nuclear Power Group. The purpose of this paper is to briefly present the physical and numerical models used in each computational code of the SCIENCE code package, including a description of the general structure of the code package, the coupling relationship between the APOLLO2-F transport lattice code and the SMART core nodal code, and the SQUALE code used for processing the core maps. (authors)

  9. ISPOR Code of Ethics 2017 (4th Edition).

    Science.gov (United States)

    Santos, Jessica; Palumbo, Francis; Molsen-David, Elizabeth; Willke, Richard J; Binder, Louise; Drummond, Michael; Ho, Anita; Marder, William D; Parmenter, Louise; Sandhu, Gurmit; Shafie, Asrul A; Thompson, David

    2017-12-01

    As the leading health economics and outcomes research (HEOR) professional society, ISPOR has a responsibility to establish a uniform, harmonized international code for ethical conduct. ISPOR has updated its 2008 Code of Ethics to reflect the current research environment. This code addresses what is acceptable and unacceptable in research, from inception to the dissemination of its results. There are nine chapters: 1 - Introduction; 2 - Ethical Principles (respect, beneficence and justice, with reference to a non-exhaustive compilation of international, regional, and country-specific guidelines and standards); 3 - Scope (HEOR definitions and how HEOR and the Code relate to other research fields); 4 - Research Design Considerations (primary and secondary data related issues, e.g., participant recruitment, population and research setting, sample size/site selection, incentive/honorarium, administrative databases, registration of retrospective observational studies and modeling studies); 5 - Data Considerations (privacy and data protection, combining, verification and transparency of research data, scientific misconduct, etc.); 6 - Sponsorship and Relationships with Others (roles of researchers, sponsors, key opinion leaders and advisory board members, research participants, and institutional review board (IRB)/independent ethics committee (IEC) approval and responsibilities); 7 - Patient Centricity and Patient Engagement (a new addition, with explanation and guidance); 8 - Publication and Dissemination; and 9 - Conclusion and Limitations. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  10. Changing priorities of codes and standards: An A/E's perspective for operating units and new generation

    International Nuclear Information System (INIS)

    Meyers, B.L.; Jackson, R.W.; Morowski, B.D.

    1994-01-01

    As the nuclear power industry has shifted emphasis from the construction of new plants to the reliability and maintenance of operating units, the industry's commitment to safety has been well guarded and maintained. Many other important indicators of nuclear industry performance are also positive. Unfortunately, by some projections, as many as 25 operating nuclear units could prematurely shut down because of increasing O&M and total operating costs. The immediate impact of higher generating costs on the nuclear industry is evident. However, when viewed over the longer term, high generating costs will also affect license renewals, progress in the development of advanced light water reactor designs and prospects for a return to the building of new plants. Today's challenge is to leverage the expertise and contribution of the nuclear industry partner organizations to steadily improve the work processes and methods necessary to reduce operating costs, to achieve higher levels in the performance of operating units, and to maintain high standards of technical excellence and safety. From the experience and perspective of an A/E and partner in the nuclear industry, this paper will discuss the changing priorities of codes and standards as they relate to opportunities for the communication of lessons learned and improving the responsiveness to industry needs.

  11. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code that improves burst-erasure protection by applying the convolution property to the tTN code and reduces computational complexity by removing the multi-level structure. The simulation results show that the cTN code provides better packet-loss protection with lower computational complexity than the tTN code.
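
    Tornado-style codes are built from XOR parity relations over packets, and erased packets are recovered by peeling any parity equation that has exactly one unknown. The sketch below illustrates that principle with a single, hand-picked level of XOR parities. It is a toy example, not the tTN or cTN construction analyzed in the paper.

        import os

        K = 8                                          # data packets
        data = [os.urandom(16) for _ in range(K)]      # invented 16-byte packets

        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        # One level of XOR parities; each parity covers a small subset of data packets.
        subsets = [[0, 1, 2], [2, 3, 4], [4, 5, 6], [5, 6, 7]]
        parity = []
        for sub in subsets:
            p = bytes(16)
            for i in sub:
                p = xor(p, data[i])
            parity.append(p)

        # Erasure channel: data packets 2 and 5 are lost, all parities arrive.
        received = {i: data[i] for i in range(K) if i not in (2, 5)}

        # Peeling decoder: repeatedly apply any parity equation with exactly one unknown.
        progress = True
        while progress and len(received) < K:
            progress = False
            for sub, p in zip(subsets, parity):
                missing = [i for i in sub if i not in received]
                if len(missing) == 1:
                    acc = p
                    for i in sub:
                        if i in received:
                            acc = xor(acc, received[i])
                    received[missing[0]] = acc
                    progress = True

        print("recovered all packets:", all(received[i] == data[i] for i in range(K)))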

  12. Demonstration of Vibrational Braille Code Display Using Large Displacement Micro-Electro-Mechanical Systems Actuators

    Science.gov (United States)

    Watanabe, Junpei; Ishikawa, Hiroaki; Arouette, Xavier; Matsumoto, Yasuaki; Miki, Norihisa

    2012-06-01

    In this paper, we present a vibrational Braille code display with large-displacement micro-electro-mechanical systems (MEMS) actuator arrays. Tactile receptors are more sensitive to vibrational stimuli than to static ones. Therefore, when each cell of the Braille code vibrates at optimal frequencies, subjects can recognize the codes more efficiently. We fabricated a vibrational Braille code display whose cells are actuators consisting of piezoelectric actuators and a hydraulic displacement amplification mechanism (HDAM). The HDAM, which encapsulates incompressible liquid in microchambers with two flexible polymer membranes, amplifies the displacement of the MEMS actuator. We investigated the voltage required for subjects to recognize Braille codes when each cell, i.e., the large-displacement MEMS actuator, vibrated at various frequencies. Lower voltages were required at vibration frequencies higher than 50 Hz than at frequencies lower than 50 Hz, which verified that the proposed vibrational Braille code display efficiently exploits the characteristics of human tactile receptors.

  13. An evaluation and analysis of three dynamic watershed acidification codes (MAGIC, ETD, and ILWAS)

    Energy Technology Data Exchange (ETDEWEB)

    Jenne, E.A.; Eary, L.E.; Vail, L.W.; Girvin, D.C.; Liebetrau, A.M.; Hibler, L.F.; Miley, T.B.; Monsour, M.J.

    1989-01-01

    The US Environmental Protection Agency is currently using the dynamic watershed acidification codes MAGIC, ILWAS, and ETD to assess the potential future impact of acidic deposition on surface water quality by simulating watershed acid neutralization processes. The reliability of forecasts made with these codes is of considerable concern. The present study evaluates the process formulations (i.e., conceptual and numerical representation of atmospheric, hydrologic, geochemical, and biogeochemical processes), compares their approaches to calculating acid neutralizing capacity (ANC), and estimates the relative effects (sensitivity) of perturbations in the input data on selected output variables for each code. Input data were drawn from three Adirondack (upstate New York) watersheds: Panther Lake, Clear Pond, and Woods Lake. Code calibration was performed by the developers of the codes. Conclusions focus on summarizing the adequacy of process formulations, differences in ANC simulation among the codes, and recommendations for further research to increase forecast reliability. 87 refs., 11 figs., 77 tabs.

  14. MARS Code in Linux Environment

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been incorporated into the Linux system. The MARS code was originally developed based on RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5, alone could be applied for whole-NSSS system analysis. The 3-D module, developed based on COBRA-TF, however, could be applied for the analysis of the reactor core region, where 3-D phenomena are better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and a 3-D kinetics module. These code modules can optionally be invoked and coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during a hypothetical coolant leakage accident could thereby be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated to simulate the three-dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate for a PC cluster system where multiple CPUs are available. When parallelism is eventually to be incorporated into the MARS code, the MS Windows environment is not considered an optimum platform. The Linux environment, on the other hand, is generally being adopted as a preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been modified for adaptation to the Linux platform. For the initial code modification, the Windows-specific features have been removed from the code. Since the coupling code module CONTAIN is originally in the form of a dynamic link library (DLL) in the Windows system, a similar adaptation method

  15. MARS Code in Linux Environment

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong

    2005-01-01

    The two-phase system analysis code MARS has been incorporated into the Linux system. The MARS code was originally developed based on RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5, alone could be applied for whole-NSSS system analysis. The 3-D module, developed based on COBRA-TF, however, could be applied for the analysis of the reactor core region, where 3-D phenomena are better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and a 3-D kinetics module. These code modules can optionally be invoked and coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during a hypothetical coolant leakage accident could thereby be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated to simulate the three-dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate for a PC cluster system where multiple CPUs are available. When parallelism is eventually to be incorporated into the MARS code, the MS Windows environment is not considered an optimum platform. The Linux environment, on the other hand, is generally being adopted as a preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been modified for adaptation to the Linux platform. For the initial code modification, the Windows-specific features have been removed from the code. Since the coupling code module CONTAIN is originally in the form of a dynamic link library (DLL) in the Windows system, a similar adaptation method
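
    The abstract is cut off just as it turns to adapting the Windows DLL coupling of the CONTAIN module to Linux. Purely as an illustration of the general pattern, and not of what was actually done for MARS, the sketch below loads a coupled module at run time with Python's ctypes, which maps to LoadLibrary on Windows and dlopen on Linux. The library name and entry point are hypothetical.

        import ctypes
        import sys

        # Hypothetical coupled module: "contain.dll" on Windows, "libcontain.so" on Linux.
        libname = "contain.dll" if sys.platform.startswith("win") else "./libcontain.so"
        lib = ctypes.CDLL(libname)            # LoadLibrary() on Windows, dlopen() on Linux

        # Hypothetical entry point: double contain_step(double dt);
        lib.contain_step.argtypes = [ctypes.c_double]
        lib.contain_step.restype = ctypes.c_double

        pressure = lib.contain_step(0.01)     # advance the containment model by one step
        print("containment pressure:", pressure)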

  16. Stylize Aesthetic QR Code

    OpenAIRE

    Xu, Mingliang; Su, Hao; Li, Yafei; Li, Xi; Liao, Jing; Niu, Jianwei; Lv, Pei; Zhou, Bing

    2018-01-01

    With the continued proliferation of smart mobile devices, the Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the appearance of QR codes, existing works have developed a series of techniques to make QR codes more visually pleasing. However, these works still leave much to be desired in terms of visual diversity, aesthetic quality, flexibility, universal property, and robustness. To address these issues, in this paper, we pro...

  17. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort against network delay, decoding in the high field size, the low field size, or a combination thereof.
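
    As a toy illustration of the low-field-size operations the abstract refers to (recoding and decoding packets using only GF(2), i.e., bitwise XOR), the sketch below encodes a few source packets with random binary coefficient vectors and decodes them by incremental Gaussian elimination over GF(2). It shows only the inner GF(2) layer, not the full Fulcrum construction with its high-field outer expansion, and the sizes and seed are invented.

        import os, random

        random.seed(7)
        K, PLEN = 4, 8                                     # number of source packets, packet length
        data = [bytearray(os.urandom(PLEN)) for _ in range(K)]

        def xor_into(dst, src):
            for i in range(PLEN):
                dst[i] ^= src[i]

        def encode_gf2():
            """One coded packet: a random GF(2) coefficient vector and the XOR of the selected packets."""
            coeffs = [random.randint(0, 1) for _ in range(K)]
            payload = bytearray(PLEN)
            for j, c in enumerate(coeffs):
                if c:
                    xor_into(payload, data[j])
            return coeffs, payload

        # Incremental Gaussian elimination over GF(2): keep one pivot row per leading column.
        pivots = [None] * K
        received = 0
        while sum(p is not None for p in pivots) < K:
            coeffs, payload = encode_gf2()
            received += 1
            for col in range(K):                           # reduce against existing pivots
                if coeffs[col] and pivots[col] is not None:
                    pcoeffs, ppayload = pivots[col]
                    for c in range(K):
                        coeffs[c] ^= pcoeffs[c]
                    xor_into(payload, ppayload)
            lead = next((c for c in range(K) if coeffs[c]), None)
            if lead is not None:                           # innovative packet: becomes a new pivot
                pivots[lead] = (coeffs, payload)

        # Back-substitution: turn each pivot row into a unit vector, recovering the source packets.
        for col in reversed(range(K)):
            pcoeffs, ppayload = pivots[col]
            for higher in range(col + 1, K):
                if pcoeffs[higher]:
                    hcoeffs, hpayload = pivots[higher]
                    for c in range(K):
                        pcoeffs[c] ^= hcoeffs[c]
                    xor_into(ppayload, hpayload)

        decoded = [pivots[col][1] for col in range(K)]
        print("coded packets received:", received,
              "| decoded correctly:", all(decoded[i] == data[i] for i in range(K)))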

  18. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code, which is capable of determining structural loads on a flexible, teetering, horizontal-axis wind turbine, is described, and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive-train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ, and it models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included, as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth-averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  19. Beam-dynamics codes used at DARHT

    Energy Technology Data Exchange (ETDEWEB)

    Ekdahl, Jr., Carl August [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-01

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  20. Development of DUST: A computer code that calculates release rates from a LLW disposal unit

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1992-01-01

    Performance assessment of a Low-Level Waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the disposal unit source term). The major physical processes that influence the source term are water flow, container degradation, waste form leaching, and radionuclide transport. A computer code, DUST (Disposal Unit Source Term) has been developed which incorporates these processes in a unified manner. The DUST code improves upon existing codes as it has the capability to model multiple container failure times, multiple waste form release properties, and radionuclide specific transport properties. Verification studies performed on the code are discussed