WorldWideScience

Sample records for a codes

  1. Safety Code A12

    CERN Multimedia

    SC Secretariat

    2005-01-01

    Please note that the Safety Code A12 (Code A12) entitled "THE SAFETY COMMISSION (SC)" is available on the web at the following url: https://edms.cern.ch/document/479423/LAST_RELEASED Paper copies can also be obtained from the SC Unit Secretariat, e-mail: sc.secretariat@cern.ch SC Secretariat

  2. READING A NEURAL CODE

    NARCIS (Netherlands)

    BIALEK, W; RIEKE, F; VANSTEVENINCK, RRD; WARLAND, D

    1991-01-01

    Traditional approaches to neural coding characterize the encoding of known stimuli in average neural responses. Organisms face nearly the opposite task - extracting information about an unknown time-dependent stimulus from short segments of a spike train. Here the neural code was characterized from

  3. Three Paradigms for Mixing Coding and Games: Coding in a Game, Coding as a Game, and Coding for a Game

    OpenAIRE

    2015-01-01

    Games for teaching coding have been an educational holy grail since at least the early 1980s. Yet for decades, with games more popular than ever and with the need to teach kids coding having been well-recognized, no blockbuster coding games have arisen (see Chapter 2). Over the years, the research community has made various games for teaching computer science: a survey made by shows that most do not teach coding, and of the ones that do teach coding, most are research prototypes (not produc...

  4. A class of constacyclic BCH codes and new quantum codes

    Science.gov (United States)

    Liu, Yang; Li, Ruihu; Lv, Liangdong; Ma, Yuena

    2017-03-01

    Constacyclic BCH codes have been widely studied in the literature and have been used to construct quantum codes in recent years. However, for the class of quantum codes of length n=q^{2m}+1 over F_{q^2} with q an odd prime power, only those of distance δ ≤ 2q^2 have been obtained in the literature. In this paper, by a detailed analysis of the properties of q^2-ary cyclotomic cosets, the maximum designed distance δ_{max} of a class of Hermitian dual-containing constacyclic BCH codes with length n=q^{2m}+1 is determined; this class of constacyclic codes has characteristics analogous to those of primitive BCH codes over F_{q^2}. We then obtain a sequence of dual-containing constacyclic codes of designed distances 2q^2 < δ ≤ δ_{max}, from which quantum codes of distance greater than 2q^2 can be constructed via the Hermitian construction. These newly obtained quantum codes have better code rates than those constructed from primitive BCH codes.
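
    The construction above rests on the structure of q^2-ary cyclotomic cosets modulo n = q^{2m}+1. The following minimal Python sketch (not code from the paper) enumerates such cosets; the parameters q = 3, m = 1 are chosen purely for illustration.

    # Toy enumeration of q^2-ary cyclotomic cosets modulo n = q^(2m) + 1.
    # Parameters q = 3, m = 1 are illustrative only.
    def cyclotomic_cosets(q2, n):
        """Return the q2-ary cyclotomic cosets C_s = {s*q2^j mod n} partitioning 0..n-1."""
        seen, cosets = set(), []
        for s in range(n):
            if s in seen:
                continue
            coset, x = [], s
            while x not in coset:
                coset.append(x)
                x = (x * q2) % n
            seen.update(coset)
            cosets.append(sorted(coset))
        return cosets

    q, m = 3, 1
    n = q ** (2 * m) + 1          # n = q^(2m) + 1 = 10
    for c in cyclotomic_cosets(q * q, n):
        print(c)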

  5. Revised Safety Code A2

    CERN Multimedia

    SC Secretariat

    2005-01-01

    Please note that the revised Safety Code A2 (Code A2 rev.) entitled "REPORTING OF ACCIDENTS AND NEAR MISSES" is available on the web at the following url: https://edms.cern.ch/document/335502/LAST_RELEASED Paper copies can also be obtained from the SC Unit Secretariat, e-mail: sc.secretariat@cern.ch SC Secretariat

  6. Requirements of a Better Secure Program Coding

    Directory of Open Access Journals (Sweden)

    Marius POPA

    2012-01-01

    Secure program coding refers to how to manage the risks posed by security breaches that originate in the program source code. The paper reviews the best practices that must be followed during the software development life cycle for secure software assurance, the methods and techniques used for secure coding assurance, the most common and well-known vulnerabilities caused by a poor coding process, and how the resulting security risks are managed and mitigated. As a tool for better secure program coding, the code review process is presented, together with objective measures for code review assurance and estimation of the effort required for code improvement.

  7. HADES, A Radiographic Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Aufderheide, M.B.; Slone, D.M.; Schach von Wittenau, A.E.

    2000-08-18

    We describe features of the HADES radiographic simulation code. We begin with a discussion of why it is useful to simulate transmission radiography. The capabilities of HADES are described, followed by an application of HADES to a dynamic experiment recently performed at the Los Alamos Neutron Science Center. We describe quantitative comparisons between experimental data and HADES simulations using a copper step wedge. We conclude with a short discussion of future work planned for HADES.

  8. A Pixel Domain Video Coding based on Turbo code and Arithmetic code

    Directory of Open Access Journals (Sweden)

    Cyrine Lahsini

    2012-05-01

    In recent years, with emerging applications such as multimedia sensor networks, wireless low-power surveillance and mobile camera phones, the traditional video coding architecture is being challenged. In fact, these applications have different requirements than those of broadcast video delivery systems: low power consumption at the encoder side is essential. In this context, we propose a pixel-domain video coding scheme which fits well in these scenarios. In this system, both arithmetic and turbo codes are used to encode the video sequence's frames. Simulation results show significant gains over pixel-domain Wyner-Ziv video coding.

  9. A new parallel TreeSPH code

    OpenAIRE

    Lia, Cesario; Carraro, Giovanni; Chiosi, Cesare; Voli, Marco

    1998-01-01

    In this report we describe a parallel implementation of a Tree-SPH code realized using the SHMEM libraries on the Cray T3E supercomputer at CINECA. We show the results of a 3D test to check the code's performance against its scalar version. Finally, we compare the load balancing and scalability of the code with PTreeSPH (Davé et al. 1997), the only other parallel Tree-SPH code present in the literature.

  10. A NOVEL VARIABLE-LENGTH CODE FOR ROBUST VIDEO CODING

    Institute of Scientific and Technical Information of China (English)

    Ma Linhua; Chang Yilin

    2006-01-01

    A novel Variable-Length Code (VLC), called Alternate VLC (AVLC), is proposed in this letter, which employs two types of VLC to encode source symbols alternately. Its advantage is that it not only stops the symbol error propagation effect, but also corrects symbol insertion errors and avoids symbol deletion errors, so the original sequence numbering of the symbols is preserved, which is very important in video communication.
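
    As a rough illustration of the alternation idea only (the actual AVLC construction is not specified in this record), the following Python sketch encodes even-position symbols with one prefix code and odd-position symbols with another, so a decoder can flag positions where the expected codebook fails to match; the two codebooks here are invented for the example.

    # Toy illustration of alternating two prefix codebooks by symbol position
    # (not the AVLC construction from the letter): a decoder can detect where
    # the expected codebook no longer matches the bitstream.
    CODE_A = {"a": "0",  "b": "10", "c": "11"}   # illustrative codebooks
    CODE_B = {"a": "1",  "b": "01", "c": "00"}

    def encode(symbols):
        bits = []
        for i, s in enumerate(symbols):
            table = CODE_A if i % 2 == 0 else CODE_B
            bits.append(table[s])
        return "".join(bits)

    def decode(bits):
        out, pos, i = [], 0, 0
        while pos < len(bits):
            table = CODE_A if i % 2 == 0 else CODE_B
            for sym, cw in table.items():        # prefix-free: at most one match
                if bits.startswith(cw, pos):
                    out.append(sym)
                    pos += len(cw)
                    break
            else:                                # expected codebook does not match
                raise ValueError(f"codebook mismatch at bit {pos} (symbol {i})")
            i += 1
        return out

    msg = list("abcab")
    print(decode(encode(msg)) == msg)            # True over an error-free channel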

  11. Code Calibration as a Decision Problem

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, M. H.

    1993-01-01

    Calibration of partial coefficients for a class of structures where no code exists is considered. The partial coefficients are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code...... calibration on a decision theoretical basis is discussed. Results from code calibration for rubble mound breakwater designs are shown....

  12. A Better Handoff for Code Officials

    Energy Technology Data Exchange (ETDEWEB)

    Conover, David R.; Yerkes, Sara

    2010-09-24

    The U.S. Department of Energy's Building Energy Codes Program has partnered with ICC to release the new Building Energy Codes Resource Guide: Code Officials Edition. We created this binder of practical materials for a simple reason: code officials are busy learning and enforcing several codes at once for the diverse buildings across their jurisdictions. This doesn’t leave much time to search www.energycodes.gov, www.iccsafe.org, or the range of other helpful web-based resources for the latest energy codes tools, support, and information. So, we decided to bring the most relevant materials to code officials in a way that works best with their daily routine, and point to where they can find even more. Like a coach’s game plan, the Resource Guide is an "energy playbook" for code officials.

  13. A Mobile Application Prototype using Network Coding

    DEFF Research Database (Denmark)

    Pedersen, Morten Videbæk; Heide, Janus; Fitzek, Frank

    2010-01-01

    This paper looks into implementation details of network coding for a mobile application running on commercial mobile phones. We describe the necessary coding operations and the algorithms that implement them. The coding algorithms form the basis for an implementation in C++ and Symbian C++. We report...
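
    For readers unfamiliar with the coding operations involved, the following Python sketch shows random linear network coding over GF(2), i.e. XOR mixing of packets with Gaussian-elimination decoding; it illustrates the general technique, not the paper's C++/Symbian implementation.

    # Minimal random linear network coding (RLNC) sketch over GF(2).
    import random

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(packets):
        """One coded packet: random 0/1 coefficients and the XOR of the selected packets."""
        coeffs = [random.randint(0, 1) for _ in packets]
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = xor_bytes(payload, p)
        return coeffs, payload

    def try_decode(coded, n):
        """Gaussian elimination over GF(2); returns the originals or None if rank < n."""
        rows = [(list(c), p) for c, p in coded]
        for col in range(n):
            pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
            if pivot is None:
                return None                      # need more independent packets
            rows[col], rows[pivot] = rows[pivot], rows[col]
            for r in range(len(rows)):
                if r != col and rows[r][0][col]:
                    rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                               xor_bytes(rows[r][1], rows[col][1]))
        return [rows[i][1] for i in range(n)]

    packets = [b"abcd", b"efgh", b"ijkl"]
    coded, decoded = [], None
    while decoded is None:                       # receiver keeps collecting coded packets
        coded.append(encode(packets))
        decoded = try_decode(coded, len(packets))
    print(decoded == packets)                    # True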

  14. A Faculty Code is not a Coda

    Science.gov (United States)

    O'Neil, Robert M.

    1974-01-01

    Many college and university faculties have adopted codes of faculty responsibilities and self-regulation. Firsthand advice on creating a code precedes an example of one: the new University of California Policy on Faculty Conduct and the Administration of Discipline. (Editor/PG)

  15. On Construction of Optimal A2-Codes

    Institute of Scientific and Technical Information of China (English)

    HU Lei

    2001-01-01

    Two authentication codes with arbitration (A2-codes) are constructed from finite affine spaces to illustrate for the first time that the information-theoretic lower bounds for A2-codes can be strictly tighter than the combinatorial ones. The codes also illustrate that the conditional combinatorial lower bounds on the numbers of encoding/decoding rules are not genuine ones. As an analogue of the 3-dimensional case, an A2-code from 4-dimensional finite projective spaces is constructed, which meets both the information-theoretic and combinatorial lower bounds.

  16. A Multidimensional Code For Isothermal Magnetohydrodynamic Flows

    CERN Document Server

    Kim, J; Jones, T W; Hong, S S; Kim, Jongsoo; Ryu, Dongsu

    1999-01-01

    We present a multi-dimensional numerical code to solve the isothermal magnetohydrodynamic (IMHD) equations for use in modeling astrophysical flows. First, we have built a one-dimensional code which is based on an explicit finite-difference method on an Eulerian grid, called the total variation diminishing (TVD) scheme. Recipes for building the one-dimensional IMHD code, including the normalized right and left eigenvectors of the IMHD Jacobian matrix, are presented. Then, we have extended the one-dimensional code to a multi-dimensional IMHD code through a Strang-type dimensional splitting. In the multi-dimensional code, an explicit cleaning step has been included to eliminate non-zero $\nabla \cdot B$.

  17. A note on Type II convolutional codes

    OpenAIRE

    Johannesson, Rolf; Ståhl, Per; Wittenmark, Emma

    2000-01-01

    The result of a search for the world's second type II (doubly-even and self-dual) convolutional code is reported. A rate R=4/8, 16-state, time-invariant, convolutional code with free distance dfree=8 was found to be type II. The initial part of its weight spectrum is better than that of the Golay convolutional code (GCC). Generator matrices and path weight enumerators for some other type II convolutional codes are given. By the “wrap-around” technique tail-biting versions of (32, 18, 8) T...

  18. JPIC & How to make a PIC code

    CERN Document Server

    Wu, Hui-Chun

    2011-01-01

    The author developed the parallel fully kinetic particle-in-cell (PIC) code JPIC, based on updated and advanced algorithms (e.g. a numerical-dispersion-free electromagnetic field solver), for simulating laser-plasma interactions. Basic technical points and hints for PIC programming and for parallel programming with the message passing interface (MPI) are reviewed. Most of the content comes from the author's notes made while writing JPIC and from experience using the code to solve different problems. Enough "how-to-do-it" information is provided to help a beginner effectively build up his/her own PIC code. General advice on how to use a PIC code is also given.

  19. Network Coding in a Multicast Switch

    CERN Document Server

    Kim, MinJi; Medard, Muriel; Eryilmaz, Atilla; Koetter, Ralf

    2008-01-01

    The problem of serving multicast flows in a crossbar switch is considered. Intra-flow linear network coding is shown to achieve a larger rate region than the case without coding. A traffic pattern is presented which is achievable with coding but requires a switch speedup when coding is not allowed. The rate region with coding can be characterized in a simple graph-theoretic manner, in terms of the stable set polytope of the "enhanced conflict graph". No such graph-theoretic characterization is known for the case of fanout splitting without coding. The minimum speedup needed to achieve 100% throughput with coding is shown to be upper bounded by the imperfection ratio of the enhanced conflict graph. When applied to KxN switches with unicasts and broadcasts only, this gives a bound of min{(2K-1)/K,2N/(N+1)} on the speedup. This shows that speedup, which is usually implemented in hardware, can often be substituted by network coding, which can be done in software. Computing an offline schedule (using prior knowled...

  20. On the Performance of a Multi-Edge Type LDPC Code for Coded Modulation

    NARCIS (Netherlands)

    Cronie, Harm S.

    2005-01-01

    We present a method to combine error-correction coding and spectral-efficient modulation for transmission over the Additive White Gaussian Noise (AWGN) channel. The code employs signal shaping which can provide a so-called shaping gain. The code belongs to the family of sparse graph codes for which

  1. Source Code Plagiarism--A Student Perspective

    Science.gov (United States)

    Joy, M.; Cosma, G.; Yau, J. Y.-K.; Sinclair, J.

    2011-01-01

    This paper considers the problem of source code plagiarism by students within the computing disciplines and reports the results of a survey of students in Computing departments in 18 institutions in the U.K. This survey was designed to investigate how well students understand the concept of source code plagiarism and to discover what, if any,…

  2. The Nuremberg Code-A critique

    Directory of Open Access Journals (Sweden)

    Ravindra B Ghooi

    2011-01-01

    The Nuremberg Code drafted at the end of the Doctor's trial in Nuremberg 1947 has been hailed as a landmark document in medical and research ethics. Close examination of this code reveals that it was based on the Guidelines for Human Experimentation of 1931. The resemblance between these documents is uncanny. It is unfortunate that the authors of the Nuremberg Code passed it off as their original work. There is evidence that the defendants at the trial did request that their actions be judged on the basis of the 1931 Guidelines, in force in Germany. The prosecutors, however, ignored the request and tried the defendants for crimes against humanity, and the judges included the Nuremberg Code as a part of the judgment. Six of ten principles in Nuremberg Code are derived from the 1931 Guidelines, and two of four newly inserted principles are open to misinterpretation. There is little doubt that the Code was prepared after studying the Guidelines, but no reference was made to the Guidelines, for reasons that are not known. Using the Guidelines as a base document without giving due credit is plagiarism; as per our understanding of ethics today, this would be considered unethical. The Nuremberg Code has fallen by the wayside; since unlike the Declaration of Helsinki, it is not regularly reviewed and updated. The regular updating of some ethics codes is evidence of the evolving nature of human ethics.

  3. The Nuremberg Code-A critique.

    Science.gov (United States)

    Ghooi, Ravindra B

    2011-04-01

    The Nuremberg Code drafted at the end of the Doctor's trial in Nuremberg 1947 has been hailed as a landmark document in medical and research ethics. Close examination of this code reveals that it was based on the Guidelines for Human Experimentation of 1931. The resemblance between these documents is uncanny. It is unfortunate that the authors of the Nuremberg Code passed it off as their original work. There is evidence that the defendants at the trial did request that their actions be judged on the basis of the 1931 Guidelines, in force in Germany. The prosecutors, however, ignored the request and tried the defendants for crimes against humanity, and the judges included the Nuremberg Code as a part of the judgment. Six of ten principles in Nuremberg Code are derived from the 1931 Guidelines, and two of four newly inserted principles are open to misinterpretation. There is little doubt that the Code was prepared after studying the Guidelines, but no reference was made to the Guidelines, for reasons that are not known. Using the Guidelines as a base document without giving due credit is plagiarism; as per our understanding of ethics today, this would be considered unethical. The Nuremberg Code has fallen by the wayside; since unlike the Declaration of Helsinki, it is not regularly reviewed and updated. The regular updating of some ethics codes is evidence of the evolving nature of human ethics.

  4. Universality and individuality in a neural code

    CERN Document Server

    Schneidman, E; Tishby, N; De Ruyter van Steveninck, R R; Bialek, W; Schneidman, Elad; Brenner, Naama; Tishby, Naftali; De Ruyter van Steveninck, Rob R.; Bialek, William

    2000-01-01

    The problem of neural coding is to understand how sequences of action potentials (spikes) are related to sensory stimuli, motor outputs, or (ultimately) thoughts and intentions. One clear question is whether the same coding rules are used by different neurons, or by corresponding neurons in different individuals. We present a quantitative formulation of this problem using ideas from information theory, and apply this approach to the analysis of experiments in the fly visual system. We find significant individual differences in the structure of the code, particularly in the way that temporal patterns of spikes are used to convey information beyond that available from variations in spike rate. On the other hand, all the flies in our ensemble exhibit a high coding efficiency, so that every spike carries the same amount of information in all the individuals. Thus the neural code has a quantifiable mixture of individuality and universality.

  5. A thesaurus for a neural population code.

    Science.gov (United States)

    Ganmor, Elad; Segev, Ronen; Schneidman, Elad

    2015-09-08

    Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns.

  6. Secure Source Coding with a Helper

    CERN Document Server

    Tandon, Ravi; Ramchandran, Kannan

    2009-01-01

    We consider a secure lossless source coding problem with a rate-limited helper. In particular, Alice observes an i.i.d. source $X^{n}$ and wishes to transmit this source losslessly to Bob at a rate $R_{x}$. A helper, say Helen, observes a correlated source $Y^{n}$ and transmits at a rate $R_{y}$ to Bob. A passive eavesdropper can observe the coded output of Alice. The equivocation $\Delta$ is measured by the conditional entropy $H(X^{n}|J_{x})/n$, where $J_{x}$ is the coded output of Alice. We first completely characterize the rate-equivocation region for this secure source coding model, where we show that Slepian-Wolf type coding is optimal. We next study two generalizations of this model and provide single-letter characterizations for the respective rate-equivocation regions. In particular, we first consider the case of a two-sided helper where Alice also has access to the coded output of Helen. We show that for this case, Slepian-Wolf type coding is suboptimal and one can further decrease the information l...

  7. A Fortran 90 code for magnetohydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  8. Bonsai: A GPU Tree-Code

    CERN Document Server

    Bédorf, Jeroen; Zwart, Simon Portegies

    2012-01-01

    We present a gravitational hierarchical N-body code that is designed to run efficiently on Graphics Processing Units (GPUs). All parts of the algorithm are executed on the GPU which eliminates the need for data transfer between the Central Processing Unit (CPU) and the GPU. Our tests indicate that the gravitational tree-code outperforms tuned CPU code for all parts of the algorithm and show an overall performance improvement of more than a factor 20, resulting in a processing rate of more than 2.8 million particles per second.

  9. A Fast Fractal Image Compression Coding Method

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Fast algorithms for reducing the encoding complexity of fractal image coding have recently been an important research topic. The search for the best-matched domain block is the most computation-intensive part of the fractal encoding process. In this paper, a fast fractal approximation coding scheme, implemented on a personal computer and based on matching in a range block's neighbours, is presented. Experimental results show that the proposed algorithm is very simple to implement, fast in encoding time and high in compression ratio, while the PSNR is almost the same as that of Barnsley's fractal block coding.
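
    A toy Python sketch of the neighbourhood-restricted matching step, the core operation such schemes speed up, is given below; the block sizes, search window and least-squares grey-level fit are illustrative choices, not the paper's algorithm.

    # Toy neighbourhood-restricted fractal block matching: each 4x4 range block
    # is compared only against 8x8 domain blocks near it (downsampled 2x),
    # with a least-squares scale/offset grey-level fit.
    import numpy as np

    def best_match(img, r0, c0, rsize=4, win=8):
        rng = img[r0:r0 + rsize, c0:c0 + rsize].astype(float)
        best = None
        for dr in range(max(0, r0 - win), min(img.shape[0] - 2 * rsize, r0 + win) + 1, 2):
            for dc in range(max(0, c0 - win), min(img.shape[1] - 2 * rsize, c0 + win) + 1, 2):
                dom = img[dr:dr + 2 * rsize, dc:dc + 2 * rsize].astype(float)
                dom = dom.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))   # 2x downsample
                s, o = np.polyfit(dom.ravel(), rng.ravel(), 1)            # rng ~ s*dom + o
                err = float(np.sum((s * dom + o - rng) ** 2))
                if best is None or err < best[0]:
                    best = (err, dr, dc, s, o)
        return best

    img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
    err, dr, dc, s, o = best_match(img, 16, 16)
    print(f"best domain block at ({dr}, {dc}), error {err:.1f}")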

  10. A Subband Coding Method for HDTV

    Science.gov (United States)

    Chung, Wilson; Kossentini, Faouzi; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new HDTV coder based on motion compensation, subband coding, and high order conditional entropy coding. The proposed coder exploits the temporal and spatial statistical dependencies inherent in the HDTV signal by using intra- and inter-subband conditioning for coding both the motion coordinates and the residual signal. The new framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission. Experimental results show that the coder outperforms MPEG-2, while still maintaining relatively low complexity.

  11. A quantum analog of Huffman coding

    CERN Document Server

    Braunstein, S L; Gottesman, D; Lo, H K; Braunstein, Samuel L.; Fuchs, Christopher A.; Gottesman, Daniel; Lo, Hoi-Kwong

    1998-01-01

    We analyse a generalization of Huffman coding to the quantum case. In particular, we notice various difficulties in using instantaneous codes for quantum communication. However, for the storage of quantum information, we have succeeded in constructing a Huffman-coding inspired quantum scheme. The number of computational steps in the encoding and decoding processes of N quantum signals can be made to be polynomial in log N by a massively parallel implementation of a quantum gate array. This is to be compared with the N^3 computational steps required in the sequential implementation by Cleve and DiVincenzo of the well-known quantum noiseless block coding scheme by Schumacher. The powers and limitations in using this scheme in communication are also discussed.

  12. On a Mathematical Theory of Coded Exposure

    Science.gov (United States)

    2014-08-01

    Keywords: coded exposure, computational photography, flutter shutter, motion blur, mean square error (MSE), signal to noise ratio (SNR). Introduction fragment: "Since the ... to be a magic tool that should equip all cameras. However, to the best of our knowledge, little is known on the coded exposure method from a rigorous ..."

  13. A Simple and a Retargetable Code Generator for TCGS

    NARCIS (Netherlands)

    Ruys, T.C.

    1995-01-01

    The Twente Compiler Generator System (TCGS) is a parser-generator system which is typically used to generate a compiler that, given an input program, generates abstract stack code. A code generator for TCGS translates this stack code generated by a TCGS compiler to assembler code for a particular ta

  14. A comparison of cosmological hydrodynamic codes

    Science.gov (United States)

    Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.

    1994-01-01

    We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega(sub b) = 1, and sigma(sub 8) = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L(exp 3) where L = 64 h(exp -1) Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smooth particle hydrodynamics 'SPH' Lagrangian approach. The Eulerian codes were run at N(exp 3) = (32(exp 3), 64(exp 3), 128(exp 3), and 256(exp 3)) cells, the SPH codes at N(exp 3) = 32(exp 3) and 64(exp 3) particles. Results were then rebinned to a 16(exp 3) grid with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as (T) and (rho(exp 2))(exp 1/2) persist at the 3%-17% level. The codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by rho(exp 2)) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic...

  15. CODE-MIXING AND CODE-SWITCHING OF INDONESIAN CELEBRITIES: A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Nana Yuliana

    2015-05-01

    Foreign language skills give rise to a language variety known as code-mixing and code-switching. The purpose of this study was to gather information to identify the types of code-mixing and code-switching frequently used by Indonesian celebrities. The study was divided into two groups: Group I comprised celebrities with native-speaker parents, and Group II comprised celebrities capable of speaking two or more languages. Qualitative and quantitative methods were used to analyze the code-mixing and code-switching and their frequency. It can be concluded that Group II use code-mixing and code-switching with a different frequency and speak the foreign language more actively.

  16. Code-Switching: L1-Coded Mediation in a Kindergarten Foreign Language Classroom

    Science.gov (United States)

    Lin, Zheng

    2012-01-01

    This paper is based on a qualitative inquiry that investigated the role of teachers' mediation in three different modes of coding in a kindergarten foreign language classroom in China (i.e. L2-coded intralinguistic mediation, L1-coded cross-lingual mediation, and L2-and-L1-mixed mediation). Through an exploratory examination of the varying effects…

  17. TEA: A Code Calculating Thermochemical Equilibrium Abundances

    Science.gov (United States)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
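
    As a rough illustration of the underlying idea, the following Python sketch minimizes a dimensionless Gibbs function subject to element-conservation constraints for a made-up three-species H/O system using SciPy; TEA itself implements the White et al. Lagrangian iteration, and the species data here are invented for the example.

    # Toy Gibbs free-energy minimization under element-conservation constraints.
    # The mu0 values, species set and conditions are illustrative only.
    import numpy as np
    from scipy.optimize import minimize

    species = ["H2", "O2", "H2O"]
    mu0 = np.array([0.0, 0.0, -10.0])       # assumed g_i/(RT) values at P = 1 bar
    A = np.array([[2, 0, 2],                # H atoms per molecule of each species
                  [0, 2, 1]])               # O atoms per molecule of each species
    b = A @ np.array([1.0, 0.5, 0.0])       # element totals from an initial H2/O2 mix

    def gibbs(n):
        n = np.clip(n, 1e-12, None)         # keep the logarithms finite
        return float(np.sum(n * (mu0 + np.log(n / n.sum()))))

    res = minimize(gibbs, x0=np.full(3, 0.3), method="SLSQP",
                   bounds=[(1e-12, None)] * 3,
                   constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
    print(dict(zip(species, np.round(res.x, 4))))   # mostly H2O, traces of H2 and O2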

  18. Finding the Key to a Better Code: Code Team Restructure to Improve Performance and Outcomes

    OpenAIRE

    Prince, Cynthia R.; Hines, Elizabeth J.; Chyou, Po-Huang; Heegeman, David J.

    2014-01-01

    Code teams respond to acute life threatening changes in a patient’s status 24 hours a day, 7 days a week. If any variable, whether a medical skill or non-medical quality, is lacking, the effectiveness of a code team’s resuscitation could be hindered. To improve the overall performance of our hospital’s code team, we implemented an evidence-based quality improvement restructuring plan. The code team restructure, which occurred over a 3-month period, included a defined number of code team parti...

  19. A diffuser heat transfer and erosion code

    Science.gov (United States)

    Buzzard, G. H.

    1985-10-01

    A computer code for diffuser heat transfer and erosion analysis (DHTE) has been developed which improves upon the earlier Rocket Engine Diffuser Thermal Analysis Program (REDTAP). Improvements contained within DHTE include provision for a radial temperature gradient within the diffuser wall, an improved model for the particle impingement accommodation coefficient, a model for particle debris shielding, and a model for wall erosion by particle impact. DHTE differs from an earlier diffuser heat transfer code (DHT) to the extent that it incorporates a simple erosion model and utilizes a more recent diffuser version of the JANNAF Standardized Plume Flow Field Model (SCP2ND). The 77-inch diffuser was instrumented to record the water side wall temperature and water jacket temperature at selected sites along the initial seven feet of the diffuser during routine test firings. Data is presented that supports the predictions of DHTE but is inadequate to validate the code.

  20. A modified TreePM code

    Institute of Scientific and Technical Information of China (English)

    Nishikanta Khandai; J. S. Bagla

    2009-01-01

    We discuss the performance characteristics of using the modification of the tree code suggested by Barnes in the context of the TreePM code. The optimization involves identifying groups of particles and using only one tree walk to compute the force for all the particles in the group. This modification has been in use in our implementation of the TreePM code for some time, and has also been used by others in codes that make use of tree structures. We present the first detailed study of the performance characteristics of this optimization. We show that the modification, if tuned properly, can speed up the TreePM code by a significant amount. We also combine this modification with the use of individual time steps and indicate how to combine these two schemes in an optimal fashion. We find that the combination is at least a factor of two faster than the modified TreePM without individual time steps. Overall performance is often faster by a larger factor because the scheme for the groups optimizes the use of cache for large simulations.

  1. FREEFALL: A seabed penetrator flight code

    Energy Technology Data Exchange (ETDEWEB)

    Hickerson, J.

    1988-01-01

    This report presents a one-dimensional model and computer program for predicting the motion of seabed penetrators. The program calculates the acceleration, velocity, and depth of a penetrator as a function of time from the moment of launch until the vehicle comes to rest in the sediment. The code is written in Pascal language for use on a small personal computer. Results are presented as printed tables and graphs. A comparison with experimental data is given which indicates that the accuracy of the code is perhaps as good as current techniques for measuring vehicle performance. 31 refs., 12 figs., 5 tabs.
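
    A schematic one-dimensional time-stepping loop of the same flavor (free fall through water with drag, then deceleration in sediment) is sketched below; the force model and every coefficient are made up for illustration and are not taken from the report.

    # Schematic 1-D penetrator trajectory integration; all values are hypothetical.
    g, dt = 9.81, 0.001                  # gravity [m/s^2], time step [s]
    water_depth = 50.0                   # [m], hypothetical
    k_drag, k_sed = 0.02, 40.0           # hypothetical water drag / sediment resistance per unit mass

    t, v, z = 0.0, 0.0, 0.0              # time, downward velocity, depth below surface
    while True:
        if z < water_depth:
            a = g - k_drag * v * abs(v)                   # buoyancy folded into g here
        else:
            a = g - k_sed * (z - water_depth) - k_drag * v * abs(v)
        v += a * dt
        z += v * dt
        t += dt
        if z >= water_depth and v <= 0.0:                 # penetrator has come to rest
            break
    print(f"t = {t:.2f} s, penetration into sediment = {z - water_depth:.2f} m")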

  2. A Denotational Semantics for Communicating Unstructured Code

    Directory of Open Access Journals (Sweden)

    Nils Jähnig

    2015-03-01

    An important property of programming language semantics is that they should be compositional. However, unstructured low-level code contains goto-like commands, making it hard to define a semantics that is compositional. In this paper, we follow the ideas of Saabas and Uustalu to structure low-level code. This gives us the possibility to define a compositional denotational semantics based on least fixed points to allow for the use of inductive verification methods. We capture the semantics of communication using finite traces similar to the denotations of CSP. In addition, we examine properties of this semantics and give an example that demonstrates reasoning about communication and jumps. With this semantics, we lay the foundations for a proof calculus that captures both the semantics of unstructured low-level code and communication.

  3. The natural variation of a neural code.

    Science.gov (United States)

    Kfir, Yoav; Renan, Ittai; Schneidman, Elad; Segev, Ronen

    2012-01-01

    The way information is represented by sequences of action potentials of spiking neurons is determined by the input each neuron receives, but also by its biophysics, and the specifics of the circuit in which it is embedded. Even the "code" of identified neurons can vary considerably from individual to individual. Here we compared the neural codes of the identified H1 neuron in the visual systems of two families of flies, blow flies and flesh flies, and explored the effect of the sensory environment that the flies were exposed to during development on the H1 code. We found that the two families differed considerably in the temporal structure of the code, its content and energetic efficiency, as well as the temporal delay of neural response. The differences in the environmental conditions during the flies' development had no significant effect. Our results may thus reflect an instance of a family-specific design of the neural code. They may also suggest that individual variability in information processing by this specific neuron, in terms of both form and content, is regulated genetically.

  4. The code APOLLO. A general description

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, A.

    1971-01-15

    The code APOLLO, written at Saclay at the Service de Physique Mathematique, makes it possible to calculate the space- and energy-dependent direct or adjoint flux for a one-dimensional medium by solving the integral form of the transport equation in the multigroup approximation. In particular, the properties of a reactor cell and of a group of interacting cells can be obtained with APOLLO. The code can be used in plane, cylindrical or spherical geometries. The fluxes can be calculated with the following approximations: isotropic collision, transport correction, and linearly anisotropic collision (B{sub 1} method).

  5. Homogenous Chaotic Network Serving as a Rate/Population Code to Temporal Code Converter

    Directory of Open Access Journals (Sweden)

    Mikhail V. Kiselev

    2014-01-01

    At present, it is obvious that different sections of the nervous system utilize different methods of information coding. Primary afferent signals are in most cases represented in the form of spike trains using a combination of rate coding and population coding, while there is clear evidence that temporal coding is used in various regions of the cortex. In the present paper, it is shown that conversion between these two coding schemes can be performed under certain conditions by a homogeneous chaotic neural network. Interestingly, this effect can be achieved without network training and synaptic plasticity.

  6. BTREE: A FORTRAN Code for B+ Tree.

    Science.gov (United States)

    2014-09-26

    Report documentation fragment: BTREE: A FORTRAN Code for a B+ Tree. Final report, Fiscal Year 85. Naval Surface Weapons Center, Silver Spring, MD (report NSWC/TR 85-5...). Keywords: B+ Tree, Database Manager, Node, Leaf, Root.

  7. The politics of a European civil code

    NARCIS (Netherlands)

    Hesselink, M.W.

    2004-01-01

    Last year the European Commission published its Action Plan on European contract law. That plan forms an important step towards a European Civil Code. In its Plan, the Commission tries to depoliticise the codification process by asking a group of academic experts to prepare what it calls a 'common f

  8. Direction coding using a tactile chair

    NARCIS (Netherlands)

    Vries, S.C. de; Erp, J.B.F. van; Kiefer, R.J.

    2009-01-01

    This laboratory study examined the possibility of using a car seat instrumented with tactile display elements (tactors) to communicate directional information to a driver. A car seat fitted with an 8 by 8 matrix of tactors embedded in the seat pan was used to code eight different directions. Localiza...

  9. Finding the key to a better code: code team restructure to improve performance and outcomes.

    Science.gov (United States)

    Prince, Cynthia R; Hines, Elizabeth J; Chyou, Po-Huang; Heegeman, David J

    2014-09-01

    Code teams respond to acute life threatening changes in a patient's status 24 hours a day, 7 days a week. If any variable, whether a medical skill or non-medical quality, is lacking, the effectiveness of a code team's resuscitation could be hindered. To improve the overall performance of our hospital's code team, we implemented an evidence-based quality improvement restructuring plan. The code team restructure, which occurred over a 3-month period, included a defined number of code team participants, clear identification of team members and their primary responsibilities and position relative to the patient, and initiation of team training events and surprise mock codes (simulations). Team member assessments of the restructured code team and its performance were collected through self-administered electronic questionnaires. Time-to-defibrillation, defined as the time the code was called until the start of defibrillation, was measured for each code using actual time recordings from code summary sheets. Significant improvements in team member confidence in the skills specific to their role and clarity in their role's position were identified. Smaller improvements were seen in team leadership and reduction in the amount of extra talking and noise during a code. The average time-to-defibrillation during real codes decreased each year since the code team restructure. This type of code team restructure resulted in improvements in several areas that impact the functioning of the team, as well as decreased the average time-to-defibrillation, making it beneficial to many, including the team members, medical institution, and patients.

  10. A Code of Ethics for Democratic Leadership

    Science.gov (United States)

    Molina, Ricardo; Klinker, JoAnn Franklin

    2012-01-01

    Democratic leadership rests on sacred values, awareness, judgement, motivation and courage. Four turning points in a 38-year school administrator's career revealed decision-making in problematic moments stemmed from values in a personal and professional code of ethics. Reflection on practice and theory added vocabulary and understanding to make…

  11. Code Mixing in a Young Bilingual Child.

    Science.gov (United States)

    Anderson, Raquel; Brice, Alejandro

    1999-01-01

    Spontaneous speech samples of a bilingual Spanish-English speaking child were collected during a period of 17 months (ages 6-8). Data revealed percentages and rank ordering of syntactic elements switched in the longitudinal language samples obtained. Specific recommendations for using code mixing in therapy for speech-language pathologists are…

  12. FLUKA: A Multi-Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Ferrari, A.; Sala, P.R.; /CERN /INFN, Milan; Fasso, A.; /SLAC; Ranft, J.; /Siegen U.

    2005-12-14

    This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner's guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.

  13. A simpler derivation of the coding theorem

    CERN Document Server

    Lomnitz, Yuval

    2012-01-01

    A simple proof for the Shannon coding theorem, using only the Markov inequality, is presented. The technique is useful for didactic purposes, since it does not require many preliminaries and the information density and mutual information follow naturally in the proof. It may also be applicable to situations where typicality is not natural.

  14. TOCAR: a code to interface FOURACES - CARNAVAL

    Energy Technology Data Exchange (ETDEWEB)

    Panini, G.C.; Vaccari, M.

    1981-08-01

    The TOCAR code, written in FORTRAN-IV for IBM-370 computers, is an interface between the output of the FOURACES code and the CARNAVAL binary format for the multigroup neutron cross-sections, scattering matrices and related quantities. Besides a description of the code and how to use it, the report contains the code listing.

  15. Code Development and Analysis Program: developmental checkout of the BEACON/MOD2A code. [PWR

    Energy Technology Data Exchange (ETDEWEB)

    Ramsthaler, J. A.; Lime, J. F.; Sahota, M. S.

    1978-12-01

    A best-estimate transient containment code, BEACON, is being developed by EG and G Idaho, Inc. for the Nuclear Regulatory Commission's reactor safety research program. This is an advanced, two-dimensional fluid flow code designed to predict temperatures and pressures in a dry PWR containment during a hypothetical loss-of-coolant accident. The most recent version of the code, MOD2A, is presently in the final stages of production prior to being released to the National Energy Software Center. As part of the final code checkout, seven sample problems were selected to be run with BEACON/MOD2A.

  16. CAFE: A NEW RELATIVISTIC MHD CODE

    Energy Technology Data Exchange (ETDEWEB)

    Lora-Clavijo, F. D.; Cruz-Osorio, A. [Instituto de Astronomía, Universidad Nacional Autónoma de México, AP 70-264, Distrito Federal 04510, México (Mexico); Guzmán, F. S., E-mail: fdlora@astro.unam.mx, E-mail: aosorio@astro.unam.mx, E-mail: guzman@ifm.umich.mx [Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo. Edificio C-3, Cd. Universitaria, 58040 Morelia, Michoacán, México (Mexico)

    2015-06-22

    We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime because we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin–Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin–Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula combined with a linear, piecewise parabolic method and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and the divergence cleaning methods to control the divergence-free magnetic field constraint.

  17. CAFE: A New Relativistic MHD Code

    Science.gov (United States)

    Lora-Clavijo, F. D.; Cruz-Osorio, A.; Guzmán, F. S.

    2015-06-01

    We introduce CAFE, a new independent code designed to solve the equations of relativistic ideal magnetohydrodynamics (RMHD) in three dimensions. We present the standard tests for an RMHD code and for the relativistic hydrodynamics regime because we have not reported them before. The tests include the one-dimensional Riemann problems related to blast waves, head-on collisions of streams, and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the two-dimensional (2D) and 3D tests without magnetic field, we include the 2D Riemann problem, a one-dimensional shock tube along a diagonal, the high-speed Emery wind tunnel, the Kelvin-Helmholtz (KH) instability, a set of jets, and a 3D spherical blast wave, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion, a case of Kelvin-Helmholtz instability, and a 3D magnetic field advection loop. The code uses high-resolution shock-capturing methods, and we present the error analysis for a combination that uses the Harten, Lax, van Leer, and Einfeldt (HLLE) flux formula combined with a linear, piecewise parabolic method and fifth-order weighted essentially nonoscillatory reconstructors. We use the flux-constrained transport and the divergence cleaning methods to control the divergence-free magnetic field constraint.

  18. A mean field theory of coded CDMA systems

    Energy Technology Data Exchange (ETDEWEB)

    Yano, Toru [Graduate School of Science and Technology, Keio University, Hiyoshi, Kohoku-ku, Yokohama-shi, Kanagawa 223-8522 (Japan); Tanaka, Toshiyuki [Graduate School of Informatics, Kyoto University, Yoshida Hon-machi, Sakyo-ku, Kyoto-shi, Kyoto 606-8501 (Japan); Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)], E-mail: yano@thx.appi.keio.ac.jp

    2008-08-15

    We present a mean field theory of code-division multiple-access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean-field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems.

  19. Code White: A Signed Code Protection Mechanism for Smartphones

    Science.gov (United States)

    2010-09-01

    if(TheSuperPage().KernelConfigFlags() & EKernelConfigPlatSecProcessIsolation) { diff -r 2ee5df201f60 kernel/eka/memmodel/epoc/multiple...mprocess.cpp --- a/kernel/eka/memmodel/epoc/multiple/mprocess.cpp Mon Mar 08 11:58:34 2010 +0000 +++ b/kernel/eka/memmodel/epoc/multiple/mprocess.cpp Thu

  20. Extended Lorentz code of a superluminal particle

    CERN Document Server

    Ter-Kazarian, G

    2012-01-01

    While the OPERA experimental scrutiny is ongoing in the community, in the present article we construct a toy model of an extended Lorentz code (ELC) of uniform motion, which provides a well-established, consistent and unique theoretical framework to explain the apparent violations of the standard Lorentz code (SLC), the possible manifestations of which arise in a similar way in all particle sectors. We argue that in the ELC framework the propagation of a superluminal particle, which implies a modified dispersion relation, could be consistent with causality. Furthermore, in this framework, we give a justification of the forbiddance of Vavilov-Cherenkov (VC) radiation and analogous processes in vacuum. To be consistent with the SN1987A and OPERA data, we identify the neutrinos from SN1987A and the light as so-called 1-th type particles carrying the individual Lorentz motion code with the velocity of light $c_{1}\equiv c$ in vacuum as maximum attainable velocity for all the 1-th type particles. Ther...

  1. CAFE: A New Relativistic MHD Code

    CERN Document Server

    Lora-Clavijo, F D; Guzman, F S

    2014-01-01

    We present CAFE, a new independent code designed to solve the equations of Relativistic ideal Magnetohydrodynamics (RMHD) in 3D. We present the standard tests for an RMHD code and for the Relativistic Hydrodynamics (RHD) regime since we have not reported them before. The tests include the 1D Riemann problems related to blast waves, head-on collision of streams and states with transverse velocities, with and without magnetic field, which is aligned or transverse, constant or discontinuous across the initial discontinuity. Among the 2D tests, without magnetic field we include the 2D Riemann problem, the high speed Emery wind tunnel, the Kelvin-Helmholtz instability test and a set of jets, whereas in the presence of a magnetic field we show the magnetic rotor, the cylindrical explosion and the Kelvin-Helmholtz instability. The code uses High Resolution Shock Capturing methods and as a standard setup we present the error analysis with a simple combination that uses the HLLE flux formula combined with linear, PPM ...

  2. A Students Attendance System Using QR Code

    Directory of Open Access Journals (Sweden)

    Fadi Masalha

    2014-01-01

    Smartphones are becoming more preferred companions to users than desktops or notebooks. Knowing that smartphones are most popular with users around the age of 26, using smartphones to speed up the process of taking attendance by university instructors would save lecturing time and hence enhance the educational process. This paper proposes a system based on a QR code displayed to students at the beginning of or during each lecture. The students need to scan the code in order to confirm their attendance. The paper explains the high-level implementation details of the proposed system. It also discusses how the system verifies student identity to eliminate false registrations.
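
    One simple way such a per-lecture code could be produced and checked is sketched below in Python (an illustrative guess, not the paper's implementation): the QR payload carries the lecture id, an expiry time and an HMAC so the server can verify scans; the third-party qrcode package, the secret and the token layout are assumptions.

    # Hypothetical per-lecture QR token: lecture id + expiry + HMAC signature.
    import hmac, hashlib, time
    import qrcode                                     # third-party package, assumed installed

    SECRET = b"server-side secret"                    # hypothetical shared secret

    def make_token(lecture_id, lifetime_s=600):
        expiry = int(time.time()) + lifetime_s
        msg = f"{lecture_id}|{expiry}".encode()
        sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return f"{lecture_id}|{expiry}|{sig}"

    def verify_token(token):
        lecture_id, expiry, sig = token.rsplit("|", 2)
        msg = f"{lecture_id}|{expiry}".encode()
        expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected) and int(expiry) >= time.time()

    token = make_token("CS101-lecture-12")            # hypothetical lecture id
    qrcode.make(token).save("lecture_qr.png")         # image displayed at the lecture
    print(verify_token(token))                        # True while the token is fresh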

  3. A Network Coding Approach to Loss Tomography

    DEFF Research Database (Denmark)

    Sattari, Pegah; Markopoulou, Athina; Fragouli, Christina;

    2013-01-01

    Network tomography aims at inferring internal network characteristics based on measurements at the edge of the network. In loss tomography, in particular, the characteristic of interest is the loss rate of individual links. There is a significant body of work dedicated to this problem using...... multicast and/or unicast end-to-end probes. Independently, recent advances in network coding have shown that there are several advantages from allowing intermediate nodes to process and combine, in addition to just forward, packets. In this paper, we pose the problem of loss tomography in networks that have...... and multiple paths between sources and receivers. This work was the first to make the connection between active network tomography and network coding, and thus opened a new research direction....

  4. Numerical method improvement for a subchannel code

    Energy Technology Data Exchange (ETDEWEB)

    Ding, W.J.; Gou, J.L.; Shan, J.Q. [Xi' an Jiaotong Univ., Shaanxi (China). School of Nuclear Science and Technology

    2016-07-15

    Previous studies showed that subchannel codes spend most of their CPU time solving the matrix formed by the conservation equations. Traditional matrix-solving methods such as Gaussian elimination and Gauss-Seidel iteration cannot meet the requirements of computational efficiency. Therefore, a new algorithm for solving the block penta-diagonal matrix is designed based on Stone's incomplete LU (ILU) decomposition method. In the new algorithm, the original block penta-diagonal matrix is decomposed into a block upper triangular matrix and a block lower triangular matrix as well as a nonzero small matrix. After that, the LU algorithm is applied to solve the matrix until convergence. In order to compare the computational efficiency, the new algorithm is applied to the ATHAS code in this paper. The calculation results show that more than 80% of the total CPU time can be saved with the newly designed ILU algorithm for a 324-channel PWR assembly problem, compared with the original ATHAS code.
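
    As a small illustration of ILU-based solution of a banded system, the following Python sketch uses SciPy's generic incomplete LU factorization as a preconditioner for GMRES on a penta-diagonal test matrix; this is not Stone's block variant used in ATHAS, and the matrix entries are arbitrary test values.

    # ILU-preconditioned iterative solve of a penta-diagonal system (illustrative only).
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spilu, gmres, LinearOperator

    n = 1000
    diagonals = [6.0 * np.ones(n), -1.0 * np.ones(n - 1), -1.0 * np.ones(n - 1),
                 -0.5 * np.ones(n - 2), -0.5 * np.ones(n - 2)]
    A = sp.diags(diagonals, offsets=[0, 1, -1, 2, -2], format="csc")   # penta-diagonal test matrix
    b = np.ones(n)

    ilu = spilu(A, drop_tol=1e-4)                    # incomplete LU factors of A
    M = LinearOperator((n, n), matvec=ilu.solve)     # apply them as a preconditioner
    x, info = gmres(A, b, M=M)
    print(info == 0, np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # converged, small residual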

  5. PetriCode: A Tool for Template-Based Code Generation from CPN Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    Code generation is an important part of model driven methodologies. In this paper, we present PetriCode, a software tool for generating protocol software from a subclass of Coloured Petri Nets (CPNs). The CPN subclass is comprised of hierarchical CPN models describing a protocol system at different...

  6. A Contribution Towards A Grammar of Code

    Directory of Open Access Journals (Sweden)

    David M. Berry

    2008-01-01

    Over the past thirty years there has been an increasing interest in the social and cultural implications of digital technologies and ‘informationalism’ from the social sciences and humanities. Generally this has concentrated on the implications of the “convergence” of digital devices and services, understood as linked to the discrete processing capabilities of computers, which rely on logical operations, binary processing and symbolic representation. In this paper, I wish to suggest that a ‘grammar of code’ might provide a useful way of thinking about the way in which digital technologies operate as a medium and can contribute usefully to this wider debate. I am interested in the way in which the dynamic properties of code can be understood as operating according to a grammar reflected in its materialisation and operation in the lifeworld – the discretisation of the phenomenal world. As part of that contribution in this paper I develop some tentative Weberian ‘ideal-types’. These ideal-types are then applied to the work of the Japanese composer, Masahiro Miwa, whose innovative ‘Reverse-Simulation music’ models the operation of basic low-level digital circuitry for the performance and generation of musical pieces.

  7. A FINE GRANULAR JOINT SOURCE CHANNEL CODING METHOD

    Institute of Scientific and Technical Information of China (English)

    Zhuo Li; Shen Lansun; Zhu Qing

    2003-01-01

    An improved FGS (Fine Granular Scalability) coding method is proposed in this letter, which is based on human visual characteristics. This method adjusts the FGS coding frame rate according to an evaluation of the video sequence, so as to improve the coding efficiency and the subjective perceived quality of the reconstructed images. Finally, a fine granular joint source channel coding scheme is proposed based on this source coding method, which not only utilizes network resources efficiently, but also guarantees reliable transmission of the video information.

  8. Implementing peridynamics within a molecular dynamics code.

    Energy Technology Data Exchange (ETDEWEB)

    Lehoucq, Richard B.; Silling, Stewart Andrew; Plimpton, Steven James; Parks, Michael L.

    2007-12-01

    Peridynamics (PD) is a continuum theory that employs a nonlocal model to describe material properties. In this context, nonlocal means that continuum points separated by a finite distance may exert force upon each other. A meshless method results when PD is discretized with material behavior approximated as a collection of interacting particles. This paper describes how PD can be implemented within a molecular dynamics (MD) framework, and provides details of an efficient implementation. This adds a computational mechanics capability to an MD code, enabling simulations at mesoscopic or even macroscopic length and time scales.
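
    As a rough illustration of what a peridynamic capability inside a particle code looks like, the following Python sketch evaluates bond-based PD forces in a plain pairwise neighbour loop; the micromodulus, horizon and volumes are illustrative assumptions, not values from the paper.

        # Toy bond-based peridynamic force kernel: particles interact with all
        # neighbours within a finite horizon, in the spirit of adding a PD
        # capability to an MD-style pair loop.
        import numpy as np

        def pd_forces(x_ref, x_cur, horizon, c, volume):
            """Pairwise PD forces from bond stretch (prototype microelastic model)."""
            n = len(x_ref)
            f = np.zeros_like(x_cur)
            for i in range(n):
                for j in range(i + 1, n):
                    xi = x_ref[j] - x_ref[i]                # reference bond vector
                    length0 = np.linalg.norm(xi)
                    if length0 > horizon:                   # outside the horizon: no bond
                        continue
                    eta_xi = x_cur[j] - x_cur[i]            # deformed bond vector
                    r = np.linalg.norm(eta_xi)
                    s = (r - length0) / length0             # bond stretch
                    pair = c * s * volume * eta_xi / r      # force along deformed bond
                    f[i] += pair
                    f[j] -= pair
            return f

        # One-dimensional chain of particles, uniformly stretched by 1 %.
        x0 = np.linspace(0.0, 1.0, 11).reshape(-1, 1)
        x = 1.01 * x0
        print(pd_forces(x0, x, horizon=0.35, c=1.0e3, volume=0.1))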

  9. Improved decoding for a concatenated coding system

    DEFF Research Database (Denmark)

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-b symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new...

  10. A New Arithmetic Coding System Combining Source Channel Coding and MAP Decoding

    Institute of Scientific and Technical Information of China (English)

    PANG Yu-ye; SUN Jun; WANG Jia

    2007-01-01

    A new arithmetic coding system combining source channel coding and maximum a posteriori (MAP) decoding is proposed. It combines source coding and error correction tasks into one unified process by introducing an adaptive forbidden symbol. The proposed system achieves fixed-length code words by adaptively adjusting the probability of the forbidden symbol and adding tail digits of variable length. The corresponding improved MAP decoding metric is derived. The proposed system can improve performance. Simulations were performed on AWGN channels with various noise levels using both hard and soft decision with BPSK modulation. The results show that its performance is slightly better than that of our adaptive arithmetic error-correcting coding system using a forbidden symbol.
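
    The forbidden-symbol idea can be sketched with a toy floating-point arithmetic coder (usable only for short messages): a slice of probability EPS is reserved in every coding interval and never used by the encoder, so a corrupted code word tends to fall into the reserved slice and the decoder can flag a channel error. The probabilities and the fixed (non-adaptive) forbidden symbol below are illustrative assumptions, not the adaptive scheme of the paper.

        # Toy arithmetic coder with a reserved "forbidden symbol" slice.
        EPS = 0.1
        PROBS = {"a": 0.5, "b": 0.3, "c": 0.2}              # source model
        SCALED = {s: p * (1.0 - EPS) for s, p in PROBS.items()}

        def intervals():
            cum, out = 0.0, {}
            for s, p in SCALED.items():
                out[s] = (cum, cum + p)
                cum += p
            return out                                      # top EPS of [0,1) is forbidden

        def encode(msg):
            low, high = 0.0, 1.0
            for s in msg:
                a, b = intervals()[s]
                low, high = low + (high - low) * a, low + (high - low) * b
            return (low + high) / 2.0

        def decode(value, length):
            low, high, out = 0.0, 1.0, []
            for _ in range(length):
                pos = (value - low) / (high - low)
                if pos >= 1.0 - EPS:
                    return out, True                        # fell in forbidden slice: error
                for s, (a, b) in intervals().items():
                    if a <= pos < b:
                        low, high = low + (high - low) * a, low + (high - low) * b
                        out.append(s)
                        break
            return out, False

        code = encode("abcab")
        print(decode(code, 5))              # clean code word decodes, no error flag
        print(decode(code + 0.02, 5))       # perturbed code word may trip the error flag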

  11. A Review on Spectral Amplitude Coding Optical Code Division Multiple Access

    Science.gov (United States)

    Kaur, Navpreet; Goyal, Rakesh; Rani, Monika

    2017-03-01

    This manuscript deals with the analysis of a Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) system. The major noise source in optical CDMA is co-channel interference from other users, known as multiple access interference (MAI). The system performance in terms of bit error rate (BER) degrades as MAI increases. The number of users and the type of codes used in the optical system directly determine the system performance. MAI can be restricted by efficient design of the optical codes and by implementing them in an architecture that accommodates a larger number of users. Hence, there is a need for a technique such as spectral direct detection (SDD) with a modified double weight code, which can provide better cardinality and good correlation properties.

  12. A Readout Mechanism for Latency Codes

    Science.gov (United States)

    Zohar, Oran; Shamir, Maoz

    2016-01-01

    Response latency has been suggested as a possible source of information in the central nervous system when fast decisions are required. The accuracy of latency codes was studied in the past using a simplified readout algorithm termed the temporal-winner-take-all (tWTA). The tWTA is a competitive readout algorithm in which populations of neurons with a similar decision preference compete, and the algorithm selects according to the preference of the population that reaches the decision threshold first. It has been shown that this algorithm can account for accurate decisions among a small number of alternatives during short biologically relevant time periods. However, one of the major points of criticism of latency codes has been that it is unclear how such a readout can be implemented by the central nervous system. Here we show that the solution to this long-standing puzzle may be rather simple. We suggest a mechanism that is based on a reciprocal inhibition architecture, similar to that of the conventional winner-take-all, and show that under a wide range of parameters this mechanism is sufficient to implement the tWTA algorithm. This is done by first analyzing a rate toy model, and demonstrating its ability to discriminate short latency differences between its inputs. We then study the sensitivity of this mechanism to fine-tuning of its initial conditions, and show that it is robust to a wide range of noise levels in the initial conditions. These results are then generalized to a Hodgkin-Huxley type of neuron model, using numerical simulations. Latency codes have been criticized for requiring a reliable stimulus-onset detection mechanism as a reference for measuring latency. Here we show that this frequent assumption does not hold, and that an additional onset estimator is not needed to trigger this simple tWTA mechanism.
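
    A minimal rate-model sketch of such a mechanism is given below: two populations inhibit each other, and the one whose input arrives first tends to reach threshold first and suppress its competitor, implementing a temporal winner-take-all. All parameter values are illustrative assumptions, not those of the paper.

        # Two-population rate model of a tWTA readout via reciprocal inhibition.
        def twta(latency_a=10.0, latency_b=12.0, w_inh=2.0, tau=5.0,
                 drive=1.2, threshold=1.0, dt=0.1, t_max=100.0):
            """Return which population crosses threshold first and when."""
            r_a = r_b = 0.0
            t = 0.0
            while t < t_max:
                inp_a = drive if t >= latency_a else 0.0    # input A arrives at latency_a
                inp_b = drive if t >= latency_b else 0.0    # input B arrives at latency_b
                da = (-r_a + max(0.0, inp_a - w_inh * r_b)) / tau
                db = (-r_b + max(0.0, inp_b - w_inh * r_a)) / tau
                r_a += dt * da
                r_b += dt * db
                if r_a >= threshold:
                    return "A", t
                if r_b >= threshold:
                    return "B", t
                t += dt
            return None, t

        print(twta(latency_a=10.0, latency_b=12.0))   # small latency advantage: A wins
        print(twta(latency_a=12.0, latency_b=10.0))   # reversed latencies: B wins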

  13. Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes

    Energy Technology Data Exchange (ETDEWEB)

    Baratta, A.J.

    1997-07-01

    To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Transients and accidents such as a loss-of-coolant accident in both pressurized water and boiling water reactors, and inadvertent operation of safety relief valves, all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response, and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code with the containment represented as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing were determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long-term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.
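
    The difference between the iterative and the linked approach is essentially one of data flow, which the toy Python loop below illustrates: the system and containment models exchange break flow and back-pressure every time step, so containment feedback can act on the primary side. The physics in the sketch is deliberately trivial and purely illustrative.

        def primary_break_flow(p_primary, p_containment):
            """Toy break-flow model: flow driven by the pressure difference."""
            return max(0.0, 0.05 * (p_primary - p_containment))

        def step_containment(p_containment, m_in, dt):
            """Toy containment model: pressure rises with the mass added."""
            return p_containment + 0.002 * m_in * dt

        def coupled_transient(t_end=200.0, dt=1.0):
            p_primary, p_containment = 150.0, 1.0          # bar, illustrative values
            for _ in range(int(t_end / dt)):
                m_dot = primary_break_flow(p_primary, p_containment)   # system -> containment
                p_containment = step_containment(p_containment, m_dot, dt)
                p_primary -= 0.01 * m_dot * dt             # depressurisation with feedback
            return p_primary, p_containment

        print(coupled_transient())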

  14. Does a code make a difference – assessing the English code of practice on international recruitment

    Directory of Open Access Journals (Sweden)

    Mensah Kwadwo

    2009-04-01

    Full Text Available Abstract Background This paper draws from research completed in 2007 to assess the effect of the Department of Health, England, Code of Practice for the international recruitment of health professionals. The Department of Health in England introduced a Code of Practice for international recruitment for National Health Service employers in 2001. The Code required National Health Service employers not to actively recruit from low-income countries, unless there was government-to-government agreement. The Code was updated in 2004. Methods The paper examines trends in inflow of health professionals to the United Kingdom from other countries, using professional registration data and data on applications for work permits. The paper also provides more detailed information from two country case studies in Ghana and Kenya. Results Available data show a considerable reduction in inflow of health professionals, from the peak years up to 2002 (for nurses and 2004 (for doctors. There are multiple causes for this decline, including declining demand in the United Kingdom. In Ghana and Kenya it was found that active recruitment was perceived to have reduced significantly from the United Kingdom, but it is not clear the extent to which the Code was influential in this, or whether other factors such as a lack of vacancies in the United Kingdom explains it. Conclusion Active international recruitment of health professionals was an explicit policy intervention by the Department of Health in England, as one key element in achieving rapid staffing growth, particularly in the period 2000 to 2005, but the level of international recruitment has dropped significantly since early 2006. Regulatory and education changes in the United Kingdom in recent years have also made international entry more difficult. The potential to assess the effect of the Code in England is constrained by the limitations in available databases. This is a crucial lesson for those considering a

  15. Visual mismatch negativity: A predictive coding view

    Directory of Open Access Journals (Sweden)

    Gabor eStefanics

    2014-09-01

    Full Text Available An increasing number of studies investigate the visual mismatch negativity (vMMN or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist’s point of view in a predictive coding framework: (1) experimental protocols and procedures to control ‘refractoriness’ effects; (2) methods to control attention; (3) vMMN and veridical perception.

  16. A Code of Ethics for Referees?

    Science.gov (United States)

    Sturrock, Peter A.

    2004-04-01

    I have read with interest the many letters commenting on the pros and cons of anonymity for referees. While I sympathize with writers who have suffered from referees who are incompetent or uncivil, I also sympathize with those who argue that one would simply exchange one set of problems for another if journals were to require that all referees waive anonymity. Perhaps there is a more direct way to address the issue. It may help if guidelines for referees were to include a code of ethics.

  17. Optimality properties of a proposed precursor to the genetic code.

    Science.gov (United States)

    Butler, Thomas; Goldenfeld, Nigel

    2009-09-01

    We calculate the optimality score of a doublet precursor to the canonical genetic code with respect to mitigating the effects of point mutations and compare our results to corresponding ones for the canonical genetic code. We find that the proposed precursor is much less optimal than that of the canonical code. Our results render unlikely the notion that the doublet precursor was an intermediate state in the evolution of the canonical genetic code. These findings support the notion that code optimality reflects evolutionary dynamics, and that if such a doublet code originally had a biochemical significance, it arose before the emergence of translation.

  18. A surface code quantum computer in silicon.

    Science.gov (United States)

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel, posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.

  19. A graph model for opportunistic network coding

    KAUST Repository

    Sorour, Sameh

    2015-08-12

    © 2015 IEEE. Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) have triggered interest in extending them to more complicated opportunistic network coding (ONC) scenarios, with limited increase in complexity. In this paper, we design a simple IDNC-like graph model for a specific subclass of ONC, by introducing a more generalized definition of its vertices and the notion of vertex aggregation in order to represent the storage of non-instantly-decodable packets in ONC. Based on this representation, we determine the set of pairwise vertex adjacency conditions that can populate this graph with edges so as to guarantee decodability or aggregation for the vertices of each clique in this graph. We then develop the algorithmic procedures that can be applied on the designed graph model to optimize any performance metric for this ONC subclass. A case study on reducing the completion time shows that the proposed framework improves on the performance of IDNC and gets very close to the optimal performance.
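
    The flavour of such a graph model can be conveyed with a small sketch (hypothetical packet sets, plain Python): one vertex per (receiver, wanted packet) pair, with an edge whenever a single coded packet can serve both vertices; a clique then corresponds to an instantly decodable XOR combination. The paper's ONC extension additionally aggregates vertices for non-instantly-decodable packets, which this sketch does not attempt.

        # Build an IDNC-style conflict graph for a toy packet-delivery state.
        from itertools import combinations

        has = {                      # packets already held by each receiver
            "r1": {1, 2},
            "r2": {2, 3},
            "r3": {1, 3},
        }
        wants = {"r1": {3}, "r2": {1}, "r3": {2}}

        vertices = [(r, p) for r, ps in wants.items() for p in ps]

        def adjacent(u, v):
            """Edge if the wanted packets coincide, or each side holds the other's packet."""
            (r1, p1), (r2, p2) = u, v
            return p1 == p2 or (p1 in has[r2] and p2 in has[r1])

        edges = [(u, v) for u, v in combinations(vertices, 2) if adjacent(u, v)]
        print("vertices:", vertices)
        print("edges:", edges)
        # Here every vertex pair is adjacent, so XOR(1, 2, 3) serves all three receivers.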

  20. A construction of quantum turbo product codes based on CSS-type quantum convolutional codes

    Science.gov (United States)

    Xiao, Hailin; Ni, Ju; Xie, Wu; Ouyang, Shan

    As in classical coding theory, turbo product codes (TPCs) built from serially concatenated block codes can approach the Shannon capacity limit and have low decoding complexity. However, special requirements in the quantum setting severely limit the structures of quantum turbo product codes (QTPCs). To design a good structure for QTPCs, we present a new construction of QTPCs based on the interleaved serial concatenation of CSS(L1,L2)-type quantum convolutional codes (QCCs). First, CSS(L1,L2)-type QCCs are proposed by exploiting the theory of CSS-type quantum stabilizer codes and QCCs, and the description and analysis of the encoder circuit are greatly simplified in terms of Hadamard and C-NOT gates. Second, the interleaved coded matrix of QTPCs is derived from the definition of the quantum permutation SWAP gate. Finally, we prove the corresponding relation between the minimum Hamming distance of QTPCs and that of the associated classical TPCs, and describe the state diagram of the QTPC encoder and decoder, which have a highly regular structure and a simple design idea.

  1. A minimum-error, energy-constrained neural code is an instantaneous-rate code.

    Science.gov (United States)

    Johnson, Erik C; Jones, Douglas L; Ratnam, Rama

    2016-04-01

    Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al. Frontiers in Computational Neuroscience, 9, 61 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike-rate depends on the signal and its derivative. In the limit of high spike rates the spike train maximizes fidelity given an energy constraint (average spike-rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike-rates recorded from P-type primary afferents in weakly electric fish. In particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings, the instantaneous spike-rate approximates DC step inputs, matching both the average spike-rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate-coding and temporal-coding, suggesting a possible unifying principle of neural encoding of sensory signals.

  2. Serial turbo trellis coded modulation using a serially concatenated coder

    Science.gov (United States)

    Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Pollara, Fabrizio (Inventor)

    2011-01-01

    Serial concatenated trellis coded modulation (SCTCM) includes an outer coder, an interleaver, a recursive inner coder and a mapping element. The outer coder receives data to be coded and produces outer coded data. The interleaver permutes the outer coded data to produce interleaved data. The recursive inner coder codes the interleaved data to produce inner coded data. The mapping element maps the inner coded data to a symbol. The recursive inner coder has a structure which facilitates iterative decoding of the symbols at a decoder system. The recursive inner coder and the mapping element are selected to maximize the effective free Euclidean distance of a trellis coded modulator formed from the recursive inner coder and the mapping element. The decoder system includes a demodulation unit, an inner SISO (soft-input soft-output) decoder, a deinterleaver, an outer SISO decoder, and an interleaver.

  3. What to do with a Dead Research Code

    Science.gov (United States)

    Nemiroff, Robert J.

    2016-01-01

    The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, and likely deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.

  4. Coupling a Basin Modeling and a Seismic Code using MOAB

    KAUST Repository

    Yan, Mi

    2012-06-02

    We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.

  5. A Binary Representation of the Genetic Code

    CERN Document Server

    Nemzer, Louis R

    2016-01-01

    This article introduces a novel binary representation of the canonical genetic code, in which each of the four mRNA nucleotide bases is assigned a unique 2-bit identifier. These designations have a physiological meaning derived from the molecular structures of, and relationships between, the bases. In this scheme, the 64 possible triplet codons are each indexed by a 6-bit label. The order of the bits reflects the hierarchical organization manifested by the DNA replication/repair and tRNA translation systems. Transition and transversion mutations are naturally expressed as basic binary operations, and the severity of the different types is analyzed. Using a principal component analysis, it is shown that physicochemical properties of amino acids related to protein folding also correlate with particular bit positions of their respective labels. Thus, the likelihood for a particular point mutation to be conservative, and therefore less likely to cause a change in protein functionality, can be estimated.
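
    A hedged sketch of such a labelling is shown below: each base receives a 2-bit identifier, a codon becomes a 6-bit integer, and a transition appears as a single-bit flip. The particular bit assignment is an assumption made for illustration; the article derives its own assignment from the molecular properties of the bases.

        # Illustrative 2-bit-per-base labelling of codons (assignment is assumed).
        BASE_BITS = {"A": 0b00, "G": 0b01, "C": 0b10, "U": 0b11}   # purines 0x, pyrimidines 1x

        def codon_label(codon):
            """Pack a triplet codon into a 6-bit integer label."""
            label = 0
            for base in codon:
                label = (label << 2) | BASE_BITS[base]
            return label

        def transition(codon, position):
            """Under this assignment, a transition (A<->G or C<->U) flips the low bit of one base."""
            bases = list(codon)
            flipped = BASE_BITS[bases[position]] ^ 0b01
            bases[position] = {v: k for k, v in BASE_BITS.items()}[flipped]
            return "".join(bases)

        print(f"{codon_label('AUG'):06b}")     # 6-bit label of the start codon
        print(transition("AUG", 0))            # A -> G at position 0 (a transition)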

  6. A decoding method of an n-length binary BCH code through an (n + 1)n-length binary cyclic code

    Directory of Open Access Journals (Sweden)

    TARIQ SHAH

    2013-09-01

    Full Text Available For a given binary BCH code Cn of length n = 2^s - 1 generated by a polynomial of degree r, there is no binary BCH code of length (n + 1)n generated by a generalized polynomial of degree 2r. However, there does exist a binary cyclic code C(n+1)n of length (n + 1)n such that the binary BCH code Cn is embedded in C(n+1)n. Accordingly, a high code rate is attained through the binary cyclic code C(n+1)n for a binary BCH code Cn. Furthermore, a proposed algorithm facilitates the decoding of a binary BCH code Cn through the decoding of the binary cyclic code C(n+1)n, while the codes Cn and C(n+1)n have the same minimum Hamming distance.

  7. A new three-dimensional general-relativistic hydrodynamics code

    Science.gov (United States)

    Baiotti, L.; Hawke, I.; Montero, P. J.; Rezzolla, L.

    We present a new three-dimensional general relativistic hydrodynamics code, the Whisky code. This code incorporates the expertise developed over the past years in the numerical solution of Einstein equations and of the hydrodynamics equations in a curved spacetime, and is the result of a collaboration of several European Institutes. We here discuss the ability of the code to carry out long-term accurate evolutions of the linear and nonlinear dynamics of isolated relativistic stars.

  8. A new three-dimensional general-relativistic hydrodynamics code

    CERN Document Server

    Baiotti, Luca; Montero, Pedro J; Rezzolla, Luciano

    2010-01-01

    We present a new three-dimensional general relativistic hydrodynamics code, the Whisky code. This code incorporates the expertise developed over the past years in the numerical solution of Einstein equations and of the hydrodynamics equations in a curved spacetime, and is the result of a collaboration of several European Institutes. We here discuss the ability of the code to carry out long-term accurate evolutions of the linear and nonlinear dynamics of isolated relativistic stars.

  9. A Line Based Visualization of Code Evolution

    NARCIS (Netherlands)

    Voinea, S.L.; Telea, A.; Wijk, J.J. van

    2005-01-01

    The source code of software systems changes many times during the system lifecycle. We study how developers can get insight in these changes in order to understand the project context and the product artifacts. For this we propose new techniques for code evolution representation and visualization in

  10. On the Codes over a Semilocal Finite Ring

    Directory of Open Access Journals (Sweden)

    Abdullah Dertli

    2015-10-01

    Full Text Available In this paper, we study the structure of cyclic, quasi-cyclic and constacyclic codes and their skew codes over the finite ring R. The Gray images of cyclic, quasi-cyclic, skew cyclic, skew quasi-cyclic and skew constacyclic codes over R are obtained. A necessary and sufficient condition for cyclic (negacyclic) codes over R to contain their duals has been given. The parameters of quantum error-correcting codes are obtained from both cyclic and negacyclic codes over R. Some examples are given. Firstly, quasi-constacyclic and skew quasi-constacyclic codes are introduced. By defining two inner products, their duality is investigated. A sufficient condition for 1-generator skew quasi-constacyclic codes to be free is determined.

  11. SYMTRAN - A Time-dependent Symmetric Tandem Mirror Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Hua, D; Fowler, T

    2004-06-15

    A time-dependent version of the steady-state radial transport model in symmetric tandem mirrors in Ref. [1] has been coded up and first tests performed. Our code, named SYMTRAN, is an adaptation of the earlier SPHERE code for spheromaks, now modified for tandem mirror physics. Motivated by Post's new concept of kinetic stabilization of symmetric mirrors, it is an extension of the earlier TAMRAC rate-equation code omitting radial transport [2], which successfully accounted for experimental results in TMX. The SYMTRAN code differs from the earlier tandem mirror radial transport code TMT in that our code is focused on axisymmetric tandem mirrors and classical diffusion, whereas TMT emphasized non-ambipolar transport in TMX and MFTF-B due to yin-yang plugs and non-symmetric transitions between the plugs and axisymmetric center cell. Both codes exhibit interesting but different non-linear behavior.

  12. Toward a Code of Conduct for Graduate Education

    Science.gov (United States)

    Proper, Eve

    2012-01-01

    Most academic disciplines promulgate codes of ethics that serve as public statements of professional norms of their membership. These codes serve both symbolic and practical purposes, stating to both members and the larger public what a discipline's highest ethics are. This article explores what scholarly society codes of ethics could say about…

  13. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina;

    2014-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder...

  14. A Case for Dynamic Reverse-code Generation

    DEFF Research Database (Denmark)

    Lee, Jooyong

    2007-01-01

    ... These implementations, however, inherently do not scale. As has often been said, the ultimate solution for backtracking is to use reverse code: executing the reverse code restores the previous states of a program. In our earlier work, we presented a method to generate reverse code on the fly while running a debugger. ... This article presents a case study of dynamic reverse-code generation. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation can...

  15. A MCTF video coding scheme based on distributed source coding principles

    Science.gov (United States)

    Tagliasacchi, Marco; Tubaro, Stefano

    2005-07-01

    Motion Compensated Temporal Filtering (MCTF) has proved to be an efficient coding tool in the design of open-loop scalable video codecs. In this paper we propose an MCTF video coding scheme based on lifting, where the prediction step is implemented using PRISM (Power efficient, Robust, hIgh compression Syndrome-based Multimedia coding), a video coding framework built on distributed source coding principles. We study the effect of integrating the update step at the encoder or at the decoder side. We show that the latter approach improves the quality of the side information exploited during decoding. We present the analytical results obtained by modeling the video signal along the motion trajectories as a first-order auto-regressive process. We show that the update step at the decoder halves the contribution of the quantization noise. We also include experimental results with real video data that demonstrate the potential of this approach when the video sequences are coded at low bitrates.
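
    The predict/update terminology can be fixed with a one-dimensional toy lifting step (Haar-like, shown below in Python): the predict step forms high-pass residuals from the even samples, and the update step adds filtered residuals back to obtain the low-pass band. In MCTF the same structure runs along motion trajectories, and the paper's point is to move the update step to the decoder; the sketch only illustrates the lifting structure itself.

        import numpy as np

        def lifting_forward(x):
            even, odd = x[0::2].astype(float), x[1::2].astype(float)
            high = odd - even                 # predict: residual of odd from even samples
            low = even + 0.5 * high           # update: low-pass band
            return low, high

        def lifting_inverse(low, high):
            even = low - 0.5 * high           # undo update
            odd = high + even                 # undo predict
            x = np.empty(even.size + odd.size)
            x[0::2], x[1::2] = even, odd
            return x

        x = np.array([3, 5, 4, 4, 8, 10, 9, 7])
        low, high = lifting_forward(x)
        print(low, high)
        print(lifting_inverse(low, high))     # perfect reconstruction of x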

  16. Cyclic and constacyclic codes over a non-chain ring

    Directory of Open Access Journals (Sweden)

    Ayşegül Bayram

    2014-09-01

    Full Text Available In this study, we consider linear and especially cyclic codes over the non-chain ring $Z_{p}[v]/\langle v^{p}-v\rangle$ where $p$ is a prime. This is a generalization of the case $p=3$. Further, in this work the structure of constacyclic codes is studied as well. This study benefits mainly from a Gray map which preserves the distance between codes over this ring and $p$-ary codes, and moreover this map illuminates the structure of these codes. Furthermore, a MacWilliams type identity is presented together with some illustrative examples.

  17. Improved decoding for a concatenated coding system

    OpenAIRE

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-b symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, onl...

  18. Benchmark study between FIDAP and a cellular automata code

    Science.gov (United States)

    Akau, R. L.; Stockman, H. W.

    A fluid flow benchmark exercise was conducted to compare results between a cellular automata code and FIDAP. Cellular automata codes are free from gridding constraints, and are generally used to model slow (Reynolds number approximately 1) flows around complex solid obstacles. However, the accuracy of cellular automata codes at higher Reynolds numbers, where inertial terms are significant, is not well-documented. In order to validate the cellular automata code, two fluid-flow problems were investigated. For both problems, flow was assumed to be laminar, two-dimensional, isothermal, incompressible and periodic. Results showed that the cellular automata code simulated the overall behavior of the flow field.

  19. A class of binary cyclic codes with five weights

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, the dual code of the binary cyclic code of length 2^n - 1 with three zeros α, α^{t1} and α^{t2} is proven to have five nonzero Hamming weights in the case that n ≥ 4 is even and t1 = 2^{n/2} + 1, t2 = 2^{n-1} - 2^{n/2+1} + 1 or 2^{n/2} + 3, where α is a primitive element of the finite field F_{2^n}. The dual code is a divisible code of level n/2 + 1, and its weight distribution is also completely determined. When n = 4, the dual code satisfies Ward’s bound.

  20. A reflexive exploration of two qualitative data coding techniques

    Directory of Open Access Journals (Sweden)

    Erik Blair

    2016-01-01

    Full Text Available In an attempt to help find meaning within qualitative data, researchers commonly start by coding their data. There are a number of coding systems available to researchers and this reflexive account explores my reflections on the use of two such techniques. As part of a larger investigation, two pilot studies were undertaken as a means to examine the relative merits of open coding and template coding for examining transcripts. This article does not describe the research project per se but attempts to step back and offer a reflexive account of the development of data coding tools. Here I reflect upon and evaluate the two data coding techniques that were piloted, and discuss how using appropriate aspects of both led to the development of my final data coding approach. My exploration found there was no clear-cut ‘best’ option but that the data coding techniques needed to be reflexively-aligned to meet the specific needs of my project. This reflection suggests that, when coding qualitative data, researchers should be methodologically thoughtful when they attempt to apply any data coding technique; that they do not assume pre-established tools are aligned to their particular paradigm; and that they consider combining and refining established techniques as a means to define their own specific codes. DOI: 10.2458/azu_jmmss.v6i1.18772

  1. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    Energy Technology Data Exchange (ETDEWEB)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States); Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  2. A FINE GRANULAR JOINT SOURCE CHANNEL CODING METHOD

    Institute of Scientific and Technical Information of China (English)

    Zhuo Li; Shen Lansun

    2003-01-01

    An improved FGS (Fine Granular Scalability) coding method is proposed in this letter, which is based on human visual characteristics. This method adjusts the FGS coding frame rate according to an evaluation of the video sequence, so as to improve the coding efficiency and the subjective perceived quality of the reconstructed images. Finally, a fine granular joint source channel coding scheme is proposed based on this source coding method, which not only utilizes network resources efficiently, but also guarantees reliable transmission of the video information.

  3. An improved canine genome and a comprehensive catalogue of coding genes and non-coding transcripts.

    Directory of Open Access Journals (Sweden)

    Marc P Hoeppner

    Full Text Available The domestic dog, Canis familiaris, is a well-established model system for mapping trait and disease loci. While the original draft sequence was of good quality, gaps were abundant particularly in promoter regions of the genome, negatively impacting the annotation and study of candidate genes. Here, we present an improved genome build, canFam3.1, which includes 85 MB of novel sequence and now covers 99.8% of the euchromatic portion of the genome. We also present multiple RNA-Sequencing data sets from 10 different canine tissues to catalog ∼175,000 expressed loci. While about 90% of the coding genes previously annotated by EnsEMBL have measurable expression in at least one sample, the number of transcript isoforms detected by our data expands the EnsEMBL annotations by a factor of four. Syntenic comparison with the human genome revealed an additional ∼3,000 loci that are characterized as protein coding in human and were also expressed in the dog, suggesting that those were previously not annotated in the EnsEMBL canine gene set. In addition to ∼20,700 high-confidence protein coding loci, we found ∼4,600 antisense transcripts overlapping exons of protein coding genes, ∼7,200 intergenic multi-exon transcripts without coding potential, likely candidates for long intergenic non-coding RNAs (lincRNAs), and ∼11,000 transcripts were reported by two different library construction methods but did not fit any of the above categories. Of the lincRNAs, about 6,000 have no annotated orthologs in human or mouse. Functional analysis of two novel transcripts with shRNA in a mouse kidney cell line altered cell morphology and motility. All in all, we provide a much-improved annotation of the canine genome and suggest regulatory functions for several of the novel non-coding transcripts.

  4. Codes and morals: is there a missing link? (The Nuremberg Code revisited).

    Science.gov (United States)

    Hick, C

    1998-01-01

    Codes are a well known and popular but weak form of ethical regulation in medical practice. There is, however, a lack of research on the relations between moral judgments and ethical Codes, or on the possibility of morally justifying these Codes. Our analysis begins by showing, given the Nuremberg Code, how a typical reference to natural law has historically served as moral justification. We then indicate, following the analyses of H. T. Engelhardt, Jr., and A. MacIntyre, why such general moral justifications of codes must necessarily fail in a society of "moral strangers." Going beyond Engelhardt we argue, that after the genealogical suspicion in morals raised by Nietzsche, not even Engelhardt's "principle of permission" can be rationally justified in a strong sense--a problem of transcendental argumentation in morals already realized by I. Kant. Therefore, we propose to abandon the project of providing general justifications for moral judgements and to replace it with a hermeneutical analysis of ethical meanings in real-world situations, starting with the archetypal ethical situation, the encounter with the Other (E. Levinas).

  5. Oscillatory network coding of a global stimulus

    Science.gov (United States)

    Doiron, Brent; Longtin, Andre; Lindner, Benjamin

    2003-05-01

    The pyramidal cells of weakly electric fish respond to environmental broadband electrical stimuli. They have recently been shown to exhibit oscillations in mean firing rate in response to global stimuli that affect the whole body simultaneously, similar to the communication stimuli of these animals. In contrast, for spatially localized stimuli such as those produced by prey, the firing rate simply fluctuates around a constant mean. This switch in coding strategy relies on delayed negative (inhibitory) feedback connections in the neural network. We first summarize these experimental findings, as well as our mathematical modeling of this effect using a globally-coupled delayed inhibitory network of leaky integrate-and-fire neurons (LIFs). Here we study the mechanism of the transition from oscillatory to non-oscillatory firing states in such networks. This is done using simulations of a simpler network of LIFs with current-based Gaussian white noise stimuli, rather than conductance-based bandlimited Gaussian stimuli. We focus on the effect of feedback gain, current bias, and stimulus intensity on the oscillation under global conditions, and see how the decrease of these parameters brings on a response characteristic of the local case. These simulations are performed for a fixed amount of individual synaptic noise to each cell. We also show how insights into these results can be obtained from the analysis of stimulus-induced oscillations in a simpler rate model description of this spatially-extended excitable system.

  6. A New Evolutionary Algorithm Based on the Decimal Coding

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Traditional Evolutionary Algorithms (EAs) are based on binary coding, real-number coding, structure coding and so on, and these coding strategies have their own advantages and disadvantages for the optimization of functions. In this paper a new Decimal Coding Strategy (DCS), which is convenient for space division and alterable precision, is proposed, and a theoretical analysis of its implicit parallelism and convergence is also given. We also redesign several genetic operators for the decimal code. In order to utilize the historical information of the existing individuals in the process of evolution and to avoid repeated exploration, the strategies of space shrinking and alterable precision are adopted. Finally, the evolutionary algorithm based on decimal coding (DCEA) was applied to the optimization of functions, parameter optimization and mixed-integer nonlinear programming. Comparison with traditional GAs was made, and the experimental results show that the performance of DCEA is better than that of traditional GAs.
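
    A minimal sketch of what a decimal coding looks like is given below: a real parameter is mapped to a fixed number of base-10 digits, decoded back by rescaling, and mutated digit by digit, so precision is controlled by the digit count. The operators are illustrative toys, not the DCEA operators of the paper.

        import random

        def encode(x, lo, hi, n_digits):
            """Map x in [lo, hi] to a list of decimal digits."""
            scaled = int(round((x - lo) / (hi - lo) * (10 ** n_digits - 1)))
            return [int(d) for d in str(scaled).zfill(n_digits)]

        def decode(digits, lo, hi):
            value = int("".join(map(str, digits)))
            return lo + value / (10 ** len(digits) - 1) * (hi - lo)

        def mutate(digits, rate=0.2):
            """Per-digit mutation: replace a digit with a random digit."""
            return [random.randint(0, 9) if random.random() < rate else d for d in digits]

        random.seed(1)
        genes = encode(3.14159, lo=0.0, hi=10.0, n_digits=6)
        print(genes, decode(genes, 0.0, 10.0))
        print(decode(mutate(genes), 0.0, 10.0))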

  7. A new electromagnetic code for ICRF antenna in EAST

    CERN Document Server

    Yang, Hua; Dong, Sa; Zhang, Xin-Jun; Zhao, Yan-Ping; Shang, Lei

    2015-01-01

    The demand for an effective tool to help in the design of ion cyclotron radio frequency (ICRF) antenna systems for fusion experiments has driven the development of predictive codes. A new electromagnetic code based on the method of moments (MoM) is described in this paper. The code computes the electromagnetic field by solving the electric field integral equation. The structure of the ICRF antenna is discretized with a triangular mesh. Using the new code, the scattering parameters and the surface currents are computed and compared with results from the commercial code CST. Moreover, the power spectra are studied with different toroidal phases for heating and current drive. Good agreement between the simulation results of the new code and CST is obtained; the code has thus been validated against CST for the EAST ICRF antenna.

  8. CONSTRUCTION OF REGULAR LDPC LIKE CODES BASED ON FULL RANK CODES AND THEIR ITERATIVE DECODING USING A PARITY CHECK TREE

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2011-09-01

    Full Text Available Low-density parity-check (LDPC) codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical Shannon limit for a memoryless channel. LDPC codes are finding increasing use in applications such as LTE networks, digital television, high-density data storage systems and deep-space communication systems. Several algebraic and combinatorial methods are available for constructing LDPC codes. In this paper we discuss a novel low-complexity algebraic method for constructing regular LDPC-like codes derived from full rank codes. We demonstrate that by employing these codes over AWGN channels, coding gains in excess of 2 dB over uncoded systems can be realized when soft iterative decoding using a parity check tree is employed.
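
    As a small illustration of parity-check-driven iterative decoding (not the soft, parity-check-tree decoder of the paper, and with an arbitrary toy matrix rather than one derived from a full rank code), the following Python snippet runs a hard-decision bit-flipping decoder.

        import numpy as np

        H = np.array([[1, 1, 0, 1, 0, 0],     # toy parity-check matrix
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1]])

        def bit_flip_decode(r, H, max_iter=20):
            r = r.copy()
            for _ in range(max_iter):
                syndrome = H @ r % 2
                if not syndrome.any():
                    return r, True                    # all parity checks satisfied
                # flip the bit involved in the largest number of failed checks
                failed = H[syndrome == 1].sum(axis=0)
                r[np.argmax(failed)] ^= 1
            return r, False

        codeword = np.zeros(6, dtype=int)             # the all-zero word is always a codeword
        received = codeword.copy()
        received[2] ^= 1                              # single bit error
        print(bit_flip_decode(received, H))           # error corrected back to all zeros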

  9. A parallel memory architecture for video coding

    Institute of Scientific and Technical Information of China (English)

    Jian-ying PENG; Xiao-lang YAN; De-xian LI; Li-zhong CHEN

    2008-01-01

    To efficiently exploit the performance of single instruction multiple data (SIMD) architectures for video coding, a parallel memory architecture with a power-of-two number of memory modules is proposed. It employs two novel skewing schemes to provide conflict-free access to adjacent elements (8-bit and 16-bit data types) or to elements with power-of-two intervals in both horizontal and vertical directions, which was not possible in previous parallel memory architectures. Area consumption and delay estimates are given for 4, 8 and 16 memory modules. Under a 0.18-μm CMOS technology, the synthesis results show that the proposed system can achieve a 230 MHz clock frequency with 16 memory modules at a cost of 19k gates, when read and write latencies are 3 and 2 clock cycles, respectively. We implement the proposed parallel memory architecture on a video signal processor (VSP). The results show that the VSP enhanced with the proposed architecture achieves a 1.28x speedup for H.264 real-time decoding.
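
    The conflict-free idea can be illustrated with a simple skewing function (an assumption for illustration; the paper's two schemes are more elaborate and also handle power-of-two strides and 16-bit data): assigning pixel (x, y) to module (x + y) mod N lets N horizontally adjacent and N vertically adjacent pixels be fetched without any two hitting the same module.

        N = 8                                   # number of memory modules

        def module(x, y):
            """Skewed module assignment for pixel (x, y)."""
            return (x + y) % N

        row = [module(x, 3) for x in range(N)]          # N horizontally adjacent pixels
        col = [module(5, y) for y in range(N)]          # N vertically adjacent pixels
        print(sorted(row) == list(range(N)))            # True: no two share a module
        print(sorted(col) == list(range(N)))            # True: no two share a module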

  10. A code generation framework for the ALMA common software

    Science.gov (United States)

    Troncoso, Nicolás; von Brand, Horst H.; Ibsen, Jorge; Mora, Matias; Gonzalez, Victor; Chiozzi, Gianluca; Jeram, Bogdan; Sommer, Heiko; Zamora, Gabriel; Tejeda, Alexis

    2010-07-01

    Code generation helps in smoothing the learning curve of a complex application framework and in reducing the number of Lines Of Code (LOC) that a developer needs to craft. The ALMA Common Software (ACS) has adopted code generation in specific areas, but we are now exploiting the more comprehensive approach of Model Driven code generation to transform a UML model directly into a full implementation in the ACS framework. This approach makes it easier for newcomers to grasp the principles of the framework. Moreover, a lower handcrafted LOC reduces the error rate. Additional benefits achieved by model driven code generation are: software reuse, implicit application of design patterns and automatic test generation. A model driven approach to design also makes it possible to use the same model with different frameworks, by generating for different targets. The generation framework presented in this paper uses openArchitectureWare as the model-to-text translator. OpenArchitectureWare provides a powerful functional language that makes it easier to implement the correct mapping of data types, the main difficulty encountered in the translation process. The output is an ACS application readily usable by the developer, including the necessary deployment configuration, thus minimizing any configuration burden during testing. The specific application code is implemented by extending generated classes. Therefore, generated and manually crafted code are kept apart, simplifying the code generation process and aiding the developers by keeping a clean logical separation between the two. Our first results show that code generation dramatically improves code productivity.

  11. A GPU code for analytic continuation through a sampling method

    Science.gov (United States)

    Nordström, Johan; Schött, Johan; Locht, Inka L. M.; Di Marco, Igor

    We here present a code for performing analytic continuation of fermionic Green's functions and self-energies as well as bosonic susceptibilities on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVidia. Detailed scaling tests are presented, for two different GPUs, in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.

  12. Code Design and Shuffled Iterative Decoding of a Quasi-Cyclic LDPC Coded OFDM System

    Institute of Scientific and Technical Information of China (English)

    LIU Binbin; BAI Dong; GE Qihong; MEI Shunliang

    2009-01-01

    In multipath environments, the error rate performance of orthogonal frequency division multiplexing (OFDM) is severely degraded by the deep fading subcarriers. Powerful error-correcting codes must be used with OFDM. This paper presents a quasi-cyclic low-density parity-check (LDPC) coded OFDM system, in which the redundant bits of each codeword are mapped to a higher-order modulation constellation. The optimal degree distribution was calculated using density evolution. The corresponding quasi-cyclic LDPC code was then constructed using circulant permutation matrices. Group shuffled message passing scheduling was used in the iterative decoding. Simulation results show that the system achieves better error rate performance and faster decoding convergence than conventional approaches on both additive white Gaussian noise (AWGN) and Rayleigh fading channels.
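
    The construction from circulant permutation matrices can be sketched as follows: each entry of a small exponent matrix is expanded into a cyclically shifted identity block (with -1 denoting an all-zero block). The exponent matrix below is arbitrary; in the paper it is chosen to realize the degree distribution obtained from density evolution.

        import numpy as np

        def circulant_permutation(shift, z):
            """z x z identity matrix cyclically shifted by `shift` columns."""
            return np.roll(np.eye(z, dtype=int), shift, axis=1)

        def expand(exponents, z):
            """Expand an exponent matrix into a full quasi-cyclic parity-check matrix."""
            rows = []
            for row in exponents:
                blocks = [np.zeros((z, z), dtype=int) if e < 0 else circulant_permutation(e, z)
                          for e in row]
                rows.append(np.hstack(blocks))
            return np.vstack(rows)

        E = [[0, 1, -1, 2],
             [2, -1, 0, 1]]
        H = expand(E, z=4)
        print(H.shape)        # (8, 16)
        print(H)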

  13. A comparative study of seismic provisions between International Building Code 2003 and Uniform Building Code 1997

    Institute of Scientific and Technical Information of China (English)

    Wenshen Pong; Zu-Hsu Lee; Anson Lee

    2006-01-01

    This study focuses on the comparison of the Uniform Building Code (UBC) 1997 and International Building Code (IBC) 2003 in relation to the seismic design and analysis of special steel moment resisting frame buildings (SMRF). This paper formulates a numerical study of a steel SMRF building, studied in four different situations, namely: as an office building in San Francisco; as an office building in Sacramento; as an essential facility in San Francisco, and as an essential facility in Sacramento. The analytical results of the model buildings are then compared and analyzed taking note of any significant differences. This case study explores variations in the results obtained using the two codes, particularly the design base shear and drift ratios as they relate to different locations and occupancy use. This study also proves that IBC 2003 is more stringent for the redundancy factor under design category E for the SMRF building, and drift limits for essential facilities.

  14. A Theoretical Method for Estimating Performance of Reed-Solomon Codes Concatenated with Orthogonal Space-Time Block Codes

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Based on studies of Reed-Solomon codes and orthogonal space-time block codes over the Rayleigh fading channel, a theoretical method for estimating the performance of Reed-Solomon codes concatenated with orthogonal space-time block codes is presented in this paper, and an upper bound on the bit error rate is also obtained. It is shown through computer simulations that the required signal-to-noise ratio is reduced by about 15 dB or more after orthogonal space-time block codes are concatenated with Reed-Solomon (15,6) codes over the Rayleigh fading channel, when the bit error rate is 10^-4.

  15. code_swarm: a design study in organic software visualization.

    Science.gov (United States)

    Ogawa, Michael; Ma, Kwan-Liu

    2009-01-01

    In May of 2008, we published online a series of software visualization videos using a method called code_swarm. Shortly thereafter, we made the code open source and its popularity took off. This paper is a study of our code_swarm application, comprising its design, results and public response. We share our design methodology, including why we chose the organic information visualization technique, how we designed for both developers and a casual audience, and what lessons we learned from our experiment. We validate the results produced by code_swarm through a qualitative analysis and by gathering online user comments. Furthermore, we successfully released the code as open source, and the software community used it to visualize their own projects and shared their results as well. In the end, we believe code_swarm has positive implications for the future of organic information design and open source information visualization practice.

  16. RAYS: a geometrical optics code for EBT

    Energy Technology Data Exchange (ETDEWEB)

    Batchelor, D.B.; Goldfinger, R.C.

    1982-04-01

    The theory, structure, and operation of the code are described. Mathematical details of equilibrium subroutines for slab, bumpy torus, and tokamak plasma geometry are presented. Wave dispersion and absorption subroutines are presented for frequencies ranging from the ion cyclotron frequency to the electron cyclotron frequency. Graphics postprocessors for RAYS output data are also described.

  17. A line-based visualization of code evolution

    OpenAIRE

    Voinea, S.L. (Lucian); Telea, A.C. (Alexandru); van Wijk, J.J.

    2005-01-01

    The source code of software systems changes many times during the system lifecycle. We study how developers can get insight in these changes in order to understand the project context and the product artifacts. For this we propose new techniques for code evolution representation and visualization interaction from a version-centric perspective. Central to our approach is a line-based display of the changing code, where each file version is shown as a column and the horizontal axis shows time. ...

  18. Continuous Materiality: Through a Hierarchy of Computational Codes

    Directory of Open Access Journals (Sweden)

    Jichen Zhu

    2008-01-01

    Full Text Available The legacy of Cartesian dualism inherent in linguistic theory deeply influences current views on the relation between natural language, computer code, and the physical world. However, the oversimplified distinction between mind and body falls short of capturing the complex interaction between the material and the immaterial. In this paper, we posit a hierarchy of codes to delineate a wide spectrum of continuous materiality. Our research suggests that diagrams in architecture provide a valuable analog for approaching computer code in emergent digital systems. After commenting on ways that Cartesian dualism continues to haunt discussions of code, we turn our attention to diagrams and design morphology. Finally we notice the implications a material understanding of code bears for further research on the relation between human cognition and digital code. Our discussion concludes by noticing several areas that we have projected for ongoing research.

  19. CALMAR: A New Versatile Code Library for Adjustment from Measurements

    Directory of Open Access Journals (Sweden)

    Grégoire G.

    2016-01-01

    Full Text Available CALMAR, a new library for adjustment, has been developed. This code performs simultaneous shape and level adjustment of an initial prior spectrum from measured reaction rates of activation foils. It is written in C++ using the ROOT data analysis framework, with all its linear algebra classes. The STAYSL code has also been reimplemented in this library. Use of the code is very flexible: stand-alone, inside a C++ code, or driven by scripts. Validation and test cases are in progress. These cases will be included in the code package that will be made available to the community. Future developments are discussed. The code should support the new Generalized Nuclear Data (GND) format, which has many advantages compared to ENDF.

  20. Development of a Set of Neutron Kinetics Codes for CEFR

    Institute of Scientific and Technical Information of China (English)

    TIANHe-chun

    2003-01-01

    The neutron kinetics analysis codes now used for CEFR are quite limited in function: they do not satisfy multi-purpose or detailed analysis requirements, and their calculation accuracy is not high. For this reason, it is necessary to develop a set of neutron kinetics codes for CEFR design, physical startup and operation. The codes developed here include NKF, INHR, RHOT and DROP.

  1. A New Approach to Coding in Content Based MANETs

    OpenAIRE

    Joy, Joshua; Yu, Yu-Ting; Perez, Victor; Lu, Dennis; Gerla, Mario

    2015-01-01

    In content-based mobile ad hoc networks (CB-MANETs), random linear network coding (NC) can be used to reliably disseminate large files under intermittent connectivity. Conventional NC involves random unrestricted coding at intermediate nodes. This however is vulnerable to pollution attacks. To avoid attacks, a brute force approach is to restrict the mixing at the source. However, source restricted NC generally reduces the robustness of the code in the face of errors, losses and mobility induc...

  2. A Deterministic Transport Code for Space Environment Electrons

    Science.gov (United States)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.

    2010-01-01

    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  3. QR Codes in the Library: Are They Worth the Effort? Analysis of a QR Code Pilot Project

    OpenAIRE

    Wilson, Andrew M

    2012-01-01

    The literature is filled with potential uses for Quick Response (QR) codes in the library setting, but few library QR code projects have publicized usage statistics. A pilot project carried out in the Eda Kuhn Loeb Music Library of the Harvard College Library sought to determine whether library patrons actually understand and use QR codes. Results and analysis of the pilot project are provided, attempting to answer the question of whether QR codes are worth the effort for libraries.

  4. A construction of fully diverse unitary space-time codes

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Fully diverse unitary space-time codes are useful in multiantenna communications, especially in multiantenna differential modulation. Recently, two constructions of parametric fully diverse unitary space-time codes for three-antenna systems have been introduced. We propose a new construction method based on these constructions. In the present paper, fully diverse codes for systems with an odd prime number of antennas are obtained from this construction. Space-time codes from the present construction are found to have better error performance than many of the best known ones.

  5. A construction of fully diverse unitary space-time codes

    Institute of Scientific and Technical Information of China (English)

    YU Fei; TONG HongXi

    2009-01-01

    Fully diverse unitary space-time codes are useful in multiantenna communications, especially in multiantenna differential modulation. Recently, two constructions of parametric fully diverse unitary space-time codes for three-antenna systems have been introduced. We propose a new construction method based on these constructions. In the present paper, fully diverse codes for systems with an odd prime number of antennas are obtained from this construction. Space-time codes from the present construction are found to have better error performance than many of the best known ones.

  6. A Modified Vertex—Based Shape Coding Algorithm

    Institute of Scientific and Technical Information of China (English)

    石旭利; 张兆扬

    2002-01-01

    This paper proposes a modified shape coding algorithm called modified vertex-based shape coding (MVBSC) to encode the boundary of a visual object compactly, using a modified polygonal approximation approach that applies modified curvature scale space (CSS) theory to extract feature points.

  7. Code-Mixing as a Bilingual Instructional Strategy

    Science.gov (United States)

    Jiang, Yih-Lin Belinda; García, Georgia Earnest; Willis, Arlette Ingram

    2014-01-01

    This study investigated code-mixing practices, specifically the use of L2 (English) in an L1 (Chinese) class in a U.S. bilingual program. Our findings indicate that the code-mixing practices made and prompted by the teacher served five pedagogical functions: (a) to enhance students' bilingualism and bilingual learning, (b) to review and…

  8. Rationale for Student Dress Codes: A Review of School Handbooks

    Science.gov (United States)

    Freeburg, Elizabeth W.; Workman, Jane E.; Lentz-Hees, Elizabeth S.

    2004-01-01

    Through dress codes, schools establish rules governing student appearance. This study examined stated rationales for dress and appearance codes in secondary school handbooks; 182 handbooks were received. Of 150 handbooks containing a rationale, 117 related dress and appearance regulations to students' right to a non-disruptive educational…

  9. The RCVS codes of conduct: what's in a word?

    OpenAIRE

    Mcculloch, S.; Reiss, M.; Jinman, P.; Wathes, C.

    2014-01-01

    In 2012, the RCVS introduced a new Code of Professional Conduct for Veterinary Surgeons, replacing the Guide to Professional Conduct which had existed until then. Is a common Code relevant for the veterinarian's many roles? There's more to think about here than just the change of name, write Steven McCulloch, Michael Reiss, Peter Jinman and Christopher Wathes.

  10. Core-seis: a code for LMFBR core seismic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chellapandi, P.; Ravi, R.; Chetal, S.C.; Bhoje, S.B. [Indira Gandhi Centre for Atomic Research, Kalpakkam (India). Reactor Group

    1995-12-31

    This paper deals with a computer code CORE-SEIS specially developed for seismic analysis of LMFBR core configurations. For demonstrating the prediction capability of the code, results are presented for one of the MONJU reactor core mock ups which deals with a cluster of 37 subassemblies kept in water. (author). 3 refs., 7 figs., 2 tabs.

  11. TEA: A Code for Calculating Thermochemical Equilibrium Abundances

    CERN Document Server

    Blecic, Jasmina; Bowman, M Oliver

    2015-01-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. (1958) and Eriksson (1971). It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp (1999), the free thermochemical equilibrium code CEA (Chemical Equilibrium with Applications), and the example given by White et al. (1958). Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is ...
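
    As a hedged illustration of the underlying idea only (not TEA's own Lagrangian iteration), the short Python sketch below minimizes a dimensionless Gibbs free energy subject to element-balance constraints using SciPy. The species list, elemental abundances and g_i/(RT) values are placeholders, not real thermodynamic data.

```python
# Minimal sketch of Gibbs free-energy minimization for molecular abundances,
# in the spirit of the method TEA implements (White et al. 1958).  Species,
# elemental abundances and g_i/(RT) values are illustrative placeholders only.
import numpy as np
from scipy.optimize import minimize

species = ["H2", "H2O", "CO", "CO2", "CH4"]
# Stoichiometry: rows = elements (H, C, O), columns = species.
A = np.array([[2, 2, 0, 0, 4],    # H atoms per molecule
              [0, 0, 1, 1, 1],    # C
              [0, 1, 1, 2, 0]])   # O
b = np.array([1.0, 1e-4, 2e-4])   # total elemental abundances (placeholder)
g_rt = np.array([-20.0, -35.0, -30.0, -50.0, -25.0])   # placeholder g_i/(RT)

def gibbs(n):
    n = np.clip(n, 1e-30, None)                 # avoid log(0)
    return np.sum(n * (g_rt + np.log(n / n.sum())))

x0 = np.full(len(species), 1e-4)
res = minimize(gibbs, x0, method="SLSQP",
               bounds=[(1e-30, None)] * len(species),
               constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])

for name, n in zip(species, res.x):
    print(f"{name:4s}  {n:.3e} mol")
```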

  12. Mock Code: A Code Blue Scenario Requested by and Developed for Registered Nurses

    Science.gov (United States)

    Rideout, Janice; Pritchett-Kelly, Sherry; McDonald, Melissa; Mullins-Richards, Paula; Dubrowski, Adam

    2016-01-01

    The use of simulation in medical training is quickly becoming more common, with applications in emergency, surgical, and nursing education. Recently, registered nurses working in surgical inpatient units requested a mock code simulation to practice skills, improve knowledge, and build self-confidence in a safe and controlled environment. A simulation scenario using a high-fidelity mannequin was developed and will be discussed herein. PMID:28123919

  13. A Trustability Metric for Code Search based on Developer Karma

    CERN Document Server

    Gysin, Florian S

    2010-01-01

    The promise of search-driven development is that developers will save time and resources by reusing external code in their local projects. To efficiently integrate this code, users must be able to trust it, thus trustability of code search results is just as important as their relevance. In this paper, we introduce a trustability metric to help users assess the quality of code search results and therefore ease the cost-benefit analysis they undertake trying to find suitable integration candidates. The proposed trustability metric incorporates both user votes and cross-project activity of developers to calculate a "karma" value for each developer. Through the karma value of all its developers a project is ranked on a trustability scale. We present JBender, a proof-of-concept code search engine which implements our trustability metric and we discuss preliminary results from an evaluation of the prototype.

  14. Anytime coding on the infinite bandwidth AWGN channel: A sequential semi-orthogonal optimal code

    OpenAIRE

    Sahai, Anant

    2006-01-01

    It is well known that orthogonal coding can be used to approach the Shannon capacity of the power-constrained AWGN channel without a bandwidth constraint. This correspondence describes a semi-orthogonal variation of pulse position modulation that is sequential in nature -- bits can be ``streamed across'' without having to buffer up blocks of bits at the transmitter. ML decoding results in an exponentially small probability of error as a function of tolerated receiver delay and thus eventually...

  15. On Predictive Coding for Erasure Channels Using a Kalman Framework

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Murthi, Manohar; Andersen, Søren Vang;

    2009-01-01

    We present a new design method for robust low-delay coding of auto-regressive (AR) sources for transmission across erasure channels. The method is based on Linear Predictive Coding (LPC) with Kalman estimation at the decoder. The method designs the encoder and decoder off-line through an iterative...

  16. The motivational interviewing skill code : Reliability and a critical appraisal

    NARCIS (Netherlands)

    de Jonge, JM; Schippers, GM; Schaap, CPDR

    2005-01-01

    The Motivational Interviewing Skill Code (MISC) is a coding system developed to measure adherence to motivational interviewing (MI). MI is an effective clinical style used in different treatment situations. Counsellors practising MI have to follow general principles and avoid certain traps. In the p

  17. A CFD code comparison of wind turbine wakes

    DEFF Research Database (Denmark)

    Laan, van der, Paul Maarten; Storey, R. C.; Sørensen, Niels N.;

    2014-01-01

    A comparison is made between the EllipSys3D and SnS CFD codes. Both codes are used to perform Large-Eddy Simulations (LES) of single wind turbine wakes, using the actuator disk method. The comparison shows that both LES models predict similar velocity deficits and stream-wise Reynolds...

  18. Coding as a Trojan Horse for Mathematics Education Reform

    Science.gov (United States)

    Gadanidis, George

    2015-01-01

    The history of mathematics educational reform is replete with innovations taken up enthusiastically by early adopters without significant transfer to other classrooms. This paper explores the coupling of coding and mathematics education to create the possibility that coding may serve as a Trojan Horse for mathematics education reform. That is,…

  19. Porting a Hall MHD Code to a Graphic Processing Unit

    Science.gov (United States)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.

  20. A Practical Approach to Lossy Joint Source-Channel Coding

    CERN Document Server

    Fresia, Maria

    2007-01-01

    This work is devoted to practical joint source-channel coding. Although the proposed approach has more general scope, for the sake of clarity we focus on a specific application example, namely, the transmission of digital images over noisy binary-input output-symmetric channels. The basic building blocks of most state-of-the-art source coders are: 1) a linear transformation; 2) scalar quantization of the transform coefficients; 3) probability modeling of the sequence of quantization indices; 4) an entropy coding stage. We identify the weakness of the conventional separated source-channel coding approach in the catastrophic behavior of the entropy coding stage. Hence, we replace this stage with linear coding, which directly maps the sequence of redundant quantizer output symbols into a channel codeword. We show that this approach does not entail any loss of optimality in the asymptotic regime of large block length. However, in the practical regime of finite block length and low decoding complexity our approach ...

  1. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

    Full Text Available Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for parallelization of sequential assembly code. The main goal is to develop a parallelizator that reads sequential assembler code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembler input file into program objects suitable for further processing; after that, static single assignment form is constructed; based on the data flow graph, the parallelization algorithm distributes instructions over the different cores; once the sequential code has been parallelized, registers are allocated with a linear-scan allocation algorithm, and the final result is assembler code distributed across the cores. We evaluate the speedup on a matrix multiplication example processed by the parallelizator. The result is an almost linear speedup of code execution, which increases with the number of cores: 1.99 on two cores and 13.88 on 16 cores.
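
    The following Python sketch illustrates the central step under simplifying assumptions: a data-dependence graph is built over SSA-style three-address instructions, and a greedy list scheduler assigns ready instructions to cores. The instruction format, the example program and the heuristic are illustrative only, not the paper's MIPS tool.

```python
# Minimal sketch of the idea behind the parallelizator: build a data-dependence
# graph over (already SSA-renamed) three-address instructions, then greedily
# schedule independent instructions onto cores (list scheduling).
# (dest, op, sources) is a stand-in format for parsed assembly.
instrs = [
    ("t1", "load", ["a"]),
    ("t2", "load", ["b"]),
    ("t3", "mul",  ["t1", "t2"]),
    ("t4", "load", ["c"]),
    ("t5", "add",  ["t3", "t4"]),
    ("t6", "mul",  ["t2", "t4"]),
    ("t7", "add",  ["t5", "t6"]),
]

# Dependence edges: an instruction depends on the producers of its sources.
producer = {dest: i for i, (dest, _, _) in enumerate(instrs)}
deps = {i: {producer[s] for s in srcs if s in producer}
        for i, (_, _, srcs) in enumerate(instrs)}

def schedule(n_cores):
    """Greedy list scheduling: each step runs up to n_cores ready instructions."""
    done, steps = set(), []
    while len(done) < len(instrs):
        ready = [i for i in range(len(instrs))
                 if i not in done and deps[i] <= done]
        step = ready[:n_cores]
        steps.append(step)
        done |= set(step)
    return steps

for cores in (1, 2, 4):
    plan = schedule(cores)
    print(f"{cores} core(s): {len(plan)} steps -> speedup {len(instrs) / len(plan):.2f}")
    for t, step in enumerate(plan):
        print("  step", t, [instrs[i][0] for i in step])
```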

  2. On DNA codes from a family of chain rings

    Directory of Open Access Journals (Sweden)

    Elif Segah Oztas

    2017-01-01

    Full Text Available In this work, we focus on reversible cyclic codes which correspond to reversible DNA codes or reversible-complement DNA codes over a family of finite chain rings, in an effort to extend what was done by Yildiz and Siap in [20]. The ring family that we have considered are of size $2^{2^k}$, $k=1,2, \\cdots$ and we match each ring element with a DNA $2^{k-1}$-mer. We use the so-called $u^2$-adic digit system to solve the reversibility problem and we characterize cyclic codes that correspond to reversible-complement DNA-codes. We then conclude our study with some examples.

  3. Analysis of LMFBR containment response to an HCDA using a multifield Eulerian code. [MICE code

    Energy Technology Data Exchange (ETDEWEB)

    Chu, H.Y.; Chang, Y.W.

    1977-01-01

    During a hypothetical core disruptive accident (HCDA), a core meltdown may cause the fuel cladding to rupture and the fuel fragments to penetrate into the sodium coolant. The heat in the molten fuel may cause the liquid sodium to boil, changing its phase. The interactions between materials are so complicated that a single-material model with homogenized material properties is not adequate. In order to analyze the above phenomena more realistically, a Multifield Implicit Continuous-Fluid Eulerian containment code (MICE) is being developed at Argonne National Laboratory (ANL) to solve the multifield fluid-flow problems in which the interpenetrations of materials, heat transfer, and phase changes are considered in the analysis. The hydrodynamics of the MICE code is based upon the implicit multifield (IMF) method developed by Harlow and Amsden. A partial donor-cell formulation is used for the calculation of the convective fluxes to minimize the truncation errors, while the Newton-Raphson method is used for the numerical iterations. An implicit treatment of the mass convection together with the equation of state for each material enables the method to be applicable to both compressible and incompressible flows. A partial implicit treatment of the momentum-exchange functions allows the coupling drag forces between two material fields to range from very weak to those strong enough to tie the fields completely. The differential equations and exchange functions used in the MICE code, and the treatment of the fluid and structure interactions as well as the numerical procedure are described. Two sample calculations are given to illustrate the present capability of the MICE code.

  4. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    Science.gov (United States)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both the computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  5. A methodology for the rigorous verification of plasma simulation codes

    Science.gov (United States)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed by two separate tasks: the verification, which is a mathematical issue targeted to assess that the physical model is correctly solved, and the validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on the verification, which in turn is composed by the code verification, targeted to assess that a physical model is correctly implemented in a simulation code, and the solution verification, that quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for the code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate the plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve Vlasov equation in the investigation of a number of plasma physics phenomena.
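
    As a hedged illustration of the solution-verification half of the procedure, the sketch below applies Richardson extrapolation to a quantity computed on three systematically refined grids, estimating the observed order of convergence and the discretization error. The sample values are invented, not GBS results.

```python
# Minimal sketch of solution verification via Richardson extrapolation: from a
# quantity computed on three grids with a constant refinement ratio r, estimate
# the observed order of convergence and the discretization error.
# The sample values below are illustrative only.
import math

f_coarse, f_medium, f_fine = 1.0460, 1.0110, 1.0025   # grid spacings h, h/2, h/4
r = 2.0                                                # refinement ratio

p = math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)
f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1)   # Richardson extrapolation
err_fine = abs(f_fine - f_exact_est)                      # error estimate on the fine grid

print(f"observed order p ~ {p:.2f}")
print(f"extrapolated value ~ {f_exact_est:.5f}, fine-grid error ~ {err_fine:.2e}")
```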

  6. Evolution of coding and non-coding genes in HOX clusters of a marsupial

    Directory of Open Access Journals (Sweden)

    Yu Hongshi

    2012-06-01

    Full Text Available Abstract Background The HOX gene clusters are thought to be highly conserved amongst mammals and other vertebrates, but the long non-coding RNAs have only been studied in detail in human and mouse. The sequencing of the kangaroo genome provides an opportunity to use comparative analyses to compare the HOX clusters of a mammal with a distinct body plan to those of other mammals. Results Here we report a comparative analysis of HOX gene clusters between an Australian marsupial of the kangaroo family and the eutherians. There was a strikingly high level of conservation of HOX gene sequence and structure and non-protein coding genes including the microRNAs miR-196a, miR-196b, miR-10a and miR-10b and the long non-coding RNAs HOTAIR, HOTAIRM1 and HOXA11AS that play critical roles in regulating gene expression and controlling development. By microRNA deep sequencing and comparative genomic analyses, two conserved microRNAs (miR-10a and miR-10b were identified and one new candidate microRNA with typical hairpin precursor structure that is expressed in both fibroblasts and testes was found. The prediction of microRNA target analysis showed that several known microRNA targets, such as miR-10, miR-414 and miR-464, were found in the tammar HOX clusters. In addition, several novel and putative miRNAs were identified that originated from elsewhere in the tammar genome and that target the tammar HOXB and HOXD clusters. Conclusions This study confirms that the emergence of known long non-coding RNAs in the HOX clusters clearly predate the marsupial-eutherian divergence 160 Ma ago. It also identified a new potentially functional microRNA as well as conserved miRNAs. These non-coding RNAs may participate in the regulation of HOX genes to influence the body plan of this marsupial.

  7. A Note on the Stopping Redundancy of Linear Codes

    Institute of Scientific and Technical Information of China (English)

    Shu-Tao Xia

    2006-01-01

    In this paper, we study the stopping sets, stopping distance and stopping redundancy of binary linear codes. Stopping redundancy is a new concept proposed recently by Schwartz and Vardy for evaluating the performance of a linear code under iterative decoding over a binary erasure channel (BEC). Since the exact value of the stopping redundancy is difficult to obtain in general, good lower and upper bounds are important. We obtain a new general upper bound on the stopping redundancy of binary linear codes which improves the corresponding results of Schwartz and Vardy.
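
    For readers unfamiliar with the definitions, the brute-force sketch below finds the stopping distance of a toy binary code directly from a parity-check matrix; the matrix is illustrative and the search is exponential in the code length, so it only serves to make the definitions concrete.

```python
# Minimal sketch: brute-force search for the stopping distance of a small binary
# code given a parity-check matrix H.  A stopping set is a set S of variable
# nodes such that every check node touching S touches it at least twice; the
# stopping distance is the size of the smallest nonempty stopping set.
from itertools import combinations
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],      # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def is_stopping_set(cols):
    counts = H[:, cols].sum(axis=1)     # how many columns of S each check touches
    return np.all((counts == 0) | (counts >= 2))

n = H.shape[1]
stopping_distance = next(
    size for size in range(1, n + 1)
    for cols in combinations(range(n), size)
    if is_stopping_set(list(cols)))

print("stopping distance:", stopping_distance)
```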

  8. A Fortran 90 code for magnetohydrodynamics. Part 1, Banded convolution

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D.W.

    1992-03-01

    This report describes progress in developing a Fortran 90 version of the KITE code for studying plasma instabilities in Tokamaks. In particular, the evaluation of convolution terms appearing in the numerical solution is discussed, and timing results are presented for runs performed on an 8k processor Connection Machine (CM-2). Estimates of the performance on a full-size 64k CM-2 are given, and range between 100 and 200 Mflops. The advantages of having a Fortran 90 version of the KITE code are stressed, and the future use of such a code on the newly announced CM5 and Paragon computers, from Thinking Machines Corporation and Intel, is considered.

  9. A Parallel Tree-SPH code for Galaxy Formation

    CERN Document Server

    Lia, Cesario; Carraro, Giovanni

    1999-01-01

    We describe a new implementation of a parallel Tree-SPH code aimed at simulating galaxy formation and evolution. The code has been parallelized using SHMEM, a Cray proprietary library to handle communications between the 256 processors of the Silicon Graphics T3E massively parallel supercomputer hosted by the Cineca Supercomputing Center (Bologna, Italy). The code combines the Smoothed Particle Hydrodynamics (SPH) method for solving the hydrodynamical equations with the popular Barnes and Hut (1986) tree code to perform the gravity calculation with N log N scaling, and it is based on the scalar Tree-SPH code developed by Carraro et al. (1998) [MNRAS 297, 1021]. Parallelization is achieved by distributing particles across processors according to a work-load criterion. Benchmarks of the code, in terms of load balance and scalability, are analyzed and critically discussed against the adiabatic collapse of an isothermal gas sphere test using 20,000 particles on 8 processors. The code is balanced at better than the 95% level. ...

  10. The Numerical Electromagnetics Code (NEC) - A Brief History

    Energy Technology Data Exchange (ETDEWEB)

    Burke, G J; Miller, E K; Poggio, A J

    2004-01-20

    The Numerical Electromagnetics Code, NEC as it is commonly known, continues to be one of the more widely used antenna modeling codes in existence. With several versions in use that reflect different levels of capability and availability, there are now 450 copies of NEC4 and 250 copies of NEC3 that have been distributed by Lawrence Livermore National Laboratory to a limited class of qualified recipients, and several hundred copies of NEC2 that had a recorded distribution by LLNL. These numbers do not account for numerous copies (perhaps 1000s) that were acquired through other means capitalizing on the open source code, the absence of distribution controls prior to NEC3 and the availability of versions on the Internet. In this paper we briefly review the history of the code that is concisely displayed in Figure 1. We will show how it capitalized on the research of prominent contributors in the early days of computational electromagnetics, how a combination of events led to the tri-service-supported code development program that ultimately led to NEC and how it evolved to the present day product. The authors apologize that space limitations do not allow us to provide a list of references or to acknowledge the numerous contributors to the code both of which can be found in the code documents.

  11. A Novel Block-Based Scheme for Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Qi-Bin Hou

    2014-06-01

    Full Text Available It is well-known that for a given sequence, its optimal codeword length is fixed. Many coding schemes have been proposed to make the codeword length as close to the optimal value as possible. In this paper, a new block-based coding scheme operating on the subsequences of a source sequence is proposed. It is proved that the optimal codeword lengths of the subsequences are not larger than that of the given sequence. Experimental results using arithmetic coding will be presented.

  12. SEQassembly: A Practical Tools Program for Coding Sequences Splicing

    Science.gov (United States)

    Lee, Hongbin; Yang, Hang; Fu, Lei; Qin, Long; Li, Huili; He, Feng; Wang, Bo; Wu, Xiaoming

    CDS (Coding Sequences) is the portion of an mRNA sequence that is composed of a number of exon sequence segments. The construction of the CDS sequence is important for deeper genetic analysis such as genotyping. A program in the MATLAB environment is presented, which can process batches of sample sequences into code segments under the guidance of reference exon models, and splice these code segments from the same sample source into a CDS according to the exon order in a queue file. This program is useful in transcriptional polymorphism detection and gene function studies.
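
    A minimal Python sketch of the splicing step is shown below, assuming a toy exon-segment dictionary and an ordered exon list standing in for the queue file; it is not the MATLAB program itself.

```python
# Minimal sketch of the splicing step: given exon segments extracted from a
# sample sequence, concatenate them into a CDS following the exon order listed
# in a queue file.  Data and file format are illustrative, not SEQassembly's.
exon_segments = {                 # exon id -> extracted coding segment
    "exon1": "ATGGCT",
    "exon2": "GGTACCTTC",
    "exon3": "GAGTAA",
}
exon_order = ["exon1", "exon2", "exon3"]   # order read from the queue file

cds = "".join(exon_segments[e] for e in exon_order)
assert len(cds) % 3 == 0, "CDS length should be a multiple of 3"

codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
print("CDS:   ", cds)
print("codons:", codons)
```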

  13. Software exorcism a handbook for debugging and optimizing legacy code

    CERN Document Server

    Blunden, Bill

    2013-01-01

    Software Exorcism: A Handbook for Debugging and Optimizing Legacy Code takes an unflinching, no-bull look at behavioral problems in the software engineering industry, shedding much-needed light on the social forces that make it difficult for programmers to do their job. Do you have a co-worker who perpetually writes bad code that you are forced to clean up? This is your book. While there are plenty of books on the market that cover debugging and short-term workarounds for bad code, Reverend Bill Blunden takes a revolutionary step beyond them by bringing our attention ...

  14. A new hydrodynamics code for Type Ia Supernovae

    CERN Document Server

    Leung, S -C; Lin, L -M

    2015-01-01

    A two-dimensional hydrodynamics code for Type Ia supernovae (SNIa) simulations is presented. The code includes a fifth-order shock-capturing scheme WENO, detailed nuclear reaction network, flame-capturing scheme and sub-grid turbulence. For post-processing we have developed a tracer particle scheme to record the thermodynamical history of the fluid elements. We also present a one-dimensional radiative transfer code for computing observational signals. The code solves the Lagrangian hydrodynamics and moment-integrated radiative transfer equations. A local ionization scheme and composition dependent opacity are included. Various verification tests are presented, including standard benchmark tests in one and two dimensions. SNIa models using the pure turbulent deflagration model and the delayed-detonation transition model are studied. The results are consistent with those in the literature. We compute the detailed chemical evolution using the tracer particles' histories, and we construct corresponding bolometric...

  15. A three-dimensional magnetostatics computer code for insertion devices.

    Science.gov (United States)

    Chubar, O; Elleaume, P; Chavanne, J

    1998-05-01

    RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica [Mathematica is a registered trademark of Wolfram Research, Inc.]. The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented.

  16. FIFPC, a fast ion Fokker--Planck code

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, R.H.; Callen, J.D.; Rome, J.A.; Smith, J.

    1976-07-01

    A computer code is described which solves the Fokker--Planck equation for the velocity space distribution of fast ions injected into a tokamak plasma. The numerical techniques are described and use of the code is outlined. The program is written in FORTRAN IV and is modularized in order to provide greater flexibility to the user. A program listing is provided and the results of sample cases are presented.

  17. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    Full Text Available Partially structured LDPC codes are codes in which the edges of the parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the performance of the constructed codes is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach a whole VLSI decoder is designed and characterized.

  18. RAMSES-CH: A New Chemodynamical Code for Cosmological Simulations

    OpenAIRE

    Few, C. Gareth; Courty, Stephanie; Gibson, Brad K.; Kawata, Daisuke; Calura, Francesco; Teyssier, Romain

    2012-01-01

    We present a new chemodynamical code - Ramses-CH - for use in simulating the self-consistent evolution of chemical and hydrodynamical properties of galaxies within a fully cosmological framework. We build upon the adaptive mesh refinement code Ramses, which includes a treatment of self-gravity, hydrodynamics, star formation, radiative cooling, and supernovae feedback, to trace the dominant isotopes of C, N, O, Ne, Mg, Si, and Fe. We include the contribution of Type Ia and II supernovae, in ad...

  19. RAMSES-CH: a new chemodynamical code for cosmological simulations

    OpenAIRE

    Few, C. G.; Courty, S.; Gibson, B. K.; Kawata, D; Calura, F.; Teyssier, R.

    2012-01-01

    We present a new chemodynamical code -RAMSES-CH- for use in simulating the self-consistent evolution of chemical and hydrodynamical properties of galaxies within a fully cosmological framework. We build upon the adaptive mesh refinement code RAMSES, which includes a treatment of self-gravity, hydrodynamics, star formation, radiative cooling and supernova feedback, to trace the dominant isotopes of C, N, O, Ne, Mg, Si and Fe. We include the contribution of Type Ia and Type II supernovae, in ad...

  20. POPCORN: A comparison of binary population synthesis codes

    CERN Document Server

    Claeys, J S W; Mennekens, N

    2012-01-01

    We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result we find that when equalizing the assumptions the results are similar. The main differences arise from deviating physical input.

  1. POPCORN: A comparison of binary population synthesis codes

    Science.gov (United States)

    Claeys, J. S. W.; Toonen, S.; Mennekens, N.

    2013-01-01

    We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result we find that when equalizing the assumptions the results are similar. The main differences arise from deviating physical input.

  2. Djehuty A Code for Modeling Whole Stars in Three Dimensions

    CERN Document Server

    Turcotte, S; Castor, J I; Cavallo, R M; Cohl, H S; Cook, K; Dearborn, D S P; Dossa, D D; Eastman, R; Eggleton, P P; Eltgroth, P; Keller, S; Murray, S; Taylor, A

    2001-01-01

    The DJEHUTY project is an intensive effort at the Lawrence Livermore National Laboratory (LLNL) to produce a general purpose 3-D stellar structure and evolution code to study dynamic processes in whole stars.

  3. GPEC, a real-time capable Tokamak equilibrium code

    CERN Document Server

    Rampp, Markus; Fischer, Rainer

    2015-01-01

    A new parallel equilibrium reconstruction code for tokamak plasmas is presented. GPEC can compute equilibrium flux distributions sufficiently accurate to derive parameters for plasma control within 1 ms of runtime, which enables real-time applications at the ASDEX Upgrade experiment (AUG) and other machines with a control cycle of at least this size. The underlying algorithms are based on the well-established offline-analysis code CLISTE, following the classical concept of iteratively solving the Grad-Shafranov equation and feeding in diagnostic signals from the experiment. The new code adopts a hybrid parallelization scheme for computing the equilibrium flux distribution and extends the fast, shared-memory-parallel Poisson solver which we have described previously by a distributed computation of the individual Poisson problems corresponding to different basis functions. The code is based entirely on open-source software components and runs on standard server hardware and software environments. The real-...

  4. Designing Nonlinear Turbo Codes with a Target Ones Density

    CERN Document Server

    Wang, Jiadong; Chen, Tsung-Yi; Xie, Bike; Wesel, Richard

    2011-01-01

    Certain binary asymmetric channels, such as Z-channels in which one of the two crossover probabilities is zero, demand optimal ones densities different from 50%. Some broadcast channels, such as broadcast binary symmetric channels (BBSC) where each component channel is a binary symmetric channel, also require a non-uniform input distribution due to the superposition coding scheme, which is known to achieve the boundary of capacity region. This paper presents a systematic technique for designing nonlinear turbo codes that are able to support ones densities different from 50%. To demonstrate the effectiveness of our design technique, we design and simulate nonlinear turbo codes for the Z-channel and the BBSC. The best nonlinear turbo code is less than 0.02 bits from capacity.

  5. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    Science.gov (United States)

    Lomax, O.; Whitworth, A. P.

    2016-10-01

    We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped on to a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
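
    As a hedged, grid-based caricature of the packet-propagation step (the real code is gridless and iterates to radiative-equilibrium dust temperatures), the sketch below emits luminosity packets into a 1-D slab, lets each travel until a randomly sampled optical depth is reached, and tallies the absorbed energy per cell; the opacity profile and packet count are arbitrary.

```python
# Hedged 1-D sketch of Lucy-style packet propagation: each luminosity packet is
# emitted at the slab edge, travels until it has accumulated a sampled optical
# depth, and deposits its energy in the cell where that happens.  SPAMCART
# itself works on gridless SPH data and iterates this procedure to obtain
# radiative-equilibrium temperatures; the opacity profile is a placeholder.
import numpy as np

rng = np.random.default_rng(1)

n_cells, slab_depth = 50, 1.0
dx = slab_depth / n_cells
alpha = 2.0 + 8.0 * np.linspace(0.0, 1.0, n_cells)   # absorption coefficient per cell
absorbed = np.zeros(n_cells)

n_packets = 20_000
packet_energy = 1.0 / n_packets

for _ in range(n_packets):
    tau_target = -np.log(rng.random())        # optical depth sampled from exp(-tau)
    tau = 0.0
    for cell in range(n_cells):
        dtau = alpha[cell] * dx               # optical depth across this cell
        if tau + dtau >= tau_target:
            absorbed[cell] += packet_energy   # packet absorbed in this cell
            break
        tau += dtau                           # otherwise keep going (or escape the slab)

print("fraction of luminosity absorbed:", round(float(absorbed.sum()), 4))
print("energy absorbed in first 5 cells:", np.round(absorbed[:5], 4))
```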

  6. Adaptive EZW coding using a rate-distortion criterion

    Science.gov (United States)

    Yin, Che-Yi

    2001-07-01

    This work presents a new method that improves on the EZW image coding algorithm. The standard EZW image coder uses a uniform quantizer with a threshold (deadzone) that is identical in all subbands. The quantization step sizes are not optimized under the rate-distortion sense. We modify the EZW by applying the Lagrange multiplier to search for the best step size for each subband and allocate the bit rate for each subband accordingly. Then we implement the adaptive EZW codec to code the wavelet coefficients. Two coding environments, independent and dependent, are considered for the optimization process. The proposed image coder retains all the good features of the EZW, namely, embedded coding, progressive transmission, order of the important bits, and enhances it through the rate-distortion optimization with respect to the step sizes.
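
    A hedged sketch of the per-subband optimization is given below: for each (synthetic) subband, a deadzone quantizer step is chosen to minimize the Lagrangian cost D + lambda*R, with the rate estimated from the empirical entropy of the quantization indices. The quantizer, the rate model and the lambda value are simplifying assumptions, not the actual EZW modification.

```python
# Minimal sketch of the rate-distortion step described above: for each subband,
# choose the uniform (deadzone) quantizer step size minimizing D + lambda * R,
# with R estimated from the empirical first-order entropy of the quantization
# indices.  Synthetic "subbands" stand in for real wavelet coefficients.
import numpy as np

rng = np.random.default_rng(2)

def entropy_bits(indices):
    _, counts = np.unique(indices, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p)) * indices.size      # total bits (estimate)

def best_step(coeffs, lam, candidates=np.linspace(0.1, 8.0, 60)):
    best = None
    for q in candidates:
        idx = np.sign(coeffs) * np.floor(np.abs(coeffs) / q)   # deadzone quantizer
        rec = np.sign(idx) * (np.abs(idx) + 0.5) * q           # mid-bin reconstruction
        D = np.sum((coeffs - rec) ** 2)
        R = entropy_bits(idx.astype(int))
        J = D + lam * R
        if best is None or J < best[0]:
            best = (J, q)
    return best[1]

subbands = {"LL": 10 * rng.standard_normal(1024),
            "LH": 3 * rng.standard_normal(1024),
            "HH": 1 * rng.standard_normal(1024)}

lam = 5.0   # Lagrange multiplier: trades distortion against rate
for name, band in subbands.items():
    print(f"{name}: best step size {best_step(band, lam):.2f}")
```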

  7. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    CERN Document Server

    Lomax, O

    2016-01-01

    We present a code for generating synthetic SEDs and intensity maps from Smoothed Particle Hydrodynamics simulation snapshots. The code is based on the Lucy (1999) Monte Carlo Radiative Transfer method, i.e. it follows discrete luminosity packets, emitted from external and/or embedded sources, as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The density is not mapped onto a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Second, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.

  8. A Very Fast and Momentum-Conserving Tree Code

    CERN Document Server

    Dehnen, W

    2000-01-01

    The tree code for the approximate evaluation of gravitational forces is extended and substantially accelerated by including mutual cell-cell interactions. These are computed by a Taylor series in Cartesian coordinates and in a completely symmetric fashion, such that Newton's third law is satisfied by construction and hence momentum exactly conserved. The computational effort is further reduced by exploiting the mutual symmetry of the interactions. For typical astrophysical problems with N=10^5 and at the same level of accuracy, the new code is about four times faster than the tree code. For large N, the computational costs are found to scale almost linearly with N, which can also be supported by a theoretical argument, and the advantage over the tree code increases with ever larger N.

  9. A modified phase-coding method for absolute phase retrieval

    Science.gov (United States)

    Xing, Y.; Quan, C.; Tay, C. J.

    2016-12-01

    Fringe projection is one of the most robust tools for three-dimensional (3D) shape measurement. Various fringe projection methods have been proposed to address different issues in profilometry, and phase coding is one such technique, employed to determine fringe orders for absolute phase retrieval. However, this method is prone to fringe order errors when dealing with high-frequency fringes. This paper studies the phase error introduced by system non-linearity in phase coding and provides a mathematical model to obtain the maximum number of achievable codewords in a given scheme. In addition, a modified phase-coding method is proposed for phase error compensation. An experimental study validates the theoretical analysis of the maximum number of achievable codewords, and the performance of the modified phase-coding method is also illustrated.
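
    To make the baseline concrete, the sketch below uses one common stair-coding convention in an idealized, noise-free setting: the coded phase embeds the fringe order k over N levels, the order is recovered by quantization, and the wrapped phase is unwrapped with it. The convention and phase ranges are assumptions; the paper's modified, error-compensated method is not reproduced.

```python
# Hedged, idealized sketch of the baseline phase-coding idea (one common
# convention; not the modified, error-compensated method of the paper):
# a stair-shaped code phase embeds the fringe order k, which is recovered by
# quantizing it into N levels and then used to unwrap the wrapped phase.
import numpy as np

N = 8                                             # number of codewords / fringes
x = np.linspace(0.0, 1.0, 1000, endpoint=False)   # normalized pixel coordinate

phi_abs = 2.0 * np.pi * N * x                               # true absolute phase
phi_wrapped = np.mod(phi_abs, 2.0 * np.pi)                  # wrapped phase in [0, 2*pi)
k_true = np.floor(phi_abs / (2.0 * np.pi)).astype(int)      # fringe order 0..N-1
phi_code = -np.pi + (k_true + 0.5) * 2.0 * np.pi / N        # stair-coded phase

# Decoding: quantize the code phase back to the fringe order, then unwrap.
k_rec = np.round((phi_code + np.pi) * N / (2.0 * np.pi) - 0.5).astype(int)
phi_unwrapped = phi_wrapped + 2.0 * np.pi * k_rec

print("fringe orders recovered:", np.array_equal(k_rec, k_true))
print("max unwrapping error:", float(np.max(np.abs(phi_unwrapped - phi_abs))))
```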

  10. Passive stabilization in a linear MHD stability code

    Energy Technology Data Exchange (ETDEWEB)

    Todd, A.M.M.

    1980-03-01

    Utilizing a Galerkin procedure to calculate the vacuum contribution to the ideal MHD Lagrangian, the implementation of realistic boundary conditions in a linear stability code is described. The procedure permits calculation of the effect of arbitrary conducting structures on ideal MHD instabilities, as opposed to the prior use of an encircling shell. The passive stabilization by conducting coils of the tokamak vertical instability is calculated within the PEST code and gives excellent agreement with 2-D time-dependent simulations of PDX.

  11. A new spherically symmetric general relativistic hydrodynamical code

    CERN Document Server

    Romero, Jose V.; Ibanez, Jose M.; Marti, Jose M.; Miralles, Juan A.

    1995-01-01

    In this paper we present a full general relativistic one-dimensional hydro-code which incorporates a modern high-resolution shock-capturing algorithm, with an approximate Riemann solver, for the correct modelling of formation and propagation of strong shocks. The efficiency of this code in treating strong shocks is demonstrated by some numerical experiments. The interest of this technique in several astrophysical scenarios is discussed.

  12. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    Science.gov (United States)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structure in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves which is shown to lead to major efficiency gains over unbalanced methods and a previously used simpler balancing method.
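
    As a hedged sketch of the balancing idea (not PSC's production implementation, which differs in curve type and details), the Python snippet below orders a grid of patches along a Morton (Z-order) space-filling curve and cuts the curve into contiguous segments of roughly equal particle load, one per rank.

```python
# Minimal sketch of patch-based load balancing with a space-filling curve:
# 2-D patches are ordered along a Morton (Z-order) curve and the curve is cut
# into contiguous segments of roughly equal work (particle count), one segment
# per rank.  The patch loads are synthetic.
import numpy as np

rng = np.random.default_rng(3)

def morton_index(ix, iy, bits=8):
    """Interleave the bits of (ix, iy) to get the Z-order curve index."""
    z = 0
    for b in range(bits):
        z |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
    return z

nx = ny = 8                                     # 8 x 8 grid of patches
loads = rng.integers(50, 2000, size=(nx, ny))   # particles per patch (synthetic)

# Order patches along the Morton curve.
patches = sorted((morton_index(ix, iy), ix, iy) for ix in range(nx) for iy in range(ny))

def partition(patches, n_ranks):
    """Greedy cut of the curve into contiguous segments of roughly equal load."""
    target = loads.sum() / n_ranks
    assignment, rank, acc = {}, 0, 0
    for _, ix, iy in patches:
        if acc >= target and rank < n_ranks - 1:
            rank, acc = rank + 1, 0
        assignment[(ix, iy)] = rank
        acc += loads[ix, iy]
    return assignment

n_ranks = 4
assignment = partition(patches, n_ranks)
per_rank = np.zeros(n_ranks)
for (ix, iy), r in assignment.items():
    per_rank[r] += loads[ix, iy]
print("load per rank:", per_rank, " imbalance:", per_rank.max() / per_rank.mean())
```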

  13. A Monte Carlo code for ion beam therapy

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    Initially developed for applications in detector and accelerator physics, the modern Fluka Monte Carlo code is now used in many different areas of nuclear science. Over the last 25 years, the code has evolved to include new features, such as ion beam simulations. Given the growing use of these beams in cancer treatment, Fluka simulations are being used to design treatment plans in several hadron-therapy centres in Europe.   Fluka calculates the dose distribution for a patient treated at CNAO with proton beams. The colour-bar displays the normalized dose values. Fluka is a Monte Carlo code that very accurately simulates electromagnetic and nuclear interactions in matter. In the 1990s, in collaboration with NASA, the code was developed to predict potential radiation hazards received by space crews during possible future trips to Mars. Over the years, it has become the standard tool to investigate beam-machine interactions, radiation damage and radioprotection issues in the CERN accelerator com...

  14. The Nuremberg Code and the Nuremberg Trial. A reappraisal.

    Science.gov (United States)

    Katz, J

    1996-11-27

    The Nuremberg Code includes 10 principles to guide physician-investigators in experiments involving human subjects. These principles, particularly the first principle on "voluntary consent," primarily were based on legal concepts because medical codes of ethics existent at the time of the Nazi atrocities did not address consent and other safeguards for human subjects. The US judges who presided over the proceedings did not intend the Code to apply only to the case before them, to be a response to the atrocities committed by the Nazi physicians, or to be inapplicable to research as it is customarily carried on in medical institutions. Instead, a careful reading of the judgment suggests that they wrote the Code for the practice of human experimentation whenever it is being conducted.

  15. A New Video Coding Method Based on Improving Detail Regions

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The Moving Pictures Expert Group (MPEG) and H.263 standard coding methods are widely used in video compression. However, for applications such as conference telephony or videophone, the visual quality of detail regions such as the eyes and mouth is not satisfactory to viewers at the decoder. A new coding method based on improving detail regions is presented in this paper. Experimental results show that this method can improve the visual quality at the decoder.

  16. Detecting Code-Switching in a Multilingual Alpine Heritage Corpus

    OpenAIRE

    Volk, Martin; Clematide, Simon

    2014-01-01

    This paper describes experiments in detecting and annotating code-switching in a large multilingual diachronic corpus of Swiss Alpine texts. The texts are in English, French, German, Italian, Romansh and Swiss German. Because of the multilingual authors (mountaineers, scientists) and the assumed multilingual readers, the texts contain numerous code-switching elements. When building and annotating the corpus, we faced issues of language identification on the sentence and sub-sentential level. ...

  17. A He I Case-B Recombination Code

    CERN Document Server

    Porter, R L

    2007-01-01

    Recent calculations of collisionless, Case-B, He I emissivities were performed by Bauman et al. (2005). The source code used in the calculation has been freely available online since that paper was published. A number of changes have been made to simplify the use of the code by third parties. Here I provide details on how to obtain, compile, and execute the program and interpret the results.

  18. Code Component Composition Reuse Is a New Programming Paradigm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    After describing the characteristics of a programming paradigm, this paper introduces the approach of code component composition reuse in detail, and proposes and discusses the viewpoint that code component composition reuse is a kind of new programming paradigm. This paper also specifies the characteristics of this new programming paradigm in detail, and points out some issues that must be resolved for using this new programming paradigm.

  19. Codes, standards, and PV power systems. A 1996 status report

    Energy Technology Data Exchange (ETDEWEB)

    Wiles, J

    1996-06-01

    As photovoltaic (PV) electrical power systems gain increasing acceptance for both off-grid and utility-interactive applications, the safety, durability, and performance of these systems gains in importance. Local and state jurisdictions in many areas of the country require that all electrical power systems be installed in compliance with the requirements of the National Electrical Code{reg_sign} (NEC{reg_sign}). Utilities and governmental agencies are now requiring that PV installations and components also meet a number of Institute of Electrical and Electronic Engineers (IEEE) standards. PV installers are working more closely with licensed electricians and electrical contractors who are familiar with existing local codes and installation practices. PV manufacturers, utilities, balance of systems manufacturers, and standards representatives have come together to address safety and code related issues for future PV installations. This paper addresses why compliance with the accepted codes and standards is needed and how it is being achieved.

  20. A multi-scale code for flexible hybrid simulations

    CERN Document Server

    Leukkunen, L; Lopez-Acevedo, O

    2012-01-01

    Multi-scale computer simulations combine computationally efficient classical algorithms with more expensive but more accurate ab-initio quantum mechanical algorithms. This work describes one implementation of multi-scale computations using the Atomistic Simulation Environment (ASE). This implementation can mix classical codes like LAMMPS and the Density Functional Theory-based GPAW; however, any combination of codes linked via the ASE interface can be mixed. We also introduce a framework to easily add classical force-field calculators for ASE using LAMMPS, which also allows harnessing the full performance of classical-only molecular dynamics. Our work makes it possible to combine different simulation codes, quantum mechanical or classical, with great ease and minimal coding effort.

  1. A Comprehensive Validation Approach Using The RAVEN Code

    Energy Technology Data Exchange (ETDEWEB)

    Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J; Rinaldi, Ivan; Giannetti, Fabio; Caruso, Gianfranco

    2015-06-01

    The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. The state-of-the-art validation approaches use neither exploration of the input space through sampling strategies, nor a comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code allows both of these gaps to be addressed. In the following sections, the employed methodology and its application to the newly developed thermal-hydraulic code RELAP-7 are reported. The validation approach has been applied to an integral effect experiment representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.

  2. FLASH: A finite element computer code for variably saturated flow

    Energy Technology Data Exchange (ETDEWEB)

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: a description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.

  3. Towards a framework for a code review process

    Directory of Open Access Journals (Sweden)

    Xiaosong Li

    Full Text Available This paper describes a longitudinal study of a Code Review Process (CRP which used an action research method. The CRP was used as one of the assessment methods in a third year (Level 7 undergraduate Web Application Development (WAD course. This paper reviews the past three cycles of the study in order to get a deeper understanding of the issues. To address the issues more effectively and better meet the students’ needs, the existing CRP is refined to develop a code review framework. The paper discusses different options and proposes a framework consisting of five components. To implement the framework, the course assessment scheme needs to be redesigned. Initially, the framework should be implemented in the same course. If successful, there is a potential to revise the framework and introduce it into similar courses at Levels 5, 6 and 8.

  4. Developing a code of ethics for academics. Commentary on 'Ethics for all: differences across scientific society codes' (Bullock and Panicker).

    Science.gov (United States)

    Fisher, Celia B

    2003-04-01

    This article discusses the possibilities and pitfalls of constructing a code of ethics for university professors. Professional, educational, legal, and policy questions regarding the goals, format, and content of an academic ethics code are raised and a series of aspirational principles and enforceable standards that might be included in such a document are presented for discussion and debate.

  5. HADES, A Code for Simulating a Variety of Radiographic Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Aufderheide, M B; Henderson, G; von Wittenau, A; Slone, D M; Barty, A; Martz, Jr., H E

    2004-10-28

    It is often useful to simulate radiographic images in order to optimize imaging trade-offs and to test tomographic techniques. HADES is a code that simulates radiography using ray tracing techniques. Although originally developed to simulate X-Ray transmission radiography, HADES has grown to simulate neutron radiography over a wide range of energy, proton radiography in the 1 MeV to 100 GeV range, and recently phase contrast radiography using X-Rays in the keV energy range. HADES can simulate parallel-ray or cone-beam radiography through a variety of mesh types, as well as through collections of geometric objects. HADES was originally developed for nondestructive evaluation (NDE) applications, but could be a useful tool for simulation of portal imaging, proton therapy imaging, and synchrotron studies of tissue. In this paper we describe HADES' current capabilities and discuss plans for a major revision of the code.

  6. PROSA-1: a probabilistic response-surface analysis code. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J. K.; Mueller, C.

    1978-06-01

    Techniques for probabilistic response-surface analysis have been developed to obtain the probability distributions of the consequences of postulated nuclear-reactor accidents. The uncertainties of the consequences are caused by the variability of the system and model input parameters used in the accident analysis. Probability distributions are assigned to the input parameters, and parameter values are systematically chosen from these distributions. These input parameters are then used in deterministic consequence analyses performed by mechanistic accident-analysis codes. The results of these deterministic consequence analyses are used to generate the coefficients for analytical functions that approximate the consequences in terms of the selected input parameters. These approximating functions are used to generate the probability distributions of the consequences with random sampling being used to obtain values for the accident parameters from their distributions. A computer code PROSA has been developed for implementing the probabilistic response-surface technique. Special features of the code generate or treat sensitivities, statistical moments of the input and output variables, regionwise response surfaces, correlated input parameters, and conditional distributions. The code can also be used for calculating important distributions of the input parameters. The use of the code is illustrated in conjunction with the fast-running accident-analysis code SACO to provide probability studies of LMFBR hypothetical core-disruptive accidents. However, the methods and the programming are general and not limited to such applications.
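
    As a schematic illustration of the response-surface technique described above (and not of PROSA itself), the sketch below fits a quadratic surface to a few deterministic runs of a placeholder "accident code" and then samples it by Monte Carlo to obtain a consequence distribution; all functions, distributions, and coefficients are invented for the example.

```python
# Illustrative response-surface workflow of the kind described above (not PROSA):
# a handful of deterministic runs are fitted with a quadratic surface, which is
# then sampled by cheap Monte Carlo to build the consequence distribution.
import numpy as np

rng = np.random.default_rng(7)

def accident_code(x1, x2):
    """Placeholder for a mechanistic accident-analysis code such as SACO."""
    return 3.0 + 2.0 * x1 - 1.5 * x2 + 0.8 * x1 * x2

# Deterministic runs at systematically chosen input points
x1_pts, x2_pts = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1_pts, x2_pts = x1_pts.ravel(), x2_pts.ravel()
y = accident_code(x1_pts, x2_pts)

# Fit a quadratic response surface: y ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2
A = np.column_stack([np.ones_like(x1_pts), x1_pts, x2_pts,
                     x1_pts**2, x2_pts**2, x1_pts * x2_pts])
coeff, *_ = np.linalg.lstsq(A, y, rcond=None)

# Monte Carlo sampling of the input distributions through the cheap surface
x1_s = rng.normal(0.0, 0.3, 100_000)
x2_s = rng.uniform(-1.0, 1.0, 100_000)
samples = np.column_stack([np.ones_like(x1_s), x1_s, x2_s,
                           x1_s**2, x2_s**2, x1_s * x2_s]) @ coeff
print(np.percentile(samples, [5, 50, 95]))   # consequence distribution summary
```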

  7. CodeSlinger: a case study in domain-driven interactive tool design for biomedical coding scheme exploration and use.

    Science.gov (United States)

    Flowers, Natalie L

    2010-01-01

    CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.

  8. 25 CFR 18.111 - What will happen if a tribe repeals its probate code?

    Science.gov (United States)

    2010-04-01

    Approval of Tribal Probate Codes, § 18.111: If a tribe repeals its tribal probate code: (a) The repeal will not become effective sooner than...

  9. A need for a code of ethics in science communication?

    Science.gov (United States)

    Benestad, R. E.

    2009-09-01

    The modern western civilization and high standard of living are to a large extent the 'fruits' of scientific endeavor over generations. Some examples include the longer life expectancy due to progress in the medical sciences, and changes in infrastructure associated with the utilization of electromagnetism. Modern meteorology is not possible without state-of-the-art digital computers, satellites, remote sensing, and communications. Science is also of relevance to policy making, e.g. the present hot topic of climate change. Climate scientists have recently become much exposed to media focus and mass communications, a task for which many are not trained. Furthermore, science, communication, and politics have different objectives, and do not necessarily mix. Scientists have an obligation to provide unbiased information, and a code of ethics is needed to give guidance on acceptable and unacceptable conduct. Some examples of questionable conduct in Norway include using the title 'Ph.D' to imply scientific authority when the person had never obtained such an academic degree, or writing biased and one-sided articles in a Norwegian encyclopedia that do not reflect the scientific consensus. It is proposed here that a set of guidelines (for scientists and journalists) and a code of conduct could provide recommendations on how to act in the media - similar to a code of conduct for carrying out research - to which everyone could agree, even when disagreeing on specific scientific questions.

  10. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, makes it possible to recover the ``unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the introduction of the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.

  11. The FORTRAN NALAP code adapted to a microcomputer compiler

    Energy Technology Data Exchange (ETDEWEB)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso, E-mail: plobo.a@uol.com.b, E-mail: eduardo@ieav.cta.b, E-mail: fbraz@ieav.cta.b, E-mail: guimarae@ieav.cta.b [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA project (TEcnologia de Reatores Rapidos Avancados), Technology for Advanced Fast Reactors, aimed at a space reactor application. In this work, in support of the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code on a PC was divided into several steps, mainly to remove unnecessary routines, to eliminate old statements, to introduce new ones and to include extended precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in the literature: normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after a loss-of-flow transient; and transients protected from overpower. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 1980s. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the usage of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop - CBCL - to be used as a heat/electric converter. (author)

  12. A neutron spectrum unfolding code based on iterative procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Apdo. Postal 336, 98000 Zacatecas (Mexico)

    2012-10-15

    In this work, version 3.0 of the neutron spectrum unfolding code called Neutron Spectrometry and Dosimetry from Universidad Autonoma de Zacatecas (NSDUAZ) is presented. This code was designed with a graphical interface under the LabVIEW programming environment and is based on the iterative SPUNIT algorithm, using as input data only the count rates obtained with 7 Bonner spheres based on a {sup 6}LiI(Eu) neutron detector. The main features of the code are: it is intuitive and friendly to the user; it has a programming routine which automatically selects the initial guess spectrum by using a set of neutron spectra compiled by the International Atomic Energy Agency. Besides the neutron spectrum, this code calculates the total flux, the mean energy, H(10), h(10), 15 dosimetric quantities for radiation protection purposes and 7 survey meter responses, in four energy grids, based on the International Atomic Energy Agency compilation. This code generates a full report in html format with all relevant information. In this work, the neutron spectrum of a {sup 241}AmBe neutron source in air, located at 150 cm from the detector, is unfolded. (Author)
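
    The record does not give the SPUNIT update rule, so the sketch below shows only a generic multiplicative iterative unfolding (MLEM-style) of Bonner-sphere count rates to indicate the class of algorithm involved; the response matrix and counts are invented toy data.

```python
# Generic multiplicative iterative unfolding (MLEM-style) for Bonner-sphere data.
# This is only an illustration of the class of algorithm the record describes;
# SPUNIT's actual update rule and response matrix differ. All numbers are made up.
import numpy as np

def unfold(response, counts, n_iter=200):
    """response[j, i]: response of sphere j to energy bin i; counts[j]: measured rates."""
    n_bins = response.shape[1]
    phi = np.full(n_bins, counts.sum() / n_bins)   # flat initial guess spectrum
    for _ in range(n_iter):
        predicted = response @ phi                 # predicted sphere readings
        correction = response.T @ (counts / predicted) / response.sum(axis=0)
        phi *= correction                          # multiplicative update keeps phi >= 0
    return phi

# Toy example: 7 spheres, 20 energy bins
rng = np.random.default_rng(0)
R = rng.uniform(0.1, 1.0, size=(7, 20))
true_phi = rng.uniform(0.0, 1.0, size=20)
C = R @ true_phi
print(unfold(R, C).round(2))
```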

  13. A particle-based hybrid code for planet formation

    CERN Document Server

    Morishima, Ryuji

    2015-01-01

    We introduce a new particle-based hybrid code for planetary accretion. The code uses an $N$-body routine for interactions with planetary embryos while it can handle a large number of planetesimals using a super-particle approximation, in which a large number of small planetesimals are represented by a small number of tracers. Tracer-tracer interactions are handled by a statistical routine which uses the phase-averaged stirring and collision rates. We compare hybrid simulations with analytic predictions and pure $N$-body simulations for various problems in detail and find good agreements for all cases. The computational load on the portion of the statistical routine is comparable to or less than that for the $N$-body routine. The present code includes an option of hit-and-run bouncing but not fragmentation, which remains for future work.

  14. IRIS: A Generic Three-Dimensional Radiative Transfer Code

    CERN Document Server

    Ibgui, L; Lanz, T; Stehlé, C

    2012-01-01

    We present IRIS, a new generic three-dimensional (3D) spectral radiative transfer code that generates synthetic spectra, or images. It can be used as a diagnostic tool for comparison with astrophysical observations or laboratory astrophysics experiments. We have developed a 3D short-characteristic solver that works with a 3D nonuniform Cartesian grid. We have implemented a piecewise cubic, locally monotonic, interpolation technique that dramatically reduces the numerical diffusion effect. The code takes into account the velocity gradient effect resulting in gradual Doppler shifts of photon frequencies and subsequent alterations of spectral line profiles. It can also handle periodic boundary conditions. This first version of the code assumes Local Thermodynamic Equilibrium (LTE) and no scattering. The opacities and source functions are specified by the user. In the near future, the capabilities of IRIS will be extended to allow for non-LTE and scattering modeling. IRIS has been validated through a number of te...

  15. The (not so) social Simon effect: a referential coding account.

    Science.gov (United States)

    Dolk, Thomas; Hommel, Bernhard; Prinz, Wolfgang; Liepelt, Roman

    2013-10-01

    The joint go-nogo Simon effect (social Simon effect, or joint cSE) has been considered as an index of automatic action/task co-representation. Recent findings, however, challenge extreme versions of this social co-representation account by suggesting that the (joint) cSE results from any sufficiently salient event that provides a reference for spatially coding one's own action. By manipulating the salient nature of reference-providing events in an auditory go-nogo Simon task, the present study indeed demonstrates that spatial reference events do not necessarily require social (Experiment 1) or movement features (Experiment 2) to induce action coding. As long as events attract attention in a bottom-up fashion (e.g., auditory rhythmic features; Experiment 3 and 4), events in an auditory go-nogo Simon task seem to be co-represented irrespective of the agent or object producing these events. This suggests that the cSE does not necessarily imply the co-representation of tasks. The theory of event coding provides a comprehensive account of the available evidence on the cSE: the presence of another salient event requires distinguishing the cognitive representation of one's own action from the representation of other events, which can be achieved by referential coding-the spatial coding of one's action relative to the other events.

  16. Development of a subchannel analysis code MATRA (Ver. {alpha})

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Y. J.; Hwang, D. H

    1998-04-01

    A subchannel analysis code MATRA-{alpha}, an interim version of MATRA, has been developed to run on an IBM PC or HP workstation based on the existing CDC CYBER mainframe version of COBRA-IV-I. The MATRA code is a thermal-hydraulic analysis code based on the subchannel approach for calculating the enthalpy and flow distribution in fuel assemblies and reactor cores for both steady-state and transient conditions. MATRA-{alpha} has been provided with an improved structure, various functions, and models to give a more convenient user environment and to increase the code accuracy. Among them, the pressure drop model has been improved to be applicable to non-square-lattice rod arrays, and the lateral transport models between adjacent subchannels have been improved to increase the accuracy in predicting two-phase flow phenomena. Also included in this report are detailed instructions for input data preparation and for auxiliary pre-processors to serve as a guide to those who want to use MATRA-{alpha}. In addition, we compared the predictions of MATRA-{alpha} with the experimental data on the flow and enthalpy distribution in three sample rod-bundle cases to evaluate the performance of MATRA-{alpha}. All the results revealed that the predictions of MATRA-{alpha} were better than those of COBRA-IV-I. (author). 16 refs., 1 tab., 13 figs.

  17. Imaging The Genetic Code of a Virus

    Science.gov (United States)

    Graham, Jenna; Link, Justin

    2013-03-01

    Atomic Force Microscopy (AFM) has allowed scientists to explore physical characteristics of nano-scale materials. However, the challenges that come with such an investigation are rarely expressed. In this research project a method was developed to image the well-studied DNA of the virus lambda phage. Through testing and integrating several sample preparations described in literature, a quality image of lambda phage DNA can be obtained. In our experiment, we developed a technique using the Veeco Autoprobe CP AFM and mica substrate with an appropriate absorption buffer of HEPES and NiCl2. This presentation will focus on the development of a procedure to image lambda phage DNA at Xavier University. The John A. Hauck Foundation and Xavier University

  18. Start App: a coding experience between primary and secondary school

    Directory of Open Access Journals (Sweden)

    Filippo Bruni

    2016-04-01

    Full Text Available The paper presents a coding experience at a primary school (the “Colozza” comprehensive school in Campobasso). Within the theoretical framework offered by computational thinking, a calculator for smartphones in the Android environment was created using App Inventor. High school students (from the “Marconi” technical secondary school in Campobasso) guided the primary school pupils, creating an interesting form of cooperation between schools of different levels.

  19. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    Science.gov (United States)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O’Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  20. A New Shape-Coding Algorithm by Using Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    石旭利; 张兆杨

    2003-01-01

    In this paper, we propose a new shape-coding algorithm called wavelet-based shape coding (WBSC). Performing wavelet transform on the orientation of original planar curve gives the corners called corner-1 points and end of arcs that belong to the original curve. Each arc is represented by a broken line and the corners called corner-2 points of the broken line are extracted. A polygonal approximation of a contour is an ordered list of corner-1 points, ends of arcs and corner-2 points which are extracted by using the above algorithm. All of the points are called polygonal vertices which will be compressed by our adaptive arithmetic encoding. Experimental results show that our method reduces code bits by about 26% compared with the context-based arithmetic encoding (CAE) of MPEG-4, and the subjective quality of the reconstructed shape is better than that of CAE at the same Dn.

  1. Combined Viterbi Detector for a Balanced Code in Page Memories

    Institute of Scientific and Technical Information of China (English)

    Chen Duan-rong; Xie Chang-sheng; Pei Xian-deng

    2004-01-01

    Based on the observation that the two path metrics are equal at a merged node in the trellis used to describe a Viterbi detector for data encoded with a rate 6:8 balanced binary code in page-oriented optical memories, a combined Viterbi detector scheme is proposed to improve the raw bit-error rate performance by mitigating the occurrence of two-bit reversing error events in the estimated codewords of the balanced code. The effectiveness of the detection scheme is verified for different data quantizations using Monte Carlo simulations.

  2. SCAMPI: A code package for cross-section processing

    Energy Technology Data Exchange (ETDEWEB)

    Parks, C.V.; Petrie, L.M.; Bowman, S.M.; Broadhead, B.L.; Greene, N.M.; White, J.E.

    1996-04-01

    The SCAMPI code package consists of a set of SCALE and AMPX modules that have been assembled to facilitate user needs for the preparation of problem-specific, multigroup cross-section libraries. The function of each module contained in the SCAMPI code package is discussed, along with illustrations of their use in practical analyses. Ideas are presented for future work that can enable one-step processing from a fine-group, problem-independent library to a broad-group, problem-specific library ready for a shielding analysis.

  3. DISTRA: A CODE TO FIND INVISIBLE EXOPLANETS

    Directory of Open Access Journals (Sweden)

    D. D. Carpintero

    2014-01-01

    Full Text Available Given the transit times of an exoplanet, which will differ from a Keplerian series of two-body transits if a second, non-transiting planet is perturbing it, we solve the inverse problem of finding the six orbital elements and the mass of this second planet. This is equivalent to a seven-dimensional optimization problem, in which the function to be minimized is some measure of the difference between the observed transits and those obtained by integrating the three-body problem with the transiting and the invisible planet; the seven dependent variables are the elements and the mass of the latter. We solve this formidable numerical problem in two stages, applying a genetic algorithm as a first step and then polishing the result with a simplex algorithm in 7 dimensions. We apply the algorithm to the Kepler-9 system, in which two planets transit and the second planet therefore has known orbital elements and mass.
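
    The two-stage strategy described above (a global genetic search followed by a seven-dimensional simplex polish) can be sketched as follows; differential evolution stands in for the genetic algorithm, and the objective function, parameter bounds, and synthetic data are placeholders rather than DISTRA's actual implementation.

```python
# Sketch of a two-stage global-then-local fit of observed transit times.
# The transit model is a trivial placeholder; a real implementation would
# integrate the three-body problem for each trial parameter set.
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Hypothetical observed transit times (days); in practice these come from photometry.
rng = np.random.default_rng(5)
observed_transits = np.cumsum(np.full(20, 19.24)) + 0.002 * rng.normal(size=20)

def model_transits(params):
    """Placeholder transit-time model parameterized by 6 elements plus a mass."""
    a, e, inc, omega, Omega, M0, mass = params
    n = np.arange(observed_transits.size)
    return np.cumsum(np.full(20, 19.24)) + 1e3 * mass * np.sin(0.3 * n + M0)

def chi2(params):
    return np.sum((model_transits(params) - observed_transits) ** 2)

bounds = [(0.1, 1.0), (0.0, 0.5), (80.0, 100.0), (0.0, 2 * np.pi),
          (0.0, 2 * np.pi), (0.0, 2 * np.pi), (1e-6, 1e-3)]

# Stage 1: global evolutionary search (standing in for the genetic algorithm)
coarse = differential_evolution(chi2, bounds, seed=1, maxiter=100)

# Stage 2: local polish with a seven-dimensional simplex (Nelder-Mead)
best = minimize(chi2, coarse.x, method="Nelder-Mead")
print(best.x)
```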

  4. EVAPRED - A CODE FOR FATIGUE ANALYSIS OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Dorin LOZICI-BRÎNZEI

    2010-03-01

    Full Text Available Fatigue can, in fact, be defined as: “failure under a repeated or otherwise varying load, which never reaches a level sufficient to cause failure in a single application”. Physical testing is clearly unrealistic for every design component. In most applications, fatigue-safe life design requires the prediction of the component fatigue life, accounting for predicted service loads and materials. The primary tool for both understanding and being able to predict and avoid fatigue has proven to be finite element analysis (FEA). Computer-aided engineering (CAE) programs use three major methods to determine the total fatigue life: stress life (SN), strain life (EN) and fracture mechanics (FM). FEA can predict stress concentration areas and can help design engineers predict how long their designs are likely to last before experiencing the onset of fatigue.

  5. Software Code Maintainability : A Literature Review

    Directory of Open Access Journals (Sweden)

    Berna Seref

    2016-05-01

    Full Text Available Software maintainability is one of the most important quality attributes. To increase the quality of software, to manage software more efficiently, and to decrease its cost, maintainability, maintainability estimation, and maintainability evaluation models have been proposed. However, the practical use of these models in software engineering tools and practice has remained limited due to their limitations or threats to validity. In this paper, the results of our literature review on maintainability models, maintainability metrics, and maintainability estimation are presented. The aim of this paper is to provide a baseline for further research and to serve the needs of developers and customers.

  6. Requirements for a multifunctional code architecture

    Energy Technology Data Exchange (ETDEWEB)

    Tiihonen, O. [VTT Energy (Finland); Juslin, K. [VTT Automation (Finland)

    1997-07-01

    The present paper studies a set of requirements for a multifunctional simulation software architecture in the light of experience gained in developing and using the APROS simulation environment. The huge steps taken in the development of computer hardware and software during the last ten years are changing the status of traditional nuclear safety analysis software. The affordable computing power on the safety analyst's desk by far exceeds the possibilities offered to him/her ten years ago. At the same time, the features of everyday office software tend to set standards for the way the input data and calculational results are managed.

  7. StarFinder: A code for stellar field analysis

    Science.gov (United States)

    Diolaiti, Emiliano; Bendinelli, Orazio; Bonaccini, Domenico; Close, Laird M.; Currie, Doug G.; Parmeggiani, Gianluigi

    2000-11-01

    StarFinder is an IDL code for the deep analysis of stellar fields, designed for Adaptive Optics well-sampled images with high and low Strehl ratio. The Point Spread Function is extracted directly from the frame, to take into account the actual structure of the instrumental response and the atmospheric effects. The code is written in IDL language and organized in the form of a self-contained widget-based application, provided with a series of tools for data visualization and analysis. A description of the method and some applications to Adaptive Optics data are presented.

  8. A Survey of Electric Laser Codes.

    Science.gov (United States)

    1983-06-01

    (The body of this scanned record is only partially legible; it consists of a list of contact names, organizations, and telephone numbers for electric-laser code developers, including groups at JILA/University of Colorado at Boulder, Physical Sciences Inc., Rocketdyne, and Sandia Laboratories.)

  9. A HYDROCHEMICAL HYBRID CODE FOR ASTROPHYSICAL PROBLEMS. I. CODE VERIFICATION AND BENCHMARKS FOR A PHOTON-DOMINATED REGION (PDR)

    Energy Technology Data Exchange (ETDEWEB)

    Motoyama, Kazutaka [National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430 (Japan); Morata, Oscar; Hasegawa, Tatsuhiko [Institute of Astronomy and Astrophysics, Academia Sinica, Taipei 10617, Taiwan (China); Shang, Hsien; Krasnopolsky, Ruben, E-mail: shang@asiaa.sinica.edu.tw [Theoretical Institute for Advanced Research in Astrophysics, Academia Sinica, Taipei 10617, Taiwan (China)

    2015-07-20

    A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves nonequilibrium chemistry and change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H{sub 2} are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is presented based on the PDR benchmark.

  10. A wavelet-based quadtree driven stereo image coding

    Science.gov (United States)

    Bensalma, Rafik; Larabi, Mohamed-Chaker

    2009-02-01

    In this work, a new stereo image coding technique is proposed. The new approach integrates the coding of the residual image with the disparity map, the latter being computed in the wavelet transform domain. The motivation behind using this transform is that it imitates some properties of the human visual system (HVS), particularly the decomposition into perceptual channels. Therefore, using the wavelet transform allows for better preservation of perceptual image quality. In order to estimate the disparity map, we use a quadtree segmentation in each wavelet frequency band. This segmentation has the advantage of minimizing the entropy. Dyadic squares in the subbands of the target image that are not matched with others in the reference image constitute the residual, which is coded using an arithmetic codec. The obtained results are evaluated using the SSIM and PSNR criteria.

  11. A Unique Perspective on Data Coding and Decoding

    Directory of Open Access Journals (Sweden)

    Wen-Yan Wang

    2010-12-01

    Full Text Available The concept of a lossless data compression coding method is proposed, and a detailed description of each of its steps follows. Using the Calgary Corpus and Wikipedia data as the experimental samples and comparing with existing algorithms, like PAQ or PPMstr, the new coding method could not only compress the source data, but also further re-compress the data produced by the other compression algorithms. The final files are smaller, and by comparison with the original compression ratio, at least 1% redundancy could be eliminated. The new method is simple and easy to realize. Its theoretical foundation is currently under study. The corresponding Matlab source code is provided in the Appendix.

  12. Performance of a space-time block coded code division multiple access system over Nakagami-m fading channels

    Science.gov (United States)

    Yu, Xiangbin; Dong, Tao; Xu, Dazhuan; Bi, Guangguo

    2010-09-01

    By introducing an orthogonal space-time coding scheme, multiuser code division multiple access (CDMA) systems with different space time codes are given, and corresponding system performance is investigated over a Nakagami-m fading channel. A low-complexity multiuser receiver scheme is developed for space-time block coded CDMA (STBC-CDMA) systems. The scheme can make full use of the complex orthogonality of space-time block coding to simplify the high decoding complexity of the existing scheme. Compared to the existing scheme with exponential decoding complexity, it has linear decoding complexity. Based on the performance analysis and mathematical calculation, the average bit error rate (BER) of the system is derived in detail for integer m and non-integer m, respectively. As a result, a tight closed-form BER expression is obtained for STBC-CDMA with an orthogonal spreading code, and an approximate closed-form BER expression is attained for STBC-CDMA with a quasi-orthogonal spreading code. Simulation results show that the proposed scheme can achieve almost the same performance as the existing scheme with low complexity. Moreover, the simulation results for average BER are consistent with the theoretical analysis.
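
    The complex orthogonality that the abstract exploits for low-complexity linear decoding is easiest to see in the canonical Alamouti (2x1) orthogonal STBC, sketched below; the paper's actual STBC-CDMA transceiver with spreading codes is more involved, so this is an illustration of the principle only.

```python
# The canonical Alamouti (2x1) orthogonal space-time block code, shown only to
# illustrate the complex orthogonality that enables low-complexity linear
# decoding; the STBC-CDMA transceiver in the record above is more involved.
import numpy as np

def alamouti_encode(s1, s2):
    """Two symbols over two antennas (columns) and two time slots (rows)."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at a single receive antenna; restores s1, s2 up to |h1|^2+|h2|^2."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

# Toy check with a random flat-fading channel and no noise
rng = np.random.default_rng(3)
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]
r2 = h1 * X[1, 0] + h2 * X[1, 1]
print(alamouti_combine(r1, r2, h1, h2))   # proportional to (s1, s2)
```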

  13. APC: A New Code for Atmospheric Polarization Computations

    Science.gov (United States)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  14. CALTRANS: A parallel, deterministic, 3D neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation has culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementation of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  15. A labVIEW Code for PolSK encoding

    CERN Document Server

    Soorat, Ram; Vudayagiri, Ashok

    2015-01-01

    We have developed an integrated software module for use in free-space optical communication using Polarization Shift Keying. The module provides options to read the data to be transmitted from a file, convert this data to on/off code for laser diodes, as well as measure the state of polarization of the received optical pulses. The software bundle consists of separate transmitter and receiver components. The entire protocol involves handshaking commands, data transmission, as well as error correction based on a post-processing Hamming (7,4) code. The module is developed using LabVIEW, a proprietary software development IDE from National Instruments Inc., USA.
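
    The Hamming (7,4) post-processing step mentioned above can be illustrated with the following Python stand-in (the actual module is written in LabVIEW); the generator and parity-check matrices are the standard systematic ones, and the data bits are invented for the example.

```python
# Hamming (7,4) encode and single-error correction, as a Python stand-in for the
# post-processing step described above (the original module is written in LabVIEW).
import numpy as np

# Generator and parity-check matrices for the systematic Hamming (7,4) code
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(nibble):
    return (np.array(nibble) @ G) % 2

def correct(word):
    syndrome = (H @ word) % 2
    if syndrome.any():
        # the syndrome equals the column of H at the error position; flip that bit
        err = np.argmax((H.T == syndrome).all(axis=1))
        word = word.copy()
        word[err] ^= 1
    return word[:4]                       # data bits occupy the first four positions

codeword = encode([1, 0, 1, 1])
received = codeword.copy()
received[5] ^= 1                          # inject a single bit error
print(correct(received))                  # recovers [1, 0, 1, 1]
```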

  16. A Practical View on Tunable Sparse Network Coding

    DEFF Research Database (Denmark)

    Sørensen, Chres Wiant; Shahbaz Badr, Arash; Cabrera Guerrero, Juan Alberto

    2015-01-01

    Tunable sparse network coding (TSNC) constitutes a promising concept for trading off computational complexity and delay performance. This paper advocates for the use of judicious feedback as a key not only to make TSNC practical, but also to deliver a highly consistent and controlled delay ... can result in a radical improvement of the complexity-delay trade-off.

  17. ELEFANT: a user-friendly multipurpose geodynamics code

    Directory of Open Access Journals (Sweden)

    C. Thieulot

    2014-07-01

    Full Text Available A new finite element code for the solution of the Stokes and heat transport equations is presented. It has purposely been designed to address geological flow problems in two and three dimensions at crustal and lithospheric scales. The code relies on the Marker-in-Cell technique, and Lagrangian markers are used to track materials in the simulation domain, which allows recording of the integrated history of deformation; their (number) density is variable and dynamically adapted. A variety of rheologies has been implemented, including nonlinear thermally activated dislocation and diffusion creep and brittle (or plastic) frictional models. The code is built on the Arbitrary Lagrangian Eulerian kinematic description: the computational grid deforms vertically and allows for a true free surface while the computational domain remains of constant width in the horizontal direction. The solution to the large system of algebraic equations resulting from the finite element discretisation and linearisation of the set of coupled partial differential equations to be solved is obtained by means of the efficient parallel direct solver MUMPS, whose performance is thoroughly tested, or by means of the WISMP and AGMG iterative solvers. The code accuracy is assessed by means of many geodynamically relevant benchmark experiments which highlight specific features or algorithms, e.g., the implementation of the free surface stabilisation algorithm, the (visco-)plastic rheology implementation, the temperature advection, and the capacity of the code to handle large viscosity contrasts. A two-dimensional application to salt tectonics, presented as a case study, illustrates the potential of the code to model large-scale, high-resolution, thermo-mechanically coupled free surface flows.

  18. Development of a predictive code for aircrew radiation exposure (PCAIRE)

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, B.J.; Bennett, L.G.I.; Green, A.R.; McCall, M.J.; Ellaschuk, B.; Pierre, M.; Butler, A.; Desormeaux, M. [Royal Military College of Canada, Dept. of Chemistry and Chemical Engineering, Kingston, Ontario (Canada)

    2003-07-01

    Jet aircrew are routinely exposed to levels of natural background radiation (i.e., galactic cosmic radiation) that are significantly higher than those present at ground level. This paper describes the method of collecting and analyzing radiation data from numerous worldwide flights, and the encapsulation of these results into a computer code (PCAIRE) for the prediction of the aircrew radiation exposure on any flight in the world at any period in the solar cycle. Predictions from the PCAIRE code were then compared to integral doses measured at commercial altitudes during experimental flights made by various research groups over the past five years over the given solar cycle. In general, the code predictions are in agreement with the measured data within {+-} 20%. An additional correlation has been developed for estimation of aircrew exposure resulting from solar particle events. (author)

  19. A Self-Terminated Interleaver Design for Turbo Codes

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Turbo codes can achieve excellent performance at low signal-to-noise ratio (SNR), but the performance can be severely degraded if no trellis termination is employed. This paper proves that if trellis termination bits are appended to RSC1, the trellis of RSC2 can be terminated by designing the interleaver properly, and it derives the design condition for such a self-terminated interleaver (STI). We then present an algorithm for implementing one kind of STI, which terminates RSC2 provided that RSC1 is terminated. We verified the performance of the STI for turbo codes by simulation, and the results show that turbo codes with an STI outperform those using interleavers that do not also terminate RSC2.

  20. General Relativistic Smoothed Particle Hydrodynamics code developments: A progress report

    Science.gov (United States)

    Faber, Joshua; Silberman, Zachary; Rizzo, Monica

    2017-01-01

    We report on our progress in developing a new general relativistic Smoothed Particle Hydrodynamics (SPH) code, which will be appropriate for studying the properties of accretion disks around black holes as well as compact object binary mergers and their ejecta. We will discuss in turn the relativistic formalisms being used to handle the evolution, our techniques for dealing with conservative and primitive variables, as well as those used to ensure proper conservation of various physical quantities. Code tests and performance metrics will be discussed, as will the prospects for including smoothed particle hydrodynamics codes within other numerical relativity codebases, particularly the publicly available Einstein Toolkit. We acknowledge support from NSF award ACI-1550436 and an internal RIT D-RIG grant.

  1. A Radiation Shielding Code for Spacecraft and Its Validation

    Science.gov (United States)

    Shinn, J. L.; Cucinotta, F. A.; Singleterry, R. C.; Wilson, J. W.; Badavi, F. F.; Badhwar, G. D.; Miller, J.; Zeitlin, C.; Heilbronn, L.; Tripathi, R. K.

    2000-01-01

    The HZETRN code, which uses a deterministic approach pioneered at NASA Langley Research Center, has been developed over the past decade to evaluate the local radiation fields within sensitive materials (electronic devices and human tissue) on spacecraft in the space environment. The code describes the interactions of shield materials with the incident galactic cosmic rays, trapped protons, or energetic protons from solar particle events in free space and low Earth orbit. The content of incident radiations is modified by atomic and nuclear reactions with the spacecraft and radiation shield materials. High-energy heavy ions are fragmented into less massive reaction products, and reaction products are produced by direct knockout of shield constituents or from de-excitation products. An overview of the computational procedures and database which describe these interactions is given. Validation of the code against recent Monte Carlo benchmarks and against laboratory and flight measurements is also included.

  2. 10 CFR 50.55a - Codes and standards.

    Science.gov (United States)

    2010-01-01

    ..., the ASME Code for Operation and Maintenance of Nuclear Power Plants, ASME Code Case N-729-1, and ASME... the OM Code refer to the ASME Code for Operation and Maintenance of Nuclear Power Plants, and include... Plants Code Cases. Licensees may apply the ASME Operation and Maintenance Nuclear Power......

  3. Code Generation for a Simple First-Order Prover

    DEFF Research Database (Denmark)

    Villadsen, Jørgen; Schlichtkrull, Anders; Halkjær From, Andreas

    2016-01-01

    We present Standard ML code generation in Isabelle/HOL of a sound and complete prover for first-order logic, taking formalizations by Tom Ridge and others as the starting point. We also define a set of so-called unfolding rules and show how to use these as a simple prover, with the aim of using...

  4. A code for hadrontherapy treatment planning with the voxelscan method.

    Science.gov (United States)

    Berga, S; Bourhaleb, F; Cirio, R; Derkaoui, J; Gallice, B; Hamal, M; Marchetto, F; Rolando, V; Viscomi, S

    2000-11-01

    A code for the implementation of treatment planning in hadrontherapy with an active scan beam is presented. The package can determine the fluence and energy of the beams for several thousand voxels in a few minutes. The performance of the program has been tested with a full simulation.

  5. A Learning Environment for English Vocabulary Using Quick Response Codes

    Science.gov (United States)

    Arikan, Yuksel Deniz; Ozen, Sevil Orhan

    2015-01-01

    This study focuses on the process of developing a learning environment that uses tablets and Quick Response (QR) codes to enhance participants' English language vocabulary knowledge. The author employed the concurrent triangulation strategy, a mixed research design. The study was conducted at a private school in Izmir, Turkey during the 2012-2013…

  6. The Genetic Code as a Periodic Table Algebraic Aspects

    CERN Document Server

    Bashford, J D

    2000-01-01

    The systematics of indices of physico-chemical properties of codons and amino acids across the genetic code are examined. Using a simple numerical labelling scheme for nucleic acid bases, data can be fitted as low-order polynomials of the 6 coordinates in the 64-dimensional codon weight space. The work confirms and extends recent studies by Siemion of protein conformational parameters. The connections between the present work, and recent studies of the genetic code structure using dynamical symmetry algebras, are pointed out.

  7. A New Simple Ultrahigh Speed Decoding Algorithm for BCH Code

    Institute of Scientific and Technical Information of China (English)

    唐建军; 纪越峰

    2002-01-01

    In order to meet the needs of forward error correction (FEC) technology in high-speed optical communication systems, a new simple decoding algorithm for a triple-error-correcting Bose-Chaudhuri-Hocquenghem (BCH) code is proposed. Without complicated matrix operations, division operations, or intricate iterative algorithms, the algorithm is highly efficient and fast because of its structural simplicity. Hardware emulation confirms that the algorithm is completely feasible. The introduction of a parallel structure greatly increases the coding speed. The algorithm can be used in high-speed optical communication systems and other fields.

  8. A Study of Code-switching in the College English Classroom

    Institute of Scientific and Technical Information of China (English)

    LEI Chun-xiao

    2015-01-01

    Code-switching is an important domain in sociolinguistics. Since the 1970s, many linguists and experts have attached great importance to it. This paper is a tentative study of code-switching in the teaching of English as a second language (TESL) from the following aspects: a review of code-switching, the principles adhered to in code-switching, the factors that lead to code-switching, and the attitudes towards and functions of code-switching in TESL.

  9. LACEwING: A New Moving Group Analysis Code

    Science.gov (United States)

    Riedel, Adric R.; Blunt, Sarah C.; Lambrides, Erini L.; Rice, Emily L.; Cruz, Kelle L.; Faherty, Jacqueline K.

    2017-03-01

    We present a new nearby young moving group (NYMG) kinematic membership analysis code, LocAting Constituent mEmbers In Nearby Groups (LACEwING), a new Catalog of Suspected Nearby Young Stars, a new list of bona fide members of moving groups, and a kinematic traceback code. LACEwING is a convergence-style algorithm with carefully vetted membership statistics based on a large numerical simulation of the Solar Neighborhood. Given spatial and kinematic information on stars, LACEwING calculates membership probabilities in 13 NYMGs and three open clusters within 100 pc. In addition to describing the inputs, methods, and products of the code, we provide comparisons of LACEwING to other popular kinematic moving group membership identification codes. As a proof of concept, we use LACEwING to reconsider the membership of 930 stellar systems in the Solar Neighborhood (within 100 pc) that have reported measurable lithium equivalent widths. We quantify the evidence in support of a population of young stars not attached to any NYMGs, which is a possible sign of new as-yet-undiscovered groups or of a field population of young stars.

  10. Evaluating QR Code Case Studies Using a Mobile Learning Framework

    Science.gov (United States)

    Rikala, Jenni

    2014-01-01

    The aim of this study was to evaluate the feasibility of Quick Response (QR) codes and mobile devices in the context of Finnish basic education. The feasibility was analyzed through a mobile learning framework, which includes the core characteristics of mobile learning. The study is part of a larger research where the aim is to develop a…

  11. On Predictive Coding for Erasure Channels Using a Kalman Framework

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Murthi, Manohar; Andersen, Søren Vang;

    2009-01-01

    signal. The method is based on linear predictive coding and Kalman estimation at the decoder. We employ a novel encoder state-space representation with a linear quantization noise model. The encoder is represented by the Kalman measurement at the decoder. The presented method designs the encoder...

  12. NERO- a post-maximum supernova radiation transport code

    Science.gov (United States)

    Maurer, I.; Jerkstrand, A.; Mazzali, P. A.; Taubenberger, S.; Hachinger, S.; Kromer, M.; Sim, S.; Hillebrandt, W.

    2011-12-01

    The interpretation of supernova (SN) spectra is essential for deriving SN ejecta properties such as density and composition, which in turn can tell us about their progenitors and the explosion mechanism. A very large number of atomic processes are important for spectrum formation. Several tools for calculating SN spectra exist, but they mainly focus on the very early or late epochs. The intermediate phase, which requires a non-local thermodynamic equilibrium (NLTE) treatment of radiation transport, has rarely been studied. In this paper, we present a new SN radiation transport code, NERO, which can look at those epochs. All the atomic processes are treated in full NLTE, under a steady-state assumption. This is a valid approach between roughly 50 and 500 days after the explosion depending on SN type. This covers the post-maximum photospheric and the early and intermediate nebular phases. As a test, we compare NERO to the radiation transport code of Jerkstrand, Fransson & Kozma and to the nebular code of Mazzali et al. All three codes have been developed independently and a comparison provides a valuable opportunity to investigate their reliability. Currently, NERO is one-dimensional and can be used for predicting spectra of synthetic explosion models or for deriving SN properties by spectral modelling. To demonstrate this, we study the spectra of the 'normal' Type Ia supernova (SN Ia) 2005cf between 50 and 350 days after the explosion and identify most of the common SN Ia line features at post-maximum epochs.

  13. A code for optically thick and hot photoionized media

    CERN Document Server

    Dumont, A M; Collin, S

    2000-01-01

    We describe a code designed for hot media (T $\ge$ a few 10$^4$ K), optically thick to Compton scattering. It computes the structure of a plane-parallel slab of gas in thermal and ionization equilibrium, illuminated on one or on both sides by a given spectrum. Contrary to other photoionization codes, it solves the transfer of the continuum and of the lines in a two-stream approximation, without using the local escape probability formalism to approximate the line transfer. We stress the importance of taking into account the returning flux even for small column densities (10$^{22}$ cm$^{-2}$), and we show that the escape probability approximation can lead to strong errors in the thermal and ionization structure, as well as in the emitted spectrum, for a Thomson thickness larger than a few tenths. The transfer code is coupled with a Monte Carlo code which makes it possible to take into account Compton and inverse Compton diffusions, and to compute the spectrum emitted up to MeV energies, in any geometry. Comparisons ...

  14. A novel chaotic encryption scheme based on arithmetic coding

    Energy Technology Data Exchange (ETDEWEB)

    Mi Bo [Department of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)], E-mail: mi_bo@163.com; Liao Xiaofeng; Chen Yong [Department of Computer Science and Engineering, Chongqing University, Chongqing 400044 (China)

    2008-12-15

    In this paper, under the combination of arithmetic coding and logistic map, a novel chaotic encryption scheme is presented. The plaintexts are encrypted and compressed by using an arithmetic coder whose mapping intervals are changed irregularly according to a keystream derived from chaotic map and plaintext. Performance and security of the scheme are also studied experimentally and theoretically in detail.
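
    As a minimal illustration of deriving a keystream from the logistic map, the sketch below quantizes the chaotic trajectory into key bytes and applies them with a plain XOR; the actual scheme instead perturbs the arithmetic coder's mapping intervals and feeds the plaintext back into the keystream, which is not reproduced here, and all parameters are illustrative.

```python
# Minimal sketch of a logistic-map keystream of the kind the scheme above builds on.
# A plain XOR stream cipher is used here purely to show how such a keystream could
# be applied; the paper instead modifies the arithmetic coder's interval mapping.
def logistic_keystream(x0, r=3.99, n_bytes=16, burn_in=1000):
    """Iterate x <- r*x*(1-x) and quantize the trajectory into key bytes."""
    x = x0
    for _ in range(burn_in):              # discard transient iterations
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(n_bytes):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) & 0xFF)
    return bytes(stream)

key = logistic_keystream(x0=0.613456789)
ciphertext = bytes(p ^ k for p, k in zip(b"hello, world!!!!", key))
print(ciphertext.hex())
```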

  15. NERO - A Post Maximum Supernova Radiation Transport Code

    CERN Document Server

    Maurer, I; Mazzali, P A; Taubenberger, S; Hachinger, S; Kromer, M; Sim, S; Hillebrandt, W

    2011-01-01

    The interpretation of supernova (SN) spectra is essential for deriving SN ejecta properties such as density and composition, which in turn can tell us about their progenitors and the explosion mechanism. A very large number of atomic processes are important for spectrum formation. Several tools for calculating SN spectra exist, but they mainly focus on the very early or late epochs. The intermediate phase, which requires a NLTE treatment of radiation transport has rarely been studied. In this paper we present a new SN radiation transport code, NERO, which can look at those epochs. All the atomic processes are treated in full NLTE, under a steady-state assumption. This is a valid approach between roughly 50 and 500 days after the explosion depending on SN type. This covers the post-maximum photospheric and the early and the intermediate nebular phase. As a test, we compare NERO to the radiation transport code of Jerkstrand et al. (2011) and to the nebular code of Mazzali et al. (2001). All three codes have bee...

  16. A spectral synthesis code for rapid modelling of supernovae

    CERN Document Server

    Kerzendorf, Wolfgang E

    2014-01-01

    We present TARDIS - an open-source code for rapid spectral modelling of supernovae (SNe). Our goal is to develop a tool that is sufficiently fast to allow exploration of the complex parameter spaces of models for SN ejecta. This can be used to analyse the growing number of high-quality SN spectra being obtained by transient surveys. The code uses Monte Carlo methods to obtain a self-consistent description of the plasma state and to compute a synthetic spectrum. It has a modular design to facilitate the implementation of a range of physical approximations that can be compared to assess both accuracy and computational expediency. This will allow users to choose a level of sophistication appropriate for their application. Here, we describe the operation of the code and make comparisons with alternative radiative transfer codes of differing levels of complexity (SYN++, PYTHON, and ARTIS). We then explore the consequence of adopting simple prescriptions for the calculation of atomic excitation, focussing on four sp...

  17. Bio—Cryptography: A Possible Coding Role for RNA Redundancy

    Science.gov (United States)

    Regoli, M.

    2009-03-01

    The RNA-Crypto System (RCS for short) is a symmetric key algorithm to cipher data. The idea for this new algorithm starts from the observation of nature, in particular from the observation of RNA behavior and some of its properties. RNA sequences have some sections called introns. Introns, derived from the term "intragenic regions," are non-coding sections of precursor mRNA (pre-mRNA) or other RNAs, which are removed (spliced out of the RNA) before the mature RNA is formed. Once the introns have been spliced out of a pre-mRNA, the resulting mRNA sequence is ready to be translated into a protein. The corresponding parts of a gene are known as introns as well. The nature and role of introns in pre-mRNA are not yet clear and are under intensive research by biologists; in our case, we use the presence of introns in the RNA-Crypto System output as a strong method to add chaotic non-coding information and to obscure access to the secret key used to code the messages. In the RNA-Crypto System algorithm the introns are sections of the ciphered message carrying non-coding information, just as in the precursor mRNA.

  18. FILM-30: A Heat Transfer Properties Code for Water Coolant

    Energy Technology Data Exchange (ETDEWEB)

    MARSHALL, THERON D.

    2001-02-01

    A FORTRAN computer code has been written to calculate the heat transfer properties at the wetted perimeter of a coolant channel when provided the bulk water conditions. This computer code, titled FILM-30, calculates heat transfer properties using the following correlations: (1) Sieder-Tate: forced convection, (2) Bergles-Rohsenow: onset to nucleate boiling, (3) Bergles-Rohsenow: partially developed nucleate boiling, (4) Araki: fully developed nucleate boiling, (5) Tong-75: critical heat flux (CHF), and (6) Marshall-98: transition boiling. FILM-30 produces output files that provide the heat flux and heat transfer coefficient at the wetted perimeter as a function of temperature. To validate FILM-30, the calculated heat transfer properties were used in finite element analyses to predict internal temperatures for a water-cooled copper mockup under one-sided heating from a rastered electron beam. These predicted temperatures were compared with the measured temperatures from the author's 1994 and 1998 heat transfer experiments. There was excellent agreement between the predicted and experimentally measured temperatures, which confirmed the accuracy of FILM-30 within the experimental range of the tests. FILM-30 can accurately predict the CHF and transition boiling regimes, which is an important advantage over current heat transfer codes. Consequently, FILM-30 is ideal for predicting heat transfer properties for applications that feature high heat fluxes produced by one-sided heating.
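    The FORTRAN source of FILM-30 is not reproduced in the record, but the kind of correlation it evaluates is easy to illustrate. Below is a minimal sketch of the Sieder-Tate forced-convection correlation, Nu = 0.027 Re^0.8 Pr^(1/3) (mu_b/mu_w)^0.14, with all fluid properties supplied by the caller; the function name and argument list are illustrative, not FILM-30's interface.

```python
def sieder_tate_htc(re: float, pr: float, k: float, d_h: float,
                    mu_bulk: float, mu_wall: float) -> float:
    """Forced-convection heat transfer coefficient from the Sieder-Tate correlation.

    re       -- Reynolds number of the coolant flow (fully turbulent regime assumed)
    pr       -- Prandtl number at the bulk temperature
    k        -- thermal conductivity of the coolant [W/(m K)]
    d_h      -- hydraulic diameter of the channel [m]
    mu_bulk  -- dynamic viscosity at the bulk temperature [Pa s]
    mu_wall  -- dynamic viscosity at the wall temperature [Pa s]
    """
    nu = 0.027 * re**0.8 * pr**(1.0 / 3.0) * (mu_bulk / mu_wall)**0.14
    return nu * k / d_h            # heat transfer coefficient h [W/(m^2 K)]
```

    FILM-30 chains such correlations together, switching from forced convection to nucleate, CHF and transition boiling as the wall superheat grows.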

  19. Parallelization of a Monte Carlo particle transport simulation code

    Science.gov (United States)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language for improving code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with more accurate physical models, and improve statistics since more particle tracks can be simulated in a short response time.
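    The MC4 sources are not shown in the record, but the parallelization pattern described (independent particle histories per rank, separate random streams, results reduced at the end) can be sketched with mpi4py. This is a generic skeleton with placeholder physics, not MC4 itself, and it uses a simple per-rank seed instead of the SPRNG/DCMT streams mentioned above.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_TOTAL = 10_000_000                                  # total particle histories
n_local = N_TOTAL // size + (1 if rank < N_TOTAL % size else 0)

rng = np.random.default_rng(seed=12345 + rank)        # one stream per rank
local_deposit = 0.0
for _ in range(n_local):
    # Placeholder physics: exponential free path, crude energy deposition.
    path = rng.exponential(scale=1.0)
    local_deposit += min(path, 1.0)

total = comm.reduce(local_deposit, op=MPI.SUM, root=0)
if rank == 0:
    print(f"mean deposition per history: {total / N_TOTAL:.6f}")
```

    Because histories are independent, the speedup stays close to linear as long as each rank has enough work, which is what the timing study above reports for large problem sizes.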

  20. Ensuring quality in the coding process: A key differentiator for the accurate interpretation of safety data

    Directory of Open Access Journals (Sweden)

    G Jaya Nair

    2013-01-01

    Full Text Available Medical coding and dictionaries for clinical trials have seen a wave of change over the past decade, in which emphasis on more standardized tools for coding and reporting clinical data has taken precedence. Coding forms the backbone of clinical reporting, as safety data reports primarily depend on the coded data. Hence, maintaining optimum coding quality is essential to the accurate analysis and interpretation of critical clinical data. The perception that medical coding is merely a process of assigning numeric/alphanumeric codes to clinical data needs to be revisited. The significance of quality coding and its impact on clinical reporting has been highlighted in this article.

  1. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe the codes succinctly using Gröbner bases.

  2. A Method of Coding and Decoding in Underwater Image Transmission

    Institute of Scientific and Technical Information of China (English)

    程恩

    2001-01-01

    A new method of coding and decoding in the system of underwater image transmission is introduced, including a rapid digital frequency synthesizer for multiple frequency shift keying, an image data generator, an image grayscale decoder with an intelligent fuzzy algorithm, and image restoration and display on a microcomputer.

  3. 20-Sim ANSI-C code on a 8051 target

    NARCIS (Netherlands)

    Geerlings, Joël

    2001-01-01

    In the forthcoming version of 20-sim the option of code generation for targets will be available. After selection of a template, it is filled in with model-specific information. Then this adapted template can be compiled and linked such that it can be run on the target. Theo Lammerink designed around t

  4. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Full Text Available Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  5. Construction and decoding of a class of algebraic geometry codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Jensen, Helge Elbrønd;

    1989-01-01

    A class of codes derived from algebraic plane curves is constructed. The concepts and results from algebraic geometry that were used are explained in detail; no further knowledge of algebraic geometry is needed. Parameters, generator and parity-check matrices are given. The main result...

  6. Connecting Neural Coding to Number Cognition: A Computational Account

    Science.gov (United States)

    Prather, Richard W.

    2012-01-01

    The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…

  7. Broadcast Coded Slotted ALOHA: A Finite Frame Length Analysis

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Brännström, Frederik; Graell i Amat, Alexandre;

    2016-01-01

    We propose an uncoordinated medium access control (MAC) protocol, called all-to-all broadcast coded slotted ALOHA (B-CSA) for reliable all-to-all broadcast with strict latency constraints. In B-CSA, each user acts as both transmitter and receiver in a half-duplex mode. The half-duplex mode gives...

  8. Code-Switching in a College Mathematics Classroom

    Science.gov (United States)

    Chitera, Nancy

    2009-01-01

    This paper presents the findings that explored from the discourse practices of the mathematics teacher educators in initial teacher training colleges in Malawi. It examines how mathematics teacher educators construct a multilingual classroom and how they view code-switching. The discussion is based on pre-observation interviews with four…

  9. Harmfulness of Code Duplication - A Structured Review of the Evidence

    NARCIS (Netherlands)

    Hordijk, Wiebe; Ponisio, María Laura; Wieringa, Roel

    2009-01-01

    Duplication of code has long been thought to decrease changeability of systems, but recently doubts have been expressed whether this is true in general. This is a problem for researchers because it makes the value of research aimed against clones uncertain, and for practitioners as they cannot be su

  10. Tagalog-English Code Switching as a Mode of Discourse

    Science.gov (United States)

    Bautista, Maria Lourdes S.

    2004-01-01

    The alternation of Tagalog and English in informal discourse is a feature of the linguistic repertoire of educated, middle- and upper-class Filipinos. This paper describes the linguistic structure and sociolinguistic functions of Tagalog-English code switching (Taglish) as provided by various researchers through the years. It shows that the…

  11. CERN access card: Introduction of a bar code

    CERN Multimedia

    Relations with the Host States Service

    2004-01-01

    Before the latest version of the implementation measures relating to Operational Circular No. 2 comes into force, we would like to inform you that, in future, CERN access cards may bear a bar code to transcribe the holder's identification number. Relations with the Host States Service http://www.cern.ch/relations/ Tel. 72848

  12. CERN access cards - Introduction of a bar code (Reminder)

    CERN Multimedia

    Relations with the Host States Service

    2004-01-01

    In accordance with the latest revised version of the implementation measures relating to Operational Circular No. 2, CERN access cards may bear a bar code transcribing the holder's identification number (the revised version of this subsidiary document to the aforementioned Circular will be published shortly). Relations with the Host States Service http://www.cern.ch/relations/ relations.secretariat@cern.ch Tel. 72848

  13. A semianalytic Monte Carlo code for modelling LIDAR measurements

    Science.gov (United States)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio

    2007-10-01

    LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements appears to be a useful approach for evaluating the effects of various environmental variables and scenarios as well as of different measurement geometries and instrumental characteristics. In this regard a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions due to the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are also evaluated. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected value calculations are performed. Artificial devices (such as forced collision, local forced collision, splitting and Russian roulette) are also provided by the code, enabling the user to drastically reduce the variance of the calculation.
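    The semianalytic Monte Carlo code itself handles multiple scattering and the variance-reduction devices listed above; in the single-scattering limit it must reduce to the elastic lidar equation P(R) ∝ O(R) β(R) / R² exp(-2∫α dr). Below is a minimal sketch of that limit only, with illustrative profile values, not the ISAC-CNR code.

```python
import numpy as np

def single_scatter_return(r, beta, alpha, c_sys=1.0, overlap=None):
    """Elastic single-scattering lidar signal on a range grid r [m].

    beta    -- volume backscatter coefficient profile [1/(m sr)]
    alpha   -- extinction coefficient profile [1/m]
    c_sys   -- lumped system constant (pulse energy, telescope area, efficiency)
    overlap -- laser/telescope overlap function O(R), defaults to 1
    """
    if overlap is None:
        overlap = np.ones_like(r)
    # Two-way transmission: exp(-2 * cumulative trapezoidal integral of alpha).
    tau = np.concatenate(
        ([0.0], np.cumsum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(r))))
    return c_sys * overlap * beta / r**2 * np.exp(-2.0 * tau)

# Toy molecular background plus an aerosol layer at 2 km (illustrative numbers).
r = np.linspace(100.0, 10_000.0, 500)
beta = 1.5e-6 * np.exp(-r / 8000.0) + 5e-7 * np.exp(-((r - 2000.0) / 300.0) ** 2)
alpha = 50.0 * beta                    # crude lidar ratio of 50 sr
signal = single_scatter_return(r, beta, alpha)
```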

  14. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    Science.gov (United States)

    Massimo, F.; Atzeni, S.; Marocchino, A.

    2016-12-01

    Architect, a time explicit hybrid code designed to perform quick simulations for electron driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle in Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically as in a PIC code and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms and a comparison with a fully three-dimensional particle-in-cell code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models only disagree in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.

  15. A New Efficient Hybrid Coding For Progressive Transmission Of Images

    Science.gov (United States)

    Akansu, Ali N.; Haddad, Richard A.

    1988-10-01

    The hybrid coding technique developed here combines two concepts: progressive interactive image transmission and transform differential coding. There are two notable features in this approach. First, a local average of an mxm (typically 5 x 5) pixel array is formed, quantized and transmitted to the receiver for a preliminary display. This initial pass provides a crude but recognizable image before any further processing or encoding. Upon request from the receiver, the technique then switches to an iterative transform differential encoding scheme. Each iteration progressively provides more image detail at the receiver as requested. Secondly, this hybrid coding technique uses a computationally efficient, real, orthogonal transform, called the Modified Hermite Transform (MHT) [1], to encode the difference image. This MHT is then compared with the Discrete Cosine Transform (DCT) [2] for the same hybrid algorithm. For the standard images tested, we found that the progressive differential coding method performs comparably to the well-known direct transform coding methods. The DCT was used as the standard in this traditional approach. This hybrid technique was within 5% of SNR peak-to-peak for the "LENA" image. Comparisons between MHT and DCT as the transform vehicle for the hybrid technique were also conducted. For a transform block size N=8, the DCT requires 50% more multiplications than the MHT. The price paid for this efficiency is modest. For the example tested ("LENA"), the DCT performance gain was 4.2 dB while the MHT was 3.8 dB.
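    A minimal sketch of the two-stage idea can be written in a few lines of Python: a coarse m x m block average serves as the preview, and one refinement pass transform-codes the difference image block by block. The sketch uses the DCT from SciPy in place of the Modified Hermite Transform, and a crude keep-the-largest-coefficients rule instead of a real quantizer; it is an illustration of the scheme, not the authors' implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_average(img: np.ndarray, m: int = 5) -> np.ndarray:
    """Crude preview: mean of each m x m block, expanded back to pixel resolution."""
    h, w = img.shape
    h2, w2 = h - h % m, w - w % m                       # trim to a multiple of m
    blocks = img[:h2, :w2].reshape(h2 // m, m, w2 // m, m).mean(axis=(1, 3))
    return np.kron(blocks, np.ones((m, m)))             # nearest-neighbour upsample

def refine(img: np.ndarray, preview: np.ndarray, keep: int = 16, n: int = 8):
    """One refinement pass: transform-code the difference image in n x n blocks,
    keeping only the `keep` largest coefficients of each block."""
    diff = img[:preview.shape[0], :preview.shape[1]] - preview
    detail = np.zeros_like(diff)
    for i in range(0, diff.shape[0] - n + 1, n):
        for j in range(0, diff.shape[1] - n + 1, n):
            c = dctn(diff[i:i + n, j:j + n], norm="ortho")
            thr = np.sort(np.abs(c).ravel())[-keep]     # crude coefficient selection
            c[np.abs(c) < thr] = 0.0
            detail[i:i + n, j:j + n] = idctn(c, norm="ortho")
    return preview + detail                             # progressively better image
```

    Repeated passes with more retained coefficients give the progressive behaviour described above; only the retained coefficients of the difference image need to be transmitted at each step.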

  16. Numerical simulations of hydrodynamic instabilities: Perturbation codes PANSY, PERLE, and 2D code CHIC applied to a realistic LIL target

    Science.gov (United States)

    Hallo, L.; Olazabal-Loumé, M.; Maire, P. H.; Breil, J.; Morse, R.-L.; Schurtz, G.

    2006-06-01

    This paper deals with simulations of ablation front instabilities in the context of direct drive ICF. A simplified DT target, representative of a realistic target on LIL, is considered. We describe here two numerical approaches: the linear perturbation method using the perturbation codes Perle (planar) and Pansy (spherical), and the direct simulation method using our two-dimensional hydrodynamic code Chic. Numerical solutions are shown to converge, in good agreement with analytical models.

  17. The Plasma Simulation Code: A modern particle-in-cell code with load-balancing and GPU support

    CERN Document Server

    Germaschewski, Kai; Ahmadi, Narges; Wang, Liang; Abbott, Stephen; Ruhl, Hartmut; Bhattacharjee, Amitava

    2013-01-01

    Recent increases in supercomputing power, driven by the multi-core revolution and accelerators such as the IBM Cell processor, graphics processing units (GPUs) and Intel's Many Integrated Core (MIC) technology have enabled kinetic simulations of plasmas at unprecedented resolutions, but changing HPC architectures also come with challenges for writing efficient numerical codes. This paper describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We focus on two distinguishing features of the code: patch-based load balancing using space-filling curves, and support for Nvidia GPUs, which achieves a substantial speed-up of more than 6x on the Cray XK7 architecture compared to a CPU-only implementation.

  18. A Secure Network Coding Based on Broadcast Encryption in SDN

    Directory of Open Access Journals (Sweden)

    Yue Chen

    2016-01-01

    Full Text Available By allowing intermediate nodes to encode the received packets before sending them out, network coding improves the capacity and robustness of multicast applications. However, it is vulnerable to pollution attacks. Some signature schemes were proposed to thwart such attacks, but most of them need to be homomorphic, so that keys cannot be generated and managed easily. In this paper, we propose a novel fast and secure switch network coding multicast (SSNC) scheme on software defined networks (SDN). In our scheme, the complicated secure multicast management is separated from the fast data transmission based on the SDN. Multiple multicasts are aggregated into one multicast group according to the requirements of services and the network status. Then, the controller routes the aggregated multicast group with network coding; only trusted switches are allowed to join the network coding, by using broadcast encryption. The proposed scheme can use traditional cryptography without homomorphy, which greatly reduces the complexity of the computation and improves the efficiency of transmission.

  19. DgSMC-B code: A robust and autonomous direct simulation Monte Carlo code for arbitrary geometries

    Science.gov (United States)

    Kargaran, H.; Minuchehr, A.; Zolfaghari, A.

    2016-07-01

    In this paper, we describe the structure of a new Direct Simulation Monte Carlo (DSMC) code that takes advantage of combinatorial geometry (CG) to simulate rarefied gas flows in arbitrary media. The developed code, called DgSMC-B, has been written in FORTRAN90 with parallel-processing capability using the OpenMP framework. DgSMC-B is capable of handling 3-dimensional (3D) geometries, which are created with first- and second-order surfaces. It performs independent particle tracking for the complex geometry without the intervention of a mesh. In addition, it resolves the computational domain boundary and computes volumes in border grids using a hexahedral mesh. The developed code is robust and self-contained, and does not rely on any separate code such as a mesh generator. The results of six test cases have been presented to indicate its ability to deal with a wide range of benchmark problems with sophisticated geometries such as the NACA 0012 airfoil. The DgSMC-B code demonstrates its performance and accuracy in a variety of problems. The results are found to be in good agreement with references and experimental data.

  20. Code Switching and Code-Mixing as a Communicative Strategy in Multilingual Discourse.

    Science.gov (United States)

    Tay, Mary W. J.

    1989-01-01

    Examines how code switching and mixing are used as communication strategies in multilingual communities and discusses how to establish solidarity and rapport in multilingual discourse. Examples from the main languages spoken in Singapore--English, Mandarin, Hokkien, and Teochew--are used. (Author/OD)

  1. The Energy Code has been passed; Le code de l'energie a ete adopte

    Energy Technology Data Exchange (ETDEWEB)

    Roche, C. [Faculte de droit et des sciences sociales de Poitiers, 86 (France)

    2011-05-15

    The Energy Code has been passed by the order 2011-504 (May 9, 2011). It deals with (1) the general organization of the energy sector (2) the control of the energy demand and of the renewable energy sources (3) the electric power (4) the natural gas (5) the hydroelectric power (6) the petroleum and (7) the heat and cold systems. (O.M.)

  2. ICOOL: A SIMULATION CODE FOR IONIZATION COOLING OF MUON BEAMS.

    Energy Technology Data Exchange (ETDEWEB)

    FERNOW,R.C.

    1999-03-25

    Current ideas [1,2] for designing a high luminosity muon collider require significant cooling of the phase space of the muon beams. The only known method that can cool the beams in a time comparable to the muon lifetime is ionization cooling [3,4]. This method requires directing the particles in the beam at a large angle through a low Z absorber material in a strong focusing magnetic channel and then restoring the longitudinal momentum with an rf cavity. We have developed a new 3-D tracking code ICOOL for examining possible configurations for muon cooling. A cooling system is described in terms of a series of longitudinal regions with associated material and field properties. The tracking takes place in a coordinate system that follows a reference orbit through the system. The code takes into account decays and interactions of approximately 50-500 MeV/c muons in matter. Material geometry regions include cylinders and wedges. A number of analytic models are provided for describing the field configurations. Simple diagnostics are built into the code, including calculation of emittances and correlations, longitudinal traces, histograms and scatter plots. A number of auxiliary files can be generated for post-processing analysis by the user.

  3. Nexus: A modular workflow management system for quantum simulation codes

    Science.gov (United States)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  4. Construction of Large Constant Dimension Codes With a Prescribed Minimum Distance

    CERN Document Server

    Kohnert, Axel

    2008-01-01

    In this paper we construct constant dimension space codes with prescribed minimum distance. There is an increased interest in space codes since a paper by Koetter and Kschischang, where they gave an application in network coding. There is also a connection to the theory of designs over finite fields. We modify a method of Braun, Kerber and Laue, which they used for the construction of designs over finite fields, to construct space codes. Using this approach we found many new constant dimension space codes with a larger number of codewords than previously known codes. We finally give a table of the best constant dimension space codes found.

  5. CHOLLA: A New Massively Parallel Hydrodynamics Code for Astrophysical Simulation

    Science.gov (United States)

    Schneider, Evan E.; Robertson, Brant E.

    2015-04-01

    We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256³) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.

  6. A Plastic Temporal Brain Code for Conscious State Generation

    Directory of Open Access Journals (Sweden)

    Birgitta Dresp-Langley

    2009-01-01

    Full Text Available Consciousness is known to be limited in processing capacity and often described in terms of a unique processing stream across a single dimension: time. In this paper, we discuss a purely temporal pattern code, functionally decoupled from spatial signals, for conscious state generation in the brain. Arguments in favour of such a code include Dehaene et al.'s long-distance reverberation postulate, Ramachandran's remapping hypothesis, evidence for a temporal coherence index and coincidence detectors, and Grossberg's Adaptive Resonance Theory. A time-bin resonance model is developed, where temporal signatures of conscious states are generated on the basis of signal reverberation across large distances in highly plastic neural circuits. The temporal signatures are delivered by neural activity patterns which, beyond a certain statistical threshold, activate, maintain, and terminate a conscious brain state like a bar code would activate, maintain, or inactivate the electronic locks of a safe. Such temporal resonance would reflect a higher level of neural processing, independent from sensorial or perceptual brain mechanisms.

  7. The 2010 fib Model Code for Structural Concrete: A new approach to structural engineering

    NARCIS (Netherlands)

    Walraven, J.C.; Bigaj-Van Vliet, A.

    2011-01-01

    The fib Model Code is a recommendation for the design of reinforced and prestressed concrete which is intended to be a guiding document for future codes. Model Codes have been published before, in 1978 and 1990. The draft for fib Model Code 2010 was published in May 2010. The most important new elem

  8. The ICPC coding system in pharmacy : developing a subset, ICPC-Ph

    NARCIS (Netherlands)

    van Mil, JWF; Brenninkmeijer, R; Tromp, TFJ

    1998-01-01

    The ICPC system is a coding system developed for general medical practice, to be able to code the GP-patient encounters and other actions. Some of the codes can be easily used by community pharmacists to code complaints and diseases in pharmaceutical care practice. We developed a subset of the ICPC

  9. Global ISR: Toward a Comprehensive Defense Against Unauthorized Code Execution

    Science.gov (United States)

    2010-10-01

    implementations of ISR, we could also use AES encryption with 128-bit blocks of code. We adopt a different approach to protect against key guessing...when downloading content. An attacker can perform SQL injection using this variable, to retrieve all user passwords (e.g., by appending select pass...this problem by using a stronger encryption algorithm such as AES or bit transposition for the randomization, possibly taking a performance hit. As we

  10. IRIS: a generic three-dimensional radiative transfer code

    Science.gov (United States)

    Ibgui, L.; Hubeny, I.; Lanz, T.; Stehlé, C.

    2013-01-01

    Context. For most astronomical objects, radiation is the only probe of their physical properties. Therefore, it is important to have the most elaborate theoretical tool to interpret observed spectra or images, thus providing invaluable information to build theoretical models of the physical nature, the structure, and the evolution of the studied objects. Aims: We present IRIS, a new generic three-dimensional (3D) spectral radiative transfer code that generates synthetic spectra, or images. It can be used as a diagnostic tool for comparison with astrophysical observations or laboratory astrophysics experiments. Methods: We have developed a 3D short-characteristic solver that works with a 3D nonuniform Cartesian grid. We have implemented a piecewise cubic, locally monotonic, interpolation technique that dramatically reduces the numerical diffusion effect. The code takes into account the velocity gradient effect resulting in gradual Doppler shifts of photon frequencies and subsequent alterations of spectral line profiles. It can also handle periodic boundary conditions. This first version of the code assumes local thermodynamic equilibrium (LTE) and no scattering. The opacities and source functions are specified by the user. In the near future, the capabilities of IRIS will be extended to allow for non-LTE and scattering modeling. Results: IRIS has been validated through a number of tests. We provide the results for the most relevant ones, in particular a searchlight beam test, a comparison with a 1D plane-parallel model, and a test of the velocity gradient effect. Conclusions: IRIS is a generic code to address a wide variety of astrophysical issues applied to different objects or structures, such as accretion shocks, jets in young stellar objects, stellar atmospheres, exoplanet atmospheres, accretion disks, rotating stellar winds, cosmological structures. It can also be applied to model laboratory astrophysics experiments, such as radiative shocks produced with high

  11. On a stochastic approach to a code performance estimation

    Science.gov (United States)

    Gorshenin, Andrey K.; Frenkel, Sergey L.; Korolev, Victor Yu.

    2016-06-01

    The main goal of efficient profiling of software is to minimize the runtime overhead under certain constraints and requirements. The traces built by a profiler during its work affect the performance of the system itself. One important aspect of the overhead arises from the random variability of the context in which the application is embedded, e.g., due to possible cache misses, etc. Such uncertainty needs to be taken into account in the design phase. In order to overcome these difficulties we propose to investigate this issue through the analysis of the probability distribution of the difference between the profiler's times for the same code. The approximating model is based on finite normal mixtures within the framework of the method of moving separation of mixtures. We demonstrate some results for the MATLAB profiler using 3D surface plots produced by the function surf. The idea can be used for estimating program efficiency.
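    The idea can be illustrated with a short experiment: time the same code repeatedly, form the differences between successive runs, and fit finite normal mixtures of increasing order. The sketch below uses plain wall-clock timing and scikit-learn's EM-based GaussianMixture rather than the MATLAB profiler and the moving separation of mixtures method used by the authors; the workload is a stand-in.

```python
import time
import numpy as np
from sklearn.mixture import GaussianMixture

def measure(fn, repeats: int = 200) -> np.ndarray:
    """Wall-clock times of repeated executions of the same code."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return np.asarray(times)

def workload():
    np.linalg.svd(np.random.rand(100, 100))       # stand-in for the profiled code

t = measure(workload)
diffs = np.diff(t).reshape(-1, 1)                 # differences between successive runs

# Fit finite normal mixtures of increasing order and pick one by BIC.
best = min((GaussianMixture(n_components=k, random_state=0).fit(diffs)
            for k in range(1, 5)),
           key=lambda g: g.bic(diffs))
print("components:", best.n_components,
      "means:", best.means_.ravel(), "weights:", best.weights_)
```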

  12. Improving a Power Line Communications Standard with LDPC Codes

    Directory of Open Access Journals (Sweden)

    Hsu Christine

    2007-01-01

    Full Text Available We investigate a power line communications (PLC) scheme that could be used to enhance the HomePlug 1.0 standard, specifically its ROBO mode, which provides modest throughput for the worst case PLC channel. The scheme is based on using a low-density parity-check (LDPC) code, in lieu of the concatenated Reed-Solomon and convolutional codes in ROBO mode. The PLC channel is modeled with multipath fading and Middleton's class A noise. Clipping is introduced to mitigate the effect of impulsive noise. A simple and effective method is devised to estimate the variance of the clipped noise for LDPC decoding. Simulation results show that the proposed scheme outperforms the HomePlug 1.0 ROBO mode and has lower computational complexity. The proposed scheme also dispenses with the repetition of information bits in ROBO mode to gain time diversity, resulting in a 4-fold increase in physical layer throughput.
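    The clipping step is simple to sketch. The snippet below clips the received samples, estimates the variance of the residual noise around the nearest BPSK symbol (a crude stand-in for the estimator devised in the paper, which is not reproduced in the record) and forms the channel LLRs that would be passed to an LDPC decoder; the channel parameters are toy values.

```python
import numpy as np

def clip(r: np.ndarray, a: float) -> np.ndarray:
    """Limit impulsive-noise excursions to +/- a before decoding."""
    return np.clip(r, -a, a)

def llr_bpsk(r_clipped: np.ndarray, sigma2: float) -> np.ndarray:
    """Channel LLRs for BPSK (+1/-1 mapping) under a Gaussian noise assumption."""
    return 2.0 * r_clipped / sigma2

# Toy BPSK transmission over a Gaussian channel with occasional impulses.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 10_000)
x = 1.0 - 2.0 * bits
impulses = (rng.random(x.size) < 0.01) * rng.normal(0.0, 10.0, x.size)
r = x + rng.normal(0.0, 0.5, x.size) + impulses

a = 3.0                                          # clipping level (a design parameter)
rc = clip(r, a)
sigma2_hat = np.mean((rc - np.sign(rc)) ** 2)    # crude clipped-noise variance estimate
llrs = llr_bpsk(rc, sigma2_hat)                  # fed to the LDPC decoder (not shown)
```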

  13. ACDOS3: a further improved neutron dose-rate code

    Energy Technology Data Exchange (ETDEWEB)

    Martin, C.S.

    1982-07-01

    ACDOS3 is a computer code designed primarily to calculate the activities and dose rates produced by neutron activation in a variety of simple geometries. Neutron fluxes, in up to 50 groups and with energies up to 20 MeV, must be supplied as part of the input data. The neutron-source strength must also be supplied, or, alternatively, the code will compute it from neutral-beam operating parameters in the case where the source is a fusion-reactor injector. ACDOS3 differs from the previous version ACDOS2 in that additional geometries have been added, the neutron cross-section library has been updated, an estimate of the energy deposited by neutron reactions has been provided, and a significant increase in efficiency in reading the data libraries has been incorporated.

  14. RAMSES-CH: a new chemodynamical code for cosmological simulations

    Science.gov (United States)

    Few, C. G.; Courty, S.; Gibson, B. K.; Kawata, D.; Calura, F.; Teyssier, R.

    2012-07-01

    We present a new chemodynamical code -RAMSES-CH- for use in simulating the self-consistent evolution of chemical and hydrodynamical properties of galaxies within a fully cosmological framework. We build upon the adaptive mesh refinement code RAMSES, which includes a treatment of self-gravity, hydrodynamics, star formation, radiative cooling and supernova feedback, to trace the dominant isotopes of C, N, O, Ne, Mg, Si and Fe. We include the contribution of Type Ia and Type II supernovae, in addition to low- and intermediate-mass asymptotic giant branch stars, relaxing the instantaneous recycling approximation. The new chemical evolution modules are highly flexible and portable, lending themselves to ready exploration of variations in the underpinning stellar and nuclear physics. We apply RAMSES-CH to the cosmological simulation of a typical L★ galaxy, demonstrating the successful recovery of the basic empirical constraints regarding [α/Fe]-[Fe/H] and Type Ia/II supernova rates.

  15. RAMSES-CH: A New Chemodynamical Code for Cosmological Simulations

    CERN Document Server

    Few, C Gareth; Gibson, Brad K; Kawata, Daisuke; Calura, Francesco; Teyssier, Romain

    2012-01-01

    We present a new chemodynamical code - Ramses-CH - for use in simulating the self-consistent evolution of chemical and hydrodynamical properties of galaxies within a fully cosmological framework. We build upon the adaptive mesh refinement code Ramses, which includes a treatment of self-gravity, hydrodynamics, star formation, radiative cooling, and supernova feedback, to trace the dominant isotopes of C, N, O, Ne, Mg, Si, and Fe. We include the contribution of Type Ia and II supernovae, in addition to low- and intermediate-mass asymptotic giant branch stars, relaxing the instantaneous recycling approximation. The new chemical evolution modules are highly flexible and portable, lending themselves to ready exploration of variations in the underpinning stellar and nuclear physics. We apply Ramses-CH to the cosmological simulation of a typical L★ galaxy, demonstrating the successful recovery of the basic empirical constraints regarding [α/Fe]-[Fe/H] and Type Ia/II supernova rates.

  16. Prodeto, a computer code for probabilistic fatigue design

    Energy Technology Data Exchange (ETDEWEB)

    Braam, H. [ECN-Solar and Wind Energy, Petten (Netherlands); Christensen, C.J.; Thoegersen, M.L. [Risoe National Lab., Roskilde (Denmark); Ronold, K.O. [Det Norske Veritas, Hoevik (Norway)

    1999-03-01

    A computer code for structural reliability analyses of wind turbine rotor blades subjected to fatigue loading is presented. With pre-processors that can transform measured and theoretically predicted load series to load range distributions by rain-flow counting, and with a family of generic distribution models for parametric representation of these distributions, this computer program is available for carrying out probabilistic fatigue analyses of rotor blades. (au)

  17. A Cooperative Network Coding Strategy for the Interference Relay Channel

    CERN Document Server

    Bui, Huyen-Chi; Lacan, Jerome; Boucheret, Marie-Laure

    2012-01-01

    In this paper, we study an interference relay network with a satellite as relay. We propose a cooperative strategy based on physical layer network coding and superposition modulation decoding for uni-directional communications among users. The performance of our solution in terms of throughput is evaluated through capacity analysis and simulations that include practical constraints such as the lack of synchronization in time and frequency. We demonstrate throughputs significantly larger than the classical time sharing case.

  18. A User-Friendly Code to Diagnose Chromospheric Plasmas

    OpenAIRE

    2007-01-01

    The physical interpretation of spectropolarimetric observations of lines of neutral helium, such as those of the 10830 A multiplet, represents an excellent opportunity for investigating the magnetism of plasma structures in the solar chromosphere. Here we present a powerful forward modeling and inversion code that permits either to calculate the emergent intensity and polarization for any given magnetic field vector or to infer the dynamical and magnetic properties from the observed Stokes pr...

  19. A Dynamic Programming Approach To Length-Limited Huffman Coding

    CERN Document Server

    Golin, Mordecai

    2008-01-01

    The "state-of-the-art" in Length-Limited Huffman Coding algorithms is the $\Theta(ND)$-time, $\Theta(N)$-space one of Hirschberg and Larmore, where $D\le N$ is the length restriction on the code. This is a very clever, very problem specific, technique. In this note we show that there is a simple Dynamic-Programming (DP) method that solves the problem with the same time and space bounds. The fact that there was a $\Theta(ND)$-time DP algorithm was previously known; it is a straightforward DP with the Monge property (which permits an order of magnitude speedup). It was not interesting, though, because it also required $\Theta(ND)$ space. The main result of this paper is the technique developed for reducing the space. It is quite simple and applicable to many other problems modeled by DPs with the Monge property. We illustrate this with examples from web-proxy design and wireless mobile paging.
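    The note's space-efficient DP itself is not reproduced in the record. As a concrete reference point for the problem it solves, here is a sketch of the classic package-merge construction of Larmore and Hirschberg, which returns optimal codeword lengths subject to a maximum length D; it is not the DP described above.

```python
def package_merge(weights, D):
    """Optimal codeword lengths with every length <= D (package-merge sketch)."""
    n = len(weights)
    assert n >= 2 and 2 ** D >= n, "length limit too tight"
    base = sorted(((w, [i]) for i, w in enumerate(weights)), key=lambda x: x[0])
    items = []
    for _ in range(D):
        # Package adjacent pairs from the previous list, then merge with the base coins.
        packages = [(items[j][0] + items[j + 1][0], items[j][1] + items[j + 1][1])
                    for j in range(0, len(items) - 1, 2)]
        items = sorted(base + packages, key=lambda x: x[0])
    lengths = [0] * n
    for _, syms in items[:2 * n - 2]:      # the cheapest 2n-2 items are selected
        for s in syms:
            lengths[s] += 1                # each selection deepens the symbol by one
    return lengths

print(package_merge([1, 1, 2, 4], D=3))    # [3, 3, 2, 1], the unconstrained Huffman lengths
print(package_merge([1, 1, 2, 4], D=2))    # [2, 2, 2, 2], forced by the length limit
```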

  20. A User-Friendly Code to Diagnose Chromospheric Plasmas

    CERN Document Server

    Ramos, A Asensio

    2007-01-01

    The physical interpretation of spectropolarimetric observations of lines of neutral helium, such as those of the 10830 A multiplet, represents an excellent opportunity for investigating the magnetism of plasma structures in the solar chromosphere. Here we present a powerful forward modeling and inversion code that permits either to calculate the emergent intensity and polarization for any given magnetic field vector or to infer the dynamical and magnetic properties from the observed Stokes profiles. This diagnostic tool is based on the quantum theory of spectral line polarization, which self-consistently accounts for the Hanle and Zeeman effects in the most general case of the incomplete Paschen-Back effect regime. We also take into account radiative transfer effects. An efficient numerical scheme based on global optimization methods has been applied. Our Stokes inversion code permits a fast and reliable determination of the global minimum.

  1. FARGO3D: A new GPU-oriented MHD code

    CERN Document Server

    Benítez-Llambay, Pablo

    2016-01-01

    We present the FARGO3D code, recently publicly released. It is a magnetohydrodynamics code developed with special emphasis on protoplanetary disk physics and planet-disk interactions, and parallelized with MPI. The hydrodynamics algorithms are based on finite difference upwind, dimensionally split methods. The magnetohydrodynamics algorithms consist of the constrained transport method to preserve the divergence-free property of the magnetic field to machine accuracy, coupled to a method of characteristics for the evaluation of electromotive forces and Lorentz forces. Orbital advection is implemented, and an N-body solver is included to simulate planets or stars interacting with the gas. We present our implementation in detail and present a number of widely known tests for comparison purposes. One strength of FARGO3D is that it can run on both Graphics Processing Units (GPUs) and Central Processing Units (CPUs), achieving a large speed-up with respect to CPU cores. We describe our implementation choices, whi...

  2. Code forking in open-source software: a requirements perspective

    CERN Document Server

    Ernst, Neil A; Mylopoulos, John

    2010-01-01

    To fork a project is to copy the existing code base and move in a direction different than that of the erstwhile project leadership. Forking provides a rapid way to address new requirements by adapting an existing solution. However, it can also create a plethora of similar tools, and fragment the developer community. Hence, it is not always clear whether forking is the right strategy. In this paper, we describe a mixed-methods exploratory case study that investigated the process of forking a project. The study concerned the forking of an open-source tool for managing software projects, Trac. Trac was forked to address differing requirements in an academic setting. The paper makes two contributions to our understanding of code forking. First, our exploratory study generated several theories about code forking in open source projects, for further research. Second, we investigated one of these theories in depth, via a quantitative study. We conjectured that the features of the OSS forking process would allow new...

  3. Finding Code Clones for Refactoring with Clone Metrics : A Case Study of Open Source Software

    OpenAIRE

    Choi, Eunjong; Yoshida, Norihiro; Ishio, Takashi; Inoue, Katsuro; Sano, Tateki

    2011-01-01

    A code clone is a code fragment that has identical or similar fragments elsewhere in the source code. Code clones have been regarded as one of the factors that make software maintenance more difficult. Therefore, refactoring code clones into one method is a promising way to reduce maintenance cost in the future. In our previous study, we proposed a method to extract code clones for refactoring using clone metrics. We had conducted an empirical study on a Java application developed by NEC Corporat...

  4. 76 FR 39039 - Establishment of a New Drug Code for Marihuana Extract

    Science.gov (United States)

    2011-07-05

    ... Enforcement Administration 21 CFR Part 1308 RIN 1117-AB33 Establishment of a New Drug Code for Marihuana... Controlled Substances Code Number (``Code Number'' or ``drug code'') under 21 CFR 1308.11 for ``Marihuana... material separately from quantities of marihuana. This in turn will aid in complying with relevant...

  5. Variation in clinical coding lists in UK general practice: a barrier to consistent data entry?

    Directory of Open Access Journals (Sweden)

    Tracy Waize

    2007-09-01

    Conclusions Current systems for clinical coding promote diversity rather than consistency of clinical coding. As the UK moves towards an integrated health IT system consistency of coding will become more important. A standardised, limited list of codes for primary care might help address this need.

  6. A User-Friendly Code to Diagnose Chromospheric Plasmas

    Science.gov (United States)

    Asensio Ramos, A.; Trujillo Bueno, J.

    2007-05-01

    The physical interpretation of spectropolarimetric observations of lines of neutral helium, such as those of the 10830 Å multiplet, represents an excellent opportunity for investigating the magnetism of plasma structures in the solar chromosphere. Here we present a powerful forward modeling and inversion code that permits either to calculate the emergent intensity and polarization for any given magnetic field vector or to infer the dynamical and magnetic properties from the observed Stokes profiles. This diagnostic tool is based on the quantum theory of spectral line polarization, which self-consistently accounts for the Hanle and Zeeman effects in the most general case of the incomplete Paschen-Back effect regime. We also take into account radiative transfer effects. An efficient numerical scheme based on global optimization methods has been applied. Our Stokes inversion code permits a fast and reliable determination of the global minimum.

  7. The genetic code as a periodic table: algebraic aspects.

    Science.gov (United States)

    Bashford, J D; Jarvis, P D

    2000-01-01

    The systematics of indices of physico-chemical properties of codons and amino acids across the genetic code are examined. Using a simple numerical labelling scheme for the nucleic acid bases, A=(-1,0), C=(0,-1), G=(0,1), U=(1,0), data can be fitted as low-order polynomials of the six coordinates in the 64-dimensional codon weight space. The work confirms and extends the recent studies by Siemion et al. (1995. BioSystems 36, 231-238) of the conformational parameters. Fundamental patterns in the data such as codon periodicities, and related harmonics and reflection symmetries, are here associated with the structure of the set of basis monomials chosen for fitting. Results are plotted using the Siemion one-step mutation ring scheme, and variants thereof. The connections between the present work, and recent studies of the genetic code structure using dynamical symmetry algebras, are pointed out.
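    The base labelling quoted above translates directly into code. Below is a small sketch mapping each of the 64 codons to its six coordinates (two per base), which is the representation the polynomial fits operate on; the fitting itself is not reproduced here.

```python
from itertools import product

# Numerical labels for the nucleic acid bases, as in the scheme quoted above.
BASE = {"A": (-1, 0), "C": (0, -1), "G": (0, 1), "U": (1, 0)}

def codon_coordinates(codon: str) -> tuple:
    """Six coordinates of a codon: the two base labels for each of its three positions."""
    return tuple(x for b in codon for x in BASE[b])

codons = ["".join(p) for p in product("ACGU", repeat=3)]   # all 64 codons
coords = {c: codon_coordinates(c) for c in codons}
print(coords["AUG"])    # (-1, 0, 1, 0, 0, 1)
```

    A physico-chemical property, such as a conformational parameter, can then be regressed on low-order monomials of these six coordinates, which is the kind of fit the abstract describes.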

  8. A Flexible Channel Coding Approach for Short-Length Codewords

    CERN Document Server

    Hernaez, Mikel; Del Ser, Javier

    2012-01-01

    This letter introduces a novel channel coding design framework for short-length codewords that permits balancing the tradeoff between the bit error rate floor and waterfall region by modifying a single real-valued parameter. The proposed approach is based on combining convolutional coding with a $q$-ary linear combination and unequal energy allocation, the latter being controlled by the aforementioned parameter. EXIT charts are used to shed light on the convergence characteristics of the associated iterative decoder, which is described in terms of factor graphs. Simulation results show that the proposed scheme is able to adjust its end-to-end error rate performance efficiently and easily, in contrast to previous approaches that require a full code redesign when the error rate requirements of the application change. Simulations also show that, at mid-range bit-error rates, there is a small performance penalty with respect to the previous approaches. However, the EXIT chart analysis and the simulation resul...

  9. A CLASS OF LDPC CODE'S CONSTRUCTION BASED ON AN ITERATIVE RANDOM METHOD

    Institute of Scientific and Technical Information of China (English)

    Huang Zhonghu; Shen Lianfeng

    2006-01-01

    This letter gives a random construction for Low Density Parity Check (LDPC) codes, which uses an iterative algorithm to avoid short cycles in the Tanner graph. The construction method offers great flexibility in the choice of LDPC code parameters, including code length, code rate, the least girth of the graph, and the column and row weights of the parity check matrix. The method can be applied to irregular LDPC codes and strictly regular LDPC codes. Systematic codes have many applications in digital communication, so this letter proposes a construction of the generator matrix of systematic LDPC codes from the parity check matrix. Simulations show that the method performs well with iterative decoding.
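    A minimal sketch of this kind of iterative random construction: columns of the parity check matrix are placed one at a time, and a placement is rejected whenever the new column would share two or more check nodes with an earlier column, which is exactly the condition for a length-4 cycle in the Tanner graph. Girth control beyond 4-cycles, rate targeting and the generator-matrix construction are omitted, and all parameter values are illustrative; this is not the letter's algorithm.

```python
import numpy as np

def random_ldpc_parity(n: int, m: int, col_weight: int,
                       max_tries: int = 1000, seed: int = 0) -> np.ndarray:
    """Random m x n parity-check matrix with fixed column weight and girth > 4."""
    rng = np.random.default_rng(seed)
    H = np.zeros((m, n), dtype=np.uint8)
    for col in range(n):
        for _ in range(max_tries):
            rows = rng.choice(m, size=col_weight, replace=False)
            # Overlap of the candidate column with every previously placed column.
            overlaps = H[rows, :col].sum(axis=0) if col else np.array([0])
            if overlaps.max() < 2:          # no pair of columns shares two checks
                H[rows, col] = 1
                break
        else:
            raise RuntimeError("could not avoid a 4-cycle; relax the parameters")
    return H

H = random_ldpc_parity(n=20, m=15, col_weight=3)
assert (H.sum(axis=0) == 3).all()           # regular column weight
```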

  10. A new neutron energy spectrum unfolding code using a two steps genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Shahabinejad, H., E-mail: shahabinejad1367@yahoo.com; Hosseini, S.A.; Sohrabpour, M.

    2016-03-01

    A new neutron spectrum unfolding code TGASU (Two-steps Genetic Algorithm Spectrum Unfolding) has been developed to unfold the neutron spectrum from a pulse height distribution which was calculated using the MCNPX-ESUT computational Monte Carlo code. To perform the unfolding process, the response matrices were generated using the MCNPX-ESUT computational code. Both a one-step (common) GA and a two-step GA have been implemented to unfold the neutron spectra. According to the obtained results, the new two-step GA code shows a closer match in all energy regions, and particularly in the high-energy regions. The results of the TGASU code have been compared with those of the standard spectra, the LSQR method and the GAMCD code. The results of the TGASU code have been demonstrated to be more accurate than those of the existing computational codes for both under-determined and over-determined problems.
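    TGASU itself is not listed in the record; as an illustration of the underlying idea, here is a toy one-step GA that searches for a non-negative spectrum phi minimizing the misfit between R·phi and the measured pulse-height distribution. The response matrix and data below are random placeholders, and the two-step refinement of the paper is not reproduced.

```python
import numpy as np

def ga_unfold(R, y, pop=100, gens=300, sigma=0.05, seed=0):
    """Toy one-step GA: non-negative spectrum phi with R @ phi close to y."""
    rng = np.random.default_rng(seed)
    n_bins = R.shape[1]
    population = rng.random((pop, n_bins))

    def fitness(P):
        return -np.linalg.norm(P @ R.T - y, axis=1)            # higher is better

    for _ in range(gens):
        f = fitness(population)
        parents = population[np.argsort(f)[::-1][:pop // 2]]   # truncation selection
        idx = rng.integers(0, parents.shape[0], size=(pop, 2))
        mask = rng.random((pop, n_bins)) < 0.5                 # uniform crossover
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        children += rng.normal(0.0, sigma, children.shape)     # Gaussian mutation
        population = np.clip(children, 0.0, None)              # keep spectrum >= 0
        population[0] = parents[0]                             # elitism
    return population[np.argmax(fitness(population))]

# Hypothetical 12-channel pulse-height response of an 8-bin spectrum.
rng = np.random.default_rng(1)
R = rng.random((12, 8))
true_phi = rng.random(8)
phi_hat = ga_unfold(R, R @ true_phi)
```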

  11. Holographic codes

    CERN Document Server

    Latorre, Jose I

    2015-01-01

    There exists a remarkable four-qutrit state that carries absolute maximal entanglement in all its partitions. Employing this state, we construct a tensor network that delivers a holographic many body state, the H-code, where the physical properties of the boundary determine those of the bulk. This H-code is made of an even superposition of states whose relative Hamming distances are exponentially large with the size of the boundary. This property makes H-codes natural states for a quantum memory. H-codes exist on tori of definite sizes and get classified in three different sectors characterized by the sum of their qutrits on cycles wrapped through the boundaries of the system. We construct a parent Hamiltonian for the H-code which is highly non local and finally we compute the topological entanglement entropy of the H-code.

  12. A Hydrochemical Hybrid Code for Astrophysical Problems. I. Code Verification and Benchmarks for Photon-Dominated Region (PDR)

    CERN Document Server

    Motoyama, Kazutaka; Shang, Hsien; Krasnopolsky, Ruben; Hasegawa, Tatsuhiko

    2015-01-01

    A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system, and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves non-equilibrium chemistry and the change of energy due to thermal processes, with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H$_2$ are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module, and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is...

  13. Starfinder a code for crowded stellar fields analysis

    CERN Document Server

    Diolaiti, E; Bonaccini, D; Close, L M; Currie, D; Parmeggiani, G

    1999-01-01

    Starfinder is an IDL code for the deep analysis of stellar fields, designed for well-sampled images with high and low Strehl factor. An important feature is represented by the possibility to measure the anisoplanatic effect in wide-field Adaptive Optics observations and exploit this knowledge to improve the analysis of the observed field. A description of the method and applications to real AO data are presented.

  14. An Efficient Attack on a Code-Based Signature Scheme

    OpenAIRE

    Phesso, Aurélie; Tillich, Jean-Pierre

    2016-01-01

    International audience; Baldi et al. have introduced in [BBC+13] a very novel code-based signature scheme. However, we prove here that some of the bits of the signatures are correlated in this scheme and this allows an attack that recovers enough of the underlying secret structure to forge new signatures. This cryptanalysis was performed on the parameters which were devised for 80 bits of security and broke them with 100,000 signatures originating from the same secret key.

  15. A Radiation Solver for the National Combustion Code

    Science.gov (United States)

    Sockol, Peter M.

    2015-01-01

    A methodology is given that converts an existing finite volume radiative transfer method that requires input of local absorption coefficients to one that can treat a mixture of combustion gases and compute the coefficients on the fly from the local mixture properties. The full-spectrum k-distribution method is used to transform the radiative transfer equation (RTE) to an alternate wave number variable, g. The coefficients in the transformed equation are calculated at discrete temperatures and participating species mole fractions that span the values of the problem for each value of g. These results are stored in a table and interpolation is used to find the coefficients at every cell in the field. Finally, the transformed RTE is solved for each g and Gaussian quadrature is used to find the radiant heat flux throughout the field. The present implementation is in an existing Cartesian/cylindrical grid radiative transfer code and the local mixture properties are given by a solution of the National Combustion Code (NCC) on the same grid. Based on this work, the intention is to apply this method to an existing unstructured grid radiation code which can then be coupled directly to NCC.

  16. Teaching and Learning Pharmaceutical Code of Ethics as a Syllabus

    Directory of Open Access Journals (Sweden)

    A Shafiee

    2008-06-01

    Full Text Available "nPharmacy, being a profession which its activities are directly related to the health and wellbeing of the people and soci­ety has been described an ethical profession from earliest time. In the recent decades there has been a shift in the phar­macist role from dispensing to relationship with patients and health care providers and interfere the therapeutic process. Other branches of pharmacy such as producers, distributors and etc. will certainly have the same responsibilities. In this respect, student of pharmacy, besides his professional education needs learning social, behavioral, communicational sciences as well as the principles code of pharmaceutical ethics. Therefore, teaching and learning principles code of ethics seems as an obli­gation. These principles are a guide to the standards of conduct. Furthermore, rapid progress of biotechnology, nanotech­nology and increase cost of new drugs are factors presented the importance of the study of eth­ics in pharmacy. Therefore, setting syllabus in pharmacy law and ethics is a need for undergraduate and even post­graduate students. The code, therefore attempts to define principles to be born in mind. It is the pharmacist who must interpret them in the light of pharmacy prac­tice.

  17. Evaluation of coded aperture radiation detectors using a Bayesian approach

    Science.gov (United States)

    Miller, Kyle; Huggins, Peter; Labov, Simon; Nelson, Karl; Dubrawski, Artur

    2016-12-01

    We investigate tradeoffs arising from the use of coded-aperture gamma-ray spectrometry to detect and localize sources of harmful radiation in the presence of noisy background. Using an example application scenario of area monitoring and search, we empirically evaluate weakly supervised spectral, spatial, and hybrid spatio-spectral algorithms for scoring individual observations, and two alternative methods of fusing evidence obtained from multiple observations. Results of our experiments confirm the intuition that directional information provided by spectrometers masked with a coded aperture enables gains in source-localization accuracy, but at the expense of reduced probability of detection. Losses in detection performance can, however, be reclaimed to a substantial extent by using our new spatial and spatio-spectral scoring methods, which rely on realistic assumptions regarding masking and its impact on measured photon distributions.

  18. DANTSYS: A diffusion accelerated neutral particle transport code system

    Energy Technology Data Exchange (ETDEWEB)

    Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O`Dell, R.D.; Walters, W.F.

    1995-06-01

    The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post-processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral-triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond-differencing method with set-to-zero fixup, with changes to accommodate the generalized spatial meshing.
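
    In very reduced form, the diamond-differencing scheme used by the ONEDANT-style solvers can be illustrated by a one-group, one-dimensional slab source iteration, as in the Python sketch below. This is a generic textbook implementation with assumed cross sections and boundary conditions, not DANTSYS code.

    ```python
    import numpy as np

    # One-group, 1-D slab, isotropic scattering: source iteration with
    # diamond differencing (textbook illustration with assumed data).
    nx, width = 100, 10.0                       # cells, slab width (cm)
    dx = width / nx
    sigma_t, sigma_s, q_ext = 1.0, 0.5, 1.0     # total, scattering, external source

    mu, w = np.polynomial.legendre.leggauss(8)  # S8 angular quadrature (weights sum to 2)
    phi = np.zeros(nx)

    for it in range(200):
        src = 0.5 * (sigma_s * phi + q_ext)     # isotropic emission density
        phi_new = np.zeros(nx)
        for m in range(len(mu)):
            a = abs(mu[m]) / dx
            psi_in = 0.0                        # vacuum boundary on the incoming side
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in cells:
                psi_out = (src[i] + psi_in * (a - 0.5 * sigma_t)) / (a + 0.5 * sigma_t)
                psi_cell = 0.5 * (psi_in + psi_out)   # diamond-difference relation
                phi_new[i] += w[m] * psi_cell
                psi_in = psi_out
        change = np.max(np.abs(phi_new - phi))
        phi = phi_new
        if change < 1e-6:
            break

    print(f"converged in {it + 1} source iterations; centre scalar flux = {phi[nx // 2]:.4f}")
    ```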

  19. 2D Implosion Simulations with a Kinetic Particle Code

    CERN Document Server

    Sagert, Irina; Strother, Terrance T

    2016-01-01

    We perform two-dimensional (2D) implosion simulations using a Monte Carlo kinetic particle code. The work is motivated by the importance of non-equilibrium effects in inertial confinement fusion (ICF) capsule implosions. These cannot be fully captured by hydrodynamic simulations, while kinetic methods, such as the one presented in this study, are able to describe continuum and rarefied regimes within one approach. In the past, our code has been verified via traditional shock wave and fluid instability simulations. In the present work, we focus on setups that are closer to applications in ICF. We perform simple 2D disk implosion simulations using one particle species. The obtained results are compared to simulations with the hydrodynamics code RAGE. In a first study, the implosions are powered by energy deposition in the outer layers of the disk. We test the impact of the particle mean free path and find that, while the width of the implosion shock broadens, its location as a function of time remains very similar. ...

  20. A CODE DESIGN CRITERIA FOR NOT FULLY CONNECTED CHANNEL

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    There are parallel channels that are not fully connected in practice, such as Frequency Division Multiplex (FDM or Orthogonal FDM) systems. Conventional space-time codes can be used for such parallel channels but are not optimal. Based on the derivation of a PEP expression for codes transmitted on parallel block-fading channels, design criteria for codes on not fully connected channels are proposed and compared with Tarokh's criteria for the fully connected channel. New codes for such channels are found by systematic and exhaustive search. Simulation results show that these codes offer better performance on parallel FDM channels than other known codes.

  1. A Scenario on the Stepwise Evolution of the Genetic Code

    Institute of Scientific and Technical Information of China (English)

    Jing-Fa Xiao; Jun Yu

    2007-01-01

    It is believed that in the RNA world the operational (ribozymes) and the informational (riboscripts) RNA molecules were created with only three (adenosine, uridine, and guanosine) and two (adenosine and uridine) nucleosides, respectively, so that the genetic code started uncomplicated. Ribozymes subsequently evolved to be able to cut and paste themselves and riboscripts were receptive to rigorous editing (adenosine to inosine); the intensive diversification of RNA molecules shaped novel cellular machineries capable of polymerizing amino acids, a new type of cellular building material for life. Initially, the genetic code, encoding seven amino acids, was created only to distinguish purine and pyrimidine; it was later expanded in a stepwise way to encode 12, 15, and 20 amino acids through the relief of guanine from its roles as operational signals and through the recruitment of cytosine. Therefore, the maturation of the genetic code also coincided with (1) the departure of aminoacyl-tRNA synthetases (AARSs) from the primordial translation machinery, (2) the replacement of informational RNA by DNA, and (3) the co-evolution of AARSs and their cognate tRNAs. This model predicts gradual replacements of RNA-made molecular mechanisms and cellular processes by proteins, and of informational exploitation by DNA.

  2. SULEC: Benchmarking a new ALE finite-element code

    Science.gov (United States)

    Buiter, S.; Ellis, S.

    2012-04-01

    We have developed a 2-D/3-D arbitrary Lagrangian-Eulerian (ALE) finite-element code, SULEC, based on known techniques from the literature. SULEC is successful in tackling many of the problems faced by numerical models of lithosphere and mantle processes, such as the combination of viscous, elastic, and plastic rheologies, the presence of a free surface, the contrast in viscosity between the lithosphere and the underlying asthenosphere, and the occurrence of large deformations including viscous flow and offset on shear zones. The aim of our presentation is (1) to describe SULEC, and (2) to present a set of analytical and numerical benchmarks that we use to continuously test our code. SULEC solves the incompressible momentum equation coupled with the energy equation. It uses a structured mesh built of quadrilateral or brick elements that can vary in size in all dimensions, allowing high resolution to be achieved where required. The elements are either linear in velocity with constant pressure, or quadratic in velocity with linear pressure. An accurate pressure field is obtained through an iterative penalty (Uzawa) formulation. Material properties are carried on tracer particles that are advected through the Eulerian mesh. Shear elasticity is implemented following the approach of Moresi et al. [J. Comp. Phys. 184, 2003], brittle materials deform following a Drucker-Prager criterion, and viscous flow is by temperature- and pressure-dependent power-law creep. The top boundary of our models is a true free surface (with free-surface stabilisation) on which simple surface-process models may be imposed. We use a set of benchmarks that test viscous, viscoelastic, elastic and plastic deformation, temperature advection and conduction, free-surface behaviour, and pressure computation. Part of our benchmark set is automated, allowing easy testing of new code versions. Examples include Poiseuille flow, Couette flow, Stokes flow, relaxation of viscous topography, viscous pure shear

  3. Porting of a serial molecular dynamics code on MIMD platforms

    Energy Technology Data Exchange (ETDEWEB)

    Celino, M. [ENEA Centro Ricerche Casaccia, S. Maria di Galeria, RM (Italy). HPCN Project

    1999-07-01

    A molecular dynamics (MD) code used for the study of atomistic models of metallic systems has been parallelized for MIMD (multiple instructions, multiple data) parallel platforms by means of the parallel virtual machine (PVM) message passing library. Since the parallelization implies modifications of the sequential algorithms, these are described from the point of view of statistical mechanical theory. Furthermore, the techniques and parallelization strategies utilized and the parallel MD code are described in detail. Benchmarks on several MIMD platforms (IBM SP1, SP2, Cray T3D, cluster of workstations) allow evaluation of the code's performance against the different characteristics of the parallel platforms.

  4. A Content-Centric Organization of the Genetic Code

    Institute of Scientific and Technical Information of China (English)

    Jun Yu

    2007-01-01

    The codon table for the canonical genetic code can be rearranged in such a way that the code is divided into four quarters and two halves according to the variability of their GC and purine contents, respectively. For prokaryotic genomes, when the genomic GC content increases, their amino acid contents tend to be restricted to the GC-rich quarter and the purine-content-insensitive half, where all codons are fourfold degenerate and relatively mutation-tolerant. Conversely, when the genomic GC content decreases, most of the codons retract to the AU-rich quarter and the purine-content-sensitive half; most of these codons not only encode physicochemically diversified amino acids but also change when a transversion (between purine and pyrimidine) happens. Amino acids with sixfold-degenerate codons are distributed into all four quarters and across the two halves; their fourfold-degenerate codons are all partitioned into the purine-insensitive half in favor of robustness against mutations. The features manifested in the rearranged codon table explain most of the intrinsic relationship between protein-coding sequences (the informational content) and amino acid compositions (the functional content). The renovated codon table is useful for predicting abundant amino acids and for positioning amino acids with related or distinct physicochemical properties.

  5. Exploration of Extreme Mass Ratio Inspirals with a Tree Code

    Science.gov (United States)

    Miller, Michael

    Extreme mass ratio inspirals (EMRIs), in which a stellar-mass object spirals into a supermassive black hole, are critical gravitational wave sources for the Laser Interferometer Space Antenna (LISA) because of their potential as precise probes of strong gravity. They are also thought to contribute to the flares observed in a few active galactic nuclei that have been attributed to tidal disruption of stars. There are, however, large uncertainties about the rates and properties of EMRIs. The reason is that their galactic nuclear environments contain millions of stars around a central massive object, and their paths must be integrated with great precision to include properly effects such as secular resonances, which accumulate over many orbits. Progress is being made on all fronts, but current numerical options are either profoundly computationally intensive (direct N-body integrators, which in addition do not currently have the needed long-term accuracy) or require special symmetry or other simplifications that may compromise the realism of the results (Monte Carlo and Fokker-Planck codes). We propose to undertake extensive simulations of EMRIs using tree codes that we have adapted to the problem. Tree codes are much faster than direct N-body simulations, yet they are powerful and flexible enough to include nonideal physics such as triaxiality, arbitrary mass spectra, post-Newtonian corrections, and secular evolutionary effects such as resonant relaxation and Kozai oscillations in the equations of motion. We propose to extend our codes to include these effects and to allow separate tracking of special particles that will represent binaries, thus allowing us to follow their interactions and evolution. In our development we will compare our results for a few tens of thousands of particles with a state-of-the-art direct N-body integrator, to evaluate the accuracy of our code and discern systematic effects. This will allow detailed yet fast examinations of large-N systems

  6. The "periodic table" of the genetic code: A new way to look at the code and the decoding process.

    Science.gov (United States)

    Komar, Anton A

    2016-01-01

    Henri Grosjean and Eric Westhof recently presented an information-rich, alternative view of the genetic code, which takes into account current knowledge of the decoding process, including the complex nature of interactions between mRNA, tRNA and rRNA that take place during protein synthesis on the ribosome, and it also better reflects the evolution of the code. The new asymmetrical circular genetic code has a number of advantages over the traditional codon table and the previous circular diagrams (with a symmetrical/clockwise arrangement of the U, C, A, G bases). Most importantly, all sequence co-variances can be visualized and explained based on the internal logic of the thermodynamics of codon-anticodon interactions.

  7. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. It includes two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding; Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes; distance properties of convolutional codes; and a downloadable solutions manual.

  8. Numerical simulations of hydrodynamic instabilities: perturbation codes Pansy, Perle, and 2D code Chic applied to a realistic LIL target

    Energy Technology Data Exchange (ETDEWEB)

    Hallo, L.; Olazabal-Loume, M.; Maire, P.H.; Breil, J.; Schurtz, G. [CELIA, 33 - Talence (France); Morse, R.L. [Arizona Univ., Dept. of Nuclear Engineering, Tucson (United States)

    2006-06-15

    This paper deals with simulations of ablation-front instabilities in the context of direct-drive inertial confinement fusion. A simplified deuterium-tritium target, representative of a realistic target on LIL (the laser integration line at the Megajoule laser facility), is considered. We describe here two numerical approaches: the linear perturbation method using the perturbation codes Perle (planar) and Pansy (spherical), and the direct simulation method using our two-dimensional hydrodynamic code Chic. Our work shows good behaviour of all methods, even for large wavenumbers, during the acceleration phase of the ablation front. We also point out good agreement between model and numerical predictions at the ablation front during the shock wave transit.

  9. Q&A From ASCO's Coding and Reimbursement Hotline

    OpenAIRE

    2005-01-01

    This column provides oncology practitioners and their staff with important information about reimbursement, coding, coverage, and regulatory policies. Questions for future issues can be submitted by calling the coding and reimbursement hotline at 703-299-1050.

  10. PORPST: A statistical postprocessor for the PORMC computer code

    Energy Technology Data Exchange (ETDEWEB)

    Eslinger, P.W.; Didier, B.T. (Pacific Northwest Lab., Richland, WA (United States))

    1991-06-01

    This report describes the theory underlying the PORPST code and gives details for using the code. The PORPST code is designed to do statistical postprocessing on files written by the PORMC computer code. The data written by PORMC are summarized in terms of means, variances, standard deviations, or statistical distributions. In addition, the PORPST code provides for plotting of the results, either internal to the code or through use of the CONTOUR3 postprocessor. Section 2.0 discusses the mathematical basis of the code, and Section 3.0 discusses the code structure. Section 4.0 describes the free-format point command language. Section 5.0 describes in detail the commands to run the program. Section 6.0 provides an example program run, and Section 7.0 provides the references. 11 refs., 1 fig., 17 tabs.

  11. A robust CELP coder with source-dependent channel coding

    Science.gov (United States)

    Sukkar, Rafid A.; Kleijn, W. Bastiaan

    1990-01-01

    A CELP coder using Source-Dependent Channel Encoding (SDCE) for optimal channel error protection is introduced. With SDCE, each of the CELP parameters is encoded by minimizing a perceptually meaningful error criterion under prevalent channel conditions. Unlike conventional channel coding schemes, SDCE allows for an optimal balance between error detection and correction. The experimental results show that the CELP system is robust under various channel bit error rates and displays a graceful degradation in SSNR as the channel error rate increases. This is a desirable property for a coder, since the exact channel conditions cannot usually be specified a priori.

  12. A primer on physical-layer network coding

    CERN Document Server

    Liew, Soung Chang; Zhang, Shengli

    2015-01-01

    The concept of physical-layer network coding (PNC) was proposed in 2006 for application in wireless networks. Since then it has developed into a subfield of communications and networking with a wide following. This book is a primer on PNC. It is the outcome of a set of lecture notes for a course for beginning graduate students at The Chinese University of Hong Kong. The target audience is expected to have some prior background knowledge in communication theory and wireless communications, but not working knowledge at the research level. Indeed, a goal of this book/course is to allow the reader

  13. NOVEL BIPHASE CODE -INTEGRATED SIDELOBE SUPPRESSION CODE

    Institute of Scientific and Technical Information of China (English)

    Wang Feixue; Ou Gang; Zhuang Zhaowen

    2004-01-01

    A kind of novel binary phase code, named the sidelobe suppression code, is proposed in this paper. It is defined as the code whose corresponding optimal sidelobe suppression filter outputs the minimum sidelobes. It is shown that there do exist sidelobe suppression codes better than the conventional optimal codes, the Barker codes. For example, the sidelobe suppression code of length 11 with a filter of length 39 achieves a sidelobe level up to 17 dB better than that of the Barker code with the same code length and filter length.
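
    For context, the sketch below computes the matched-filter (autocorrelation) peak sidelobe level of the length-11 Barker code and of a simple least-squares mismatched filter of length 39. The least-squares design is a generic illustration of sidelobe suppression filtering, not the specific optimal filter construction of the paper.

    ```python
    import numpy as np

    barker11 = np.array([1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1], dtype=float)
    N, L = len(barker11), 39

    # Matched filter: aperiodic autocorrelation of the code.
    acf = np.convolve(barker11, barker11[::-1])
    psl_matched = 20 * np.log10(np.max(np.abs(np.delete(acf, N - 1))) / acf[N - 1])

    # Least-squares mismatched filter of length L: make conv(code, h) close to a
    # delta at the centre of the (N + L - 1)-long output.
    out_len = N + L - 1
    A = np.zeros((out_len, L))
    for j in range(L):
        A[j:j + N, j] = barker11           # column j holds the code shifted by j
    d = np.zeros(out_len)
    d[out_len // 2] = 1.0
    h, *_ = np.linalg.lstsq(A, d, rcond=None)

    y = A @ h
    peak = np.abs(y[out_len // 2])
    sidelobes = np.delete(np.abs(y), out_len // 2)
    psl_mismatched = 20 * np.log10(np.max(sidelobes) / peak)

    print(f"matched-filter PSL:    {psl_matched:.1f} dB")
    print(f"mismatched-filter PSL: {psl_mismatched:.1f} dB")
    ```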

  14. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    Science.gov (United States)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the performance of the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system in the cases of quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge the frequency and phase offset estimation ranges and greatly enhance the accuracy of the system, and the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.

  15. A Framework for Reverse Engineering Large C++ Code Bases

    NARCIS (Netherlands)

    Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian

    2008-01-01

    When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as: architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such ways so that one can correlate them and fi

  16. A Framework for Reverse Engineering Large C++ Code Bases

    NARCIS (Netherlands)

    Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian

    2009-01-01

    When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as: architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such ways so that one can correlate them and fi

  17. Development of a predictive code for aircrew radiation exposure.

    Science.gov (United States)

    McCall, M J; Lemay, F; Bean, M R; Lewis, B J; Bennett, L G I

    2009-10-01

    Using the empirical data measured by the Royal Military College with a tissue equivalent proportional counter, a model was derived to allow for the interpolation of the dose rate for any global position, altitude and date. Through integration of the dose-rate function over a great circle flight path or between various waypoints, a Predictive Code for Aircrew Radiation Exposure (PCAire) was further developed to provide an estimate of the total dose equivalent on any route worldwide at any period in the solar cycle.
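
    The route-dose idea can be sketched as follows: given some dose-rate function of position, altitude and date, integrate it along a great-circle path between two waypoints. In the Python sketch below, the dose_rate function is a hypothetical stand-in for the interpolated empirical model; it is not the PCAire code, and all numerical values are illustrative assumptions.

    ```python
    import numpy as np

    EARTH_RADIUS_KM = 6371.0

    def dose_rate(lat_deg, lon_deg, alt_km, date_frac):
        """Hypothetical dose-rate model (uSv/h); higher at altitude and toward the poles."""
        return 0.03 * np.exp(alt_km / 7.0) * (1.0 + 2.0 * abs(np.sin(np.radians(lat_deg))))

    def to_xyz(lat, lon):
        lat, lon = np.radians(lat), np.radians(lon)
        return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

    def great_circle_points(p1, p2, n):
        """n points along the great circle between two (lat, lon) pairs, plus its length (km)."""
        a, b = to_xyz(*p1), to_xyz(*p2)
        omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
        pts = []
        for t in np.linspace(0.0, 1.0, n):
            v = (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
            pts.append((np.degrees(np.arcsin(np.clip(v[2], -1.0, 1.0))),
                        np.degrees(np.arctan2(v[1], v[0]))))
        return pts, omega * EARTH_RADIUS_KM

    # Example: a flight at 11 km cruise altitude at a constant ground speed.
    waypoints, dist_km = great_circle_points((51.5, -0.5), (40.6, -73.8), 200)
    speed_kmh = 850.0
    dt_h = (dist_km / speed_kmh) / (len(waypoints) - 1)
    route_dose = sum(dose_rate(lat, lon, 11.0, 0.5) * dt_h for lat, lon in waypoints[:-1])
    print(f"route length {dist_km:.0f} km, estimated route dose {route_dose:.1f} uSv")
    ```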

  18. Vision Aided Inertial Navigation System Augmented with a Coded Aperture

    Science.gov (United States)

    2011-03-24

    The work augments a vision-aided inertial navigation system with a coded aperture. A green filter is added in front of the lens so that only green light passes through the lens system, reducing chromatic aberration, and the point spread function of the coded-aperture configurations is measured. (The remaining text of this excerpt is residue from a table of symbols relating pointing vectors, pixel-plane coordinates, and an aberration term.)

  19. A LONE code for the sparse control of quantum systems

    Science.gov (United States)

    Ciaramella, G.; Borzì, A.

    2016-03-01

    In many applications with quantum spin systems, control functions with a sparse and pulse-shaped structure are often required. These controls can be obtained by solving quantum optimal control problems with L1-penalized cost functionals. In this paper, the MATLAB package LONE is presented, aimed at solving L1-penalized optimal control problems governed by unitary-operator quantum spin models. This package implements a new strategy that includes a globalized semi-smooth Krylov-Newton scheme and a continuation procedure. Results of numerical experiments demonstrate the ability of the LONE code to compute accurate sparse optimal control solutions.
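
    To make concrete why an L1 penalty produces sparse, pulse-shaped controls, the sketch below applies a plain proximal-gradient (ISTA) iteration with soft-thresholding to a small discretized linear problem. This is a generic illustration under assumed data; it is not the semi-smooth Krylov-Newton scheme implemented in LONE, and all names and parameter values are hypothetical.

    ```python
    import numpy as np

    # Toy problem: steer the terminal state of x' = A x + B u toward a target
    # while penalizing ||u||_1 (generic illustration, assumed data).
    nt, dt = 200, 0.01
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    x_target = np.array([1.0, 0.0])
    alpha = 1e-3                        # L1 penalty weight

    # With explicit Euler, the terminal state is linear in the control samples.
    Phi = np.eye(2) + dt * A
    M = np.zeros((2, nt))
    for k in range(nt):
        M[:, k] = (np.linalg.matrix_power(Phi, nt - 1 - k) @ (dt * B)).ravel()

    def soft_threshold(v, tau):
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    u = np.zeros(nt)
    step = 1.0 / np.linalg.norm(M.T @ M, 2)   # 1 / Lipschitz constant of the gradient
    for _ in range(2000):                      # ISTA iterations
        grad = M.T @ (M @ u - x_target)
        u = soft_threshold(u - step * grad, step * alpha)

    print(f"nonzero control samples: {np.count_nonzero(np.abs(u) > 1e-8)} of {nt}, "
          f"terminal miss: {np.linalg.norm(M @ u - x_target):.3f}")
    ```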

  20. A portable virtual machine target for proof-carrying code

    DEFF Research Database (Denmark)

    Franz, Michael; Chandra, Deepak; Gal, Andreas;

    2005-01-01

    Virtual Machines (VMs) and Proof-Carrying Code (PCC) are two techniques that have been used independently to provide safety for (mobile) code. Existing virtual machines, such as the Java VM, have several drawbacks: First, the effort required for safety verification is considerable. Second, and more subtly, the need to provide such verification by the code consumer inhibits the amount of optimization that can be performed by the code producer. This in turn makes just-in-time compilation surprisingly expensive. Proof-Carrying Code, on the other hand, has its own set of limitations, among which...

  1. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    DEFF Research Database (Denmark)

    Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael

    2015-01-01

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data...

  2. On Cascade Source Coding with A Side Information "Vending Machine"

    CERN Document Server

    Ahmadi, Behzad; Choudhuri, Chiranjib; Mitra, Urbashi

    2012-01-01

    The model of a side information "vending machine" accounts for scenarios in which acquiring side information is costly and thus should be done efficiently. In this paper, the three-node cascade source coding problem is studied under the assumption that a side information vending machine is available either at the intermediate or at the end node. In both cases, a single-letter characterization of the available trade-offs among the rate, the distortions in the reconstructions at the intermediate and at the end node, and the cost in acquiring the side information are derived under given conditions.

  3. A Feminist Reading of Dan Brown's The Da Vinci Code

    OpenAIRE

    Haouam, Mohamed Nadjib

    2016-01-01

    This work deals with the representation of women in The Da Vinci Code. The points tackled within this work are the ways the writer uses to portray women. These women are presented as empowered, strong characters, depicted either as educated members of society or as holders of important positions, such as the keeper of an important institution like a church, or an agent of the DCPJ, a job in which intelligence is a prerequisite. There are many symbols, like the Mona Lisa, that are u...

  4. Surface code error correction on a defective lattice

    Science.gov (United States)

    Nagayama, Shota; Fowler, Austin G.; Horsman, Dominic; Devitt, Simon J.; Van Meter, Rodney

    2017-02-01

    The yield of physical qubits fabricated in the laboratory is much lower than that of classical transistors in production semiconductor fabrication. Actual implementations of quantum computers will be susceptible to loss in the form of physically faulty qubits. Though these physical faults must negatively affect the computation, we can deal with them by adapting error-correction schemes. In this paper we have simulated statically placed single-fault lattices and lattices with randomly placed faults at functional qubit yields of 80%, 90%, and 95%, showing the practical performance of a defective surface code by employing actual circuit constructions and realistic errors on every gate, including identity gates. We extend Stace et al.'s superplaquette solution against dynamic losses for the surface code to handle static losses such as physically faulty qubits [1]. The single-fault analysis shows that a static loss at the periphery of the lattice has a less negative effect than a static loss at the center. The randomly-faulty analysis shows that 95% yield is good enough to build a large-scale quantum computer. The local gate error rate threshold is ∼0.3%, and a code distance of seven suppresses the residual error rate below the original error rate at p = 0.1%. 90% yield is also good enough when we discard badly fabricated quantum computation chips, while 80% yield does not show enough error suppression even when discarding 90% of the chips. We evaluated several metrics for predicting chip performance, and found that the average of the product of the number of data qubits and the cycle time of a stabilizer measurement gave the strongest correlation with logical error rates. Our analysis will help with selecting usable quantum computation chips from among the pool of all fabricated chips.

  5. A new class of codes for Boolean masking of cryptographic computations

    CERN Document Server

    Carlet, Claude; Kim, Jon-Lark; Solé, Patrick

    2011-01-01

    We introduce a new class of rate-one-half binary codes: complementary information set codes. A binary linear code of length 2n and dimension n is called a complementary information set code (CIS code for short) if it has two disjoint information sets. This class of codes contains self-dual codes as a subclass. It is connected to graph correlation-immune Boolean functions of use in the security of hardware implementations of cryptographic primitives. Such codes make it possible to reduce the cost of masking cryptographic algorithms against side-channel attacks. In this paper we investigate this new class of codes: we give optimal or best-known CIS codes of length < 132. We derive general constructions based on cyclic codes and on double circulant codes. We derive a Varshamov-Gilbert bound for long CIS codes, and show that they can all be classified in small lengths ≤ 12 by the building-up construction. Some nonlinear S-boxes are constructed by using Z4-codes, based on the notion of dual distance of an unrestricte...

  6. NMACA Approach Used to Build a Secure Message Authentication Code

    Science.gov (United States)

    Alosaimy, Raed; Alghathbar, Khaled; Hafez, Alaaeldin M.; Eldefrawy, Mohamed H.

    Secure storage systems should consider the integrity and authentication of long-term stored information. When information is transferred through communication channels, different types of digital information can be represented, such as documents, images, and database tables. The authenticity of such information must be verified, especially when it is transferred through communication channels. Authentication verification techniques are used to verify that the information in an archive is authentic and has not been intentionally or maliciously altered. In addition to detecting malicious attacks, verifying the integrity also identifies data corruption. The purpose of Message Authentication Code (MAC) is to authenticate messages, where MAC algorithms are keyed hash functions. In most cases, MAC techniques use iterated hash functions, and these techniques are called iterated MACs. Such techniques usually use a MAC key as an input to the compression function, and this key is involved in the compression function, f, at every stage. Modification detection codes (MDCs) are un-keyed hash functions, and are widely used by authentication techniques such as MD4, MD5, SHA-1, and RIPEMD-160. There have been new attacks on hash functions such as MD5 and SHA-1, which requires the introduction of more secure hash functions. In this paper, we introduce a new MAC methodology that uses an input MAC key in the compression function, to change the order of the message words and shifting operation in the compression function. The new methodology can be used in conjunction with a wide range of modification detection code techniques. Using the SHA-1 algorithm as a model, a new (SHA-1)-MAC algorithm is presented. The (SHA-1)-MAC algorithm uses the MAC key to build the hash functions by defining the order for accessing source words and defining the number of bit positions for circular left shifts.
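
    For comparison with the keyed iterated-MAC constructions discussed above, the snippet below shows the standard HMAC construction over SHA-1 using Python's built-in hmac module. It is not an implementation of the NMACA/(SHA-1)-MAC algorithm proposed in the paper; it only illustrates the baseline pattern of computing and verifying a MAC tag.

    ```python
    import hmac
    import hashlib

    key = b"a-long-random-secret-key"
    message = b"archived record to be authenticated"

    # Standard HMAC-SHA1: the key is mixed into the iterated hash via the
    # inner/outer padded keys (contrast with NMACA, which uses the key to
    # reorder message words inside the compression function).
    tag = hmac.new(key, message, hashlib.sha1).hexdigest()
    print("HMAC-SHA1 tag:", tag)

    # Verification should use a constant-time comparison.
    received_tag = tag
    ok = hmac.compare_digest(hmac.new(key, message, hashlib.sha1).hexdigest(), received_tag)
    print("authentic:", ok)
    ```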

  7. A FAST PARAMETER ESTIMATION ALGORITHM FOR POLYPHASE CODED CW SIGNALS

    Institute of Scientific and Technical Information of China (English)

    Li Hong; Qin Yuliang; Wang Hongqiang; Li Yanpeng; Li Xiang

    2011-01-01

    A fast parameter estimation algorithm is discussed for a polyphase coded Continuous Waveform (CW) signal in Additive White Gaussian Noise (AWGN). The proposed estimator is based on the sum of the modulus square of the ambiguity function at the different Doppler shifts. An iterative refinement stage is proposed to avoid the effect of the spurious peaks that arise when the summation length of the estimator exceeds the subcode duration. The theoretical variance of the subcode rate estimate is derived. The Monte-Carlo simulation results show that the proposed estimator is highly accurate and effective at moderate Signal-to-Noise Ratio (SNR).

  8. pyro: A teaching code for computational astrophysical hydrodynamics

    CERN Document Server

    Zingale, Michael

    2013-01-01

    We describe pyro: a simple, freely-available code to aid students in learning the computational hydrodynamics methods widely used in astrophysics. pyro is written with simplicity and learning in mind and intended to allow students to experiment with various methods popular in the field, including those for advection, compressible and incompressible hydrodynamics, multigrid, and diffusion in a finite-volume framework. We show some of the test problems from pyro, describe its design philosophy, and suggest extensions for students to build their understanding of these methods.

  9. pyro: A teaching code for computational astrophysical hydrodynamics

    Science.gov (United States)

    Zingale, M.

    2014-10-01

    We describe pyro: a simple, freely-available code to aid students in learning the computational hydrodynamics methods widely used in astrophysics. pyro is written with simplicity and learning in mind and intended to allow students to experiment with various methods popular in the field, including those for advection, compressible and incompressible hydrodynamics, multigrid, and diffusion in a finite-volume framework. We show some of the test problems from pyro, describe its design philosophy, and suggest extensions for students to build their understanding of these methods.
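
    To give a flavor of the finite-volume methods pyro is meant to teach, the sketch below advects a profile with a first-order upwind scheme on a periodic 1-D grid. It is a generic teaching example and does not use pyro's own modules.

    ```python
    import numpy as np

    # 1-D linear advection u_t + a u_x = 0 on a periodic domain, solved with a
    # first-order upwind finite-volume update.
    nx, a, cfl = 200, 1.0, 0.8
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    u = np.exp(-200.0 * (x - 0.3) ** 2)        # initial Gaussian profile
    dt = cfl * dx / abs(a)

    t, t_end = 0.0, 1.0                         # one full period for a = 1 on [0, 1)
    while t < t_end:
        step = min(dt, t_end - t)
        # upwind interface flux F_{i-1/2}: taken from the cell the wind blows from
        flux = a * np.roll(u, 1) if a > 0 else a * u
        # conservative update: u_i -= dt/dx * (F_{i+1/2} - F_{i-1/2})
        u = u - step / dx * (np.roll(flux, -1) - flux)
        t += step

    print(f"peak after one period: {u.max():.3f} (below 1.0 due to upwind diffusion)")
    ```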

  10. On the Feasibility of a Network Coded Mobile Storage Cloud

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2015-01-01

    Conventional cloud storage services offer relatively good reliability and performance in a cost-effective manner. However, they are typically structured in a centralized and highly controlled fashion. In more dynamic storage scenarios, these centralized approaches are infeasible and developing... decentralized storage approaches becomes critical. The novelty of this paper is the introduction of the highly dynamic distributed mobile cloud, which uses free resources on user devices to move storage to the edges of the network. At the core of our approach lies the use of random linear network coding... to simulate the processes governing user behavior to show the feasibility of mobile storage clouds in real scenarios.
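
    Random linear network coding, the core primitive referred to above, can be illustrated over GF(2): original packets are combined with random binary coefficients, and any set of coded packets whose coefficient matrix has full rank can be decoded by Gaussian elimination. The sketch below is a generic illustration, not the storage-cloud protocol of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    k, nbits = 4, 64                       # number of original packets, bits per packet
    data = rng.integers(0, 2, size=(k, nbits), dtype=np.uint8)

    def encode_one():
        """One coded packet: a random GF(2) combination of the original packets."""
        coeff = rng.integers(0, 2, size=(1, k), dtype=np.uint8)
        return coeff, coeff @ data % 2

    def decode(coeffs, payloads):
        """Gauss-Jordan elimination over GF(2); returns the originals, or None if rank < k."""
        aug = (np.concatenate([coeffs, payloads], axis=1) % 2).astype(np.uint8)
        row = 0
        for col in range(k):
            pivot = next((r for r in range(row, len(aug)) if aug[r, col]), None)
            if pivot is None:
                return None                # not yet enough independent packets
            aug[[row, pivot]] = aug[[pivot, row]]
            for r in range(len(aug)):
                if r != row and aug[r, col]:
                    aug[r] ^= aug[row]
            row += 1
        return aug[:k, k:]

    # Keep collecting coded packets until the coefficient matrix has full rank.
    coeffs = np.empty((0, k), dtype=np.uint8)
    payloads = np.empty((0, nbits), dtype=np.uint8)
    recovered = None
    while recovered is None:
        c, p = encode_one()
        coeffs, payloads = np.vstack([coeffs, c]), np.vstack([payloads, p])
        recovered = decode(coeffs, payloads)

    print(f"decoded from {len(coeffs)} coded packets:", bool(np.array_equal(recovered, data)))
    ```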

  11. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Wei-Qun; /KIPAC, Menlo Park; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.

  12. BOA, Beam Optics Analyzer A Particle-In-Cell Code

    Energy Technology Data Exchange (ETDEWEB)

    Thuc Bui

    2007-12-06

    The program was tasked with implementing time-dependent analysis of charged particles in an existing finite element code with adaptive meshing, called Beam Optics Analyzer (BOA). BOA was initially funded by a DOE Phase II program to use the finite element method with adaptive meshing to track particles in unstructured meshes. It uses modern programming techniques and state-of-the-art data structures, so that new methods, features and capabilities are easily added and maintained. This Phase II program was funded to implement plasma simulations in BOA and extend its capabilities to model thermal electrons, secondary emissions and self magnetic fields, and to implement more comprehensive post-processing and a feature-rich GUI. The program was successful in implementing thermal electrons, secondary emissions, and self magnetic field calculations. The BOA GUI was also upgraded significantly, and CCR is receiving interest from the microwave tube and semiconductor equipment industry for the code. Implementation of PIC analysis was partially successful. Computational resource requirements for modeling more than 2000 particles begin to exceed the capability of most readily available computers. Modern plasma analysis typically requires modeling of approximately 2 million particles or more. The problem is that tracking many particles in an unstructured mesh that is adapting becomes inefficient. In particular, memory requirements become excessive. This probably makes particle tracking in unstructured meshes currently infeasible with commonly available computer resources. Consequently, Calabazas Creek Research, Inc. is exploring hybrid codes where the electromagnetic fields are solved on the unstructured, adaptive mesh while particles are tracked on a fixed mesh. Efficient interpolation routines should be able to transfer information between nodes of the two meshes. If successfully developed, this could provide high accuracy and reasonable computational efficiency.

  13. Strong Trinucleotide Circular Codes

    Directory of Open Access Journals (Sweden)

    Christian J. Michel

    2011-01-01

    Full Text Available Recently, we identified a hierarchy relation between trinucleotide comma-free codes and trinucleotide circular codes (see our previous works). Here, we extend our hierarchy with two new classes of codes, called DLD and LDL codes, which are stronger than the comma-free codes. We also prove that no circular code with 20 trinucleotides is a DLD code and that a circular code with 20 trinucleotides is comma-free if and only if it is an LDL code. Finally, we point out the possible role of the symmetric group Σ4 in the mathematical study of trinucleotide circular codes.

  14. BERTHA: A versatile transmission line and circuit code

    Science.gov (United States)

    Hinshelwood, D. D.

    1983-11-01

    An improved version of the NRL transmission line code of W. H. Lupton is presented. The capabilities of the original program were extended to allow magnetically insulated transmission lines, plasma opening switches, imploding plasma loads and discrete element electrical networks, for example, to be modeled. BERTHA is used to simulate any system that is represented by a configuration of transmission line elements. The electrical behavior of the system is calculated by repeatedly summing the reflected and transmitted waves at the ends of each element. This program is versatile, easy to use and easily implemented on desktop microcomputers.
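
    The wave-summing operation described above can be reduced to its simplest element: at a junction between line segments of impedance Z1 and Z2, an incident wave splits into reflected and transmitted parts with the usual coefficients, and terminations are just junctions into the load impedance. The sketch below is only a conceptual illustration of that elementary step, not BERTHA itself.

    ```python
    # Reflection/transmission of a voltage wave at the junction between two line
    # segments: the elementary operation a wave-summing code repeats at the ends
    # of every element each time step.
    def junction(v_incident, z_from, z_to):
        """Split an incident voltage wave at an impedance discontinuity."""
        gamma = (z_to - z_from) / (z_to + z_from)
        return gamma * v_incident, (1.0 + gamma) * v_incident   # reflected, transmitted

    v_in = 1.0   # unit voltage wave travelling on a 50-ohm segment
    refl, trans = junction(v_in, z_from=50.0, z_to=100.0)
    print(f"50 -> 100 ohm junction: reflected {refl:+.3f} V, transmitted {trans:+.3f} V")

    # Line terminations are just junctions into the load impedance.
    for name, z_load in [("matched 50-ohm load", 50.0), ("open circuit", 1e12)]:
        refl, _ = junction(v_in, 50.0, z_load)
        print(f"{name}: reflected wave {refl:+.3f} V")
    ```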

  15. Polar Codes

    Science.gov (United States)

    2014-12-01

    This report, prepared under the authority of C. A. Wilgenbusch, Head, ISR Division, describes the results of the project "More reliable wireless...". It introduces polar codes, a class of forward error correction (FEC) codes proposed by E. Arikan in [1], and reviews the capacity of the binary symmetric channel (BSC), the additive white Gaussian noise (AWGN) channel, and QPSK Gaussian channels.

  16. Recommendations for computer code selection of a flow and transport code to be used in undisturbed vadose zone calculations for TWRS immobilized environmental analyses

    Energy Technology Data Exchange (ETDEWEB)

    VOOGD, J.A.

    1999-04-19

    An analysis of three software proposals is performed to recommend a computer code for immobilized low-activity waste flow and transport modeling. The document uses criteria established in HNF-1839, "Computer Code Selection Criteria for Flow and Transport Codes to be Used in Undisturbed Vadose Zone Calculation for TWRS Environmental Analyses," as the basis for this analysis.

  17. Is a genome a codeword of an error-correcting code?

    Directory of Open Access Journals (Sweden)

    Luzinete C B Faria

    Full Text Available Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
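
    The kind of membership test the authors perform can be illustrated with the binary [7,4] Hamming code: a length-7 binary word is a codeword exactly when its syndrome with respect to the parity-check matrix is zero. In the sketch below, the nucleotide-to-bit labeling and the example sequence are arbitrary illustrative choices, not the mapping or data used in the paper.

    ```python
    import numpy as np

    # Parity-check matrix of the [7,4] Hamming code (column j is j in binary).
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

    def is_codeword(bits):
        """A word is a codeword iff its syndrome H * c^T is zero (mod 2)."""
        return not np.any(H @ np.asarray(bits, dtype=np.uint8) % 2)

    # Arbitrary illustrative nucleotide-to-bit labeling (one bit per base here,
    # purely to show the mechanics of a membership test on sequence windows).
    base_to_bit = {"A": 0, "C": 1, "G": 0, "T": 1}
    sequence = "ACGTTTTTTTGCA"

    for start in range(len(sequence) - 6):
        window = sequence[start:start + 7]
        bits = [base_to_bit[b] for b in window]
        if is_codeword(bits):
            print(f"window {window} at position {start} is a Hamming(7,4) codeword")
    ```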

  18. A memristive spiking neuron with firing rate coding

    Directory of Open Access Journals (Sweden)

    Marina Ignatov

    2015-10-01

    Full Text Available Perception, decisions, and sensations are all encoded into trains of action potentials in the brain. The relation between stimulus strength and the all-or-nothing spiking of neurons is widely believed to be the basis of this coding. This initiated the development of spiking neuron models, one of today's most powerful conceptual tools for the analysis and emulation of neural dynamics. The success of electronic circuit models and their physical realization within silicon field-effect transistor circuits led to elegant technical approaches. Recently, the spectrum of electronic devices for neural computing has been extended by memristive devices, mainly used to emulate static synaptic functionality. Their capabilities for emulation of neural activity were recently demonstrated using a memristive neuristor circuit, while a memristive neuron circuit has so far been elusive. Here, a spiking neuron model is experimentally realized in a compact circuit comprising memristive and memcapacitive devices based on the strongly correlated electron material vanadium dioxide (VO2) and on the chemical electromigration cell Ag/TiO2-x/Al. The circuit can emulate dynamical spiking patterns in response to an external stimulus, including adaptation, which is at the heart of firing-rate coding as first observed by E.D. Adrian in 1926.

  19. A Network Coding Based Routing Protocol for Underwater Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xin Guan

    2012-04-01

    Full Text Available Due to the particularities of the underwater environment, several negative factors seriously interfere with data transmission rates, reliability of data communication, communication range, network throughput and energy consumption in underwater sensor networks (UWSNs). Thus, full consideration of node energy savings while maintaining quick, correct and effective data transmission and extending the network life cycle is essential when routing protocols for underwater sensor networks are studied. In this paper, we propose a novel routing algorithm for UWSNs. To increase energy consumption efficiency and extend network lifetime, we propose a time-slot based routing algorithm (TSR). We designed a probability balanced mechanism and applied it to TSR. The theory of network coding is introduced to TSBR to meet the requirement of further reducing node energy consumption and extending network lifetime. Hence, time-slot based balanced network coding (TSBNC) comes into being. We evaluated the proposed time-slot based balancing routing algorithm and compared it with other classical underwater routing protocols. The simulation results show that the proposed protocol can reduce the probability of node conflicts, shorten the process of routing construction, balance the energy consumption of each node and effectively prolong the network lifetime.

  20. Composing Data Parallel Code for a SPARQL Graph Engine

    Energy Technology Data Exchange (ETDEWEB)

    Castellana, Vito G.; Tumeo, Antonino; Villa, Oreste; Haglin, David J.; Feo, John

    2013-09-08

    Big data analytics processes large amounts of data to extract knowledge from them. Semantic databases are big data applications that adopt the Resource Description Framework (RDF) to structure metadata through a graph-based representation. The graph-based representation provides several benefits, such as the possibility to perform in-memory processing with large amounts of parallelism. SPARQL is a language used to perform queries on RDF-structured data through graph matching. In this paper we present a tool that automatically translates SPARQL queries to parallel graph crawling and graph matching operations. The tool also supports complex SPARQL constructs, which require more than basic graph matching for their implementation. The tool generates parallel code annotated with OpenMP pragmas for x86 shared-memory multiprocessors (SMPs). With respect to commercial database systems such as Virtuoso, our approach reduces the memory occupation due to join operations and provides higher performance. We show the scaling of the automatically generated graph-matching code on a 48-core SMP.

  1. FARGO3D: A NEW GPU-ORIENTED MHD CODE

    Energy Technology Data Exchange (ETDEWEB)

    Benitez-Llambay, Pablo [Instituto de Astronomía Teórica y Experimental, Observatorio Astronónomico, Universidad Nacional de Córdoba. Laprida 854, X5000BGR, Córdoba (Argentina); Masset, Frédéric S., E-mail: pbllambay@oac.unc.edu.ar, E-mail: masset@icf.unam.mx [Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México (UNAM), Apdo. Postal 48-3,62251-Cuernavaca, Morelos (Mexico)

    2016-03-15

    We present the FARGO3D code, recently publicly released. It is a magnetohydrodynamics code developed with special emphasis on the physics of protoplanetary disks and planet–disk interactions, and parallelized with MPI. The hydrodynamics algorithms are based on finite-difference upwind, dimensionally split methods. The magnetohydrodynamics algorithms consist of the constrained transport method to preserve the divergence-free property of the magnetic field to machine accuracy, coupled to a method of characteristics for the evaluation of electromotive forces and Lorentz forces. Orbital advection is implemented, and an N-body solver is included to simulate planets or stars interacting with the gas. We present our implementation in detail and present a number of widely known tests for comparison purposes. One strength of FARGO3D is that it can run on either graphical processing units (GPUs) or central processing units (CPUs), achieving large speed-up with respect to CPU cores. We describe our implementation choices, which allow a user with no prior knowledge of GPU programming to develop new routines for CPUs, and have them translated automatically for GPUs.

  2. 17 CFR 275.204A-1 - Investment adviser codes of ethics.

    Science.gov (United States)

    2010-04-01

    Section 275.204A-1, Investment adviser codes of ethics (Commodity and Securities Exchanges, Securities and Exchange Commission). (a) Adoption of code of ethics. If you are an investment adviser registered or... enforce a written code of ethics that, at a minimum, includes: (1) a standard (or standards) of business... ethics.

  3. Easy as Pi: A Network Coding Raspberry Pi Testbed

    Directory of Open Access Journals (Sweden)

    Chres W. Sørensen

    2016-10-01

    Full Text Available In the near future, upcoming communications and storage networks are expected to face major difficulties produced by the huge amounts of data being generated by the Internet of Things (IoT). For these types of networks, strategies and mechanisms based on network coding have appeared as an alternative to overcome these difficulties in a holistic manner, e.g., without sacrificing the benefit of a given network metric when improving another. There have been recurring issues with: (i) making large-scale deployments akin to the Internet of Things; (ii) assessing and (iii) replicating the results obtained in preliminary studies. Therefore, testbeds that can deal with large-scale deployments and do not lose historic data, so that these mechanisms can be evaluated, are greatly needed and desirable from a research perspective. However, this can be hard to manage, not only due to the inherent costs of the hardware, but also due to maintenance challenges. In this paper, we present the key steps required to design, set up and maintain an inexpensive testbed using Raspberry Pi devices for communications and storage networks with network coding capabilities. This testbed can be utilized for any application requiring replicability of results.

  4. A Novel User Authentication Scheme Based on QR-Code

    Directory of Open Access Journals (Sweden)

    Kuan-Chieh Liao

    2010-08-01

    Full Text Available User authentication is one of the fundamental procedures for ensuring secure communications and sharing system resources over an insecure public network channel. Thus, a simple and efficient authentication mechanism is required for securing the network system in a real environment. In general, the password-based authentication mechanism provides the basic capability to prevent unauthorized access. In particular, the purpose of the one-time password is to make it more difficult to gain unauthorized access to restricted resources. Instead of using the password file of conventional authentication systems, many researchers have worked to implement various one-time password schemes using smart cards, time-synchronized tokens or short message service in order to reduce the risk of tampering and maintenance cost. However, these schemes are impractical because of the far-from-ubiquitous hardware devices or the infrastructure requirements. To remedy these weaknesses, the QR-code technique can be introduced into our one-time password authentication protocol. Unlike previous approaches, the proposed scheme based on QR codes not only eliminates the usage of the password verification table, but is also a cost-effective solution, since most internet users already have mobile phones. For this reason, instead of carrying around a separate hardware token for each security domain, the handiness of the mobile phone makes our approach more practical and convenient.
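
    A minimal sketch of the general idea, provisioning a time-based one-time password secret through a QR code that a phone can scan and then verifying the code at login, is shown below using the third-party pyotp and qrcode packages (assumed to be installed). It follows the common TOTP provisioning pattern rather than the specific protocol proposed in the paper, and the user and service names are hypothetical.

    ```python
    # Requires the third-party packages: pip install pyotp qrcode
    import pyotp
    import qrcode

    # Server side: create a per-user secret and encode the provisioning URI
    # into a QR code image that the user scans with a phone authenticator app.
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(name="alice@example.com",
                                              issuer_name="DemoService")
    qrcode.make(uri).save("enroll_alice.png")

    # Later, at login time: the phone displays a 6-digit code derived from the
    # shared secret and the current time; the server verifies it.
    submitted_code = pyotp.TOTP(secret).now()        # stands in for user input
    print("login accepted:", pyotp.TOTP(secret).verify(submitted_code))
    ```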

  5. A RAMP CODE FOR FINE-GRAINED ACCESS CONTROL

    Directory of Open Access Journals (Sweden)

    Kannan Karthik

    2013-02-01

    Full Text Available Threshold ramp secret sharing schemes are designed so that (i) certain subsets of shares have no information about the secret, (ii) some subsets have partial information about the secret and (iii) some subsets have complete information to recover the secret. However, most of the ramp schemes in the present literature do not control the leakage of information in partial access sets, due to which the information acquired by these sets is devoid of structure and not useful for fine-grained access control. Through a non-perfect secret sharing scheme called MIX-SPLIT, an encoding methodology for controlling the leakage in partial access sets is proposed and used for fine-grained access to binary strings. The ramp code generated using MIX-SPLIT requires a much smaller share size of O(n), as compared to Shamir's ramp adaptation, which incurs a share size of at least O(n^2) for the same multi-access structure. The proposed ramp code is finally applied towards the protection and fine-grained access of industrial design drawings.

  6. Does the health of individuals have a mathematical code?

    Directory of Open Access Journals (Sweden)

    Ali Mehrabi Tavana

    2013-01-01

    Full Text Available The definition of the health of individuals is well described by the World Health Organization (WHO) and other international health organizations. Many studies have been carried out to survey health conditions in different countries based on this definition, and the health condition of every country is therefore analyzed by the WHO. In this hypothesis, I ask whether the health of individuals has a mathematical code. If so, the way is open to examining each individual on the basis of a health profile, as well as every nation in the world, to find out what must be done at the individual, national and international levels to increase the health rank. The aim of this hypothesis is to bring to the attention of the WHO directors and specialists the question of whether the health of individuals has a mathematical code. If so, a new view must be taken of the health of the world population, which will be discussed in this hypothesis.

  7. HD Photo: a new image coding technology for digital photography

    Science.gov (United States)

    Srinivasan, Sridhar; Tu, Chengjie; Regunathan, Shankar L.; Sullivan, Gary J.

    2007-09-01

    This paper introduces the HD Photo coding technology developed by Microsoft Corporation. The storage format for this technology is now under consideration in the ITU-T/ISO/IEC JPEG committee as a candidate for standardization under the name JPEG XR. The technology was developed to address end-to-end digital imaging application requirements, particularly including the needs of digital photography. HD Photo includes features such as good compression capability, high dynamic range support, high image quality capability, lossless coding support, full-format 4:4:4 color sampling, simple thumbnail extraction, embedded bitstream scalability of resolution and fidelity, and degradation-free compressed domain support of key manipulations such as cropping, flipping and rotation. HD Photo has been designed to optimize image quality and compression efficiency while also enabling low-complexity encoding and decoding implementations. To ensure low complexity for implementations, the design features have been incorporated in a way that not only minimizes the computational requirements of the individual components (including consideration of such aspects as memory footprint, cache effects, and parallelization opportunities) but results in a self-consistent design that maximizes the commonality of functional processing components.

  8. FRINK - A Code to Evaluate Space Reactor Transients

    Science.gov (United States)

    Poston, David I.; Dixon, David D.; Marcille, Thomas F.; Amiri, Benjamin W.

    2007-01-01

    One of the biggest needs for space reactor design and development is detailed system modeling. Most proposed space fission systems are very different from previously operated fission power systems, and extensive testing and modeling will be required to demonstrate integrated system performance. There are also some aspects of space reactors that make them unique from most terrestrial application, and require different modeling approaches. The Fission Reactor Integrated Nuclear Kinetics (FRINK) code was developed to evaluate simplified space reactor transients (note: the term ``space reactor'' inherently includes planetary and lunar surface reactors). FRINK is an integrated point kinetic/thermal-hydraulic transient analysis FORTRAN code - ``integrated'' refers to the simultaneous solution of the thermal and neutronic equations. In its current state FRINK is a very simple system model, perhaps better referred to as a reactor model. The ``system'' only extends to the primary loop power removal boundary condition; however this allows the simulation of simplified transients (e.g. loss of primary heat sink, loss of flow, large reactivity insertion, etc.), which are most important in bounding early system conceptual design. FRINK could then be added to a complete system model later in the design and development process as system design matures.
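
    As a rough illustration of the coupled point-kinetics/thermal approach described above (not FRINK itself, which is a FORTRAN system code), the sketch below integrates the standard one-delayed-group point-kinetics equations together with a single lumped fuel-temperature node using explicit Euler steps. All parameter values are placeholders chosen only for the demonstration.

```python
# Hypothetical parameters; values are illustrative only.
beta, Lam, lam = 0.0065, 1e-5, 0.08      # delayed fraction, generation time [s], decay constant [1/s]
alpha_T = -2e-5                          # fuel temperature reactivity coefficient [1/K]
C_th, h = 5e4, 1e3                       # lumped heat capacity [J/K], heat removal coefficient [W/K]
P0, T0, T_sink = 1e5, 900.0, 900.0       # initial power [W], initial and sink temperatures [K]

def simulate(rho_ext, dt=1e-4, t_end=1.0):
    """Integrate point kinetics (one delayed group) coupled to a lumped
    fuel temperature; returns the final power and temperature."""
    P, T = P0, T0
    C = beta * P / (Lam * lam)                         # equilibrium precursor concentration
    for _ in range(int(t_end / dt)):
        rho = rho_ext + alpha_T * (T - T0)             # net reactivity with temperature feedback
        dP = ((rho - beta) / Lam) * P + lam * C        # dP/dt
        dC = (beta / Lam) * P - lam * C                # dC/dt
        dT = (P - h * (T - T_sink)) / C_th             # dT/dt
        P, C, T = P + dP * dt, C + dC * dt, T + dT * dt
    return P, T

print(simulate(rho_ext=0.001))                          # small step reactivity insertion
```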

  9. A unified form of exact-MSR codes via product-matrix frameworks

    KAUST Repository

    Lin, Sian Jheng

    2015-02-01

    Regenerating codes represent a class of block codes applicable to distributed storage systems. The [n, k, d] regenerating code has data recovery capability when possessing any k out of n code fragments, and supports code fragment regeneration through the use of any other d fragments, for k ≤ d ≤ n - 1. Minimum storage regenerating (MSR) codes are a subset of regenerating codes with the minimal size of each code fragment. The first explicit construction of MSR codes that can perform exact regeneration (named exact-MSR codes) for d ≥ 2k - 2 has been presented via a product-matrix framework. This paper addresses some of the practical issues in the construction of exact-MSR codes. The major contributions of this paper are as follows. A new product-matrix framework is proposed to directly include all feasible exact-MSR codes for d ≥ 2k - 2. A mechanism for a systematic version of exact-MSR codes is proposed to minimize the computational complexity of the message-symbol remapping process. Two practical forms of encoding matrices are presented to reduce the size of the finite field.
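
    For orientation, the per-node storage and repair-bandwidth figures at the MSR point follow from the standard regenerating-code trade-off (Dimakis et al.). The short helper below only evaluates those parameters for a given file size and [n, k, d]; it does not implement the product-matrix construction itself.

```python
from fractions import Fraction

def msr_point(M, n, k, d):
    """Per-node storage alpha, per-helper download beta, and total repair
    bandwidth gamma for an [n, k, d] MSR code storing M symbols (k <= d <= n - 1)."""
    assert k <= d <= n - 1
    alpha = Fraction(M, k)                      # symbols stored per node
    beta = Fraction(M, k * (d - k + 1))         # symbols downloaded from each helper
    gamma = d * beta                            # total repair bandwidth
    return alpha, beta, gamma

# Example in the d = 2k - 2 regime targeted by the product-matrix framework.
print(msr_point(M=12, n=6, k=3, d=4))           # alpha = 4, beta = 2, gamma = 8
```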

  10. A Mathematical Approach to the Study of the United States Code

    CERN Document Server

    Bommarito, Michael J

    2010-01-01

    The United States Code (Code) is a document containing over 22 million words that represents a large and important source of Federal statutory law. Scholars and policy advocates often discuss the direction and magnitude of changes in various aspects of the Code. However, few have mathematically formalized the notions behind these discussions or directly measured the resulting representations. This paper addresses the current state of the literature in two ways. First, we formalize a representation of the United States Code as the union of a hierarchical network and a citation network over vertices containing the language of the Code. This representation reflects the fact that the Code is a hierarchically organized document containing language and explicit citations between provisions. Second, we use this formalization to measure aspects of the Code as codified in October 2008, November 2009, and March 2010. These measurements allow for a characterization of the actual changes in the Code over time. Our findin...

  11. A Coded Bit-Loading Linear Precoded Discrete Multitone Solution for Power Line Communication

    CERN Document Server

    Muhammad, Fahad Syed; Hélard, Jean-François; Crussière, Matthieu

    2008-01-01

    The linear precoded discrete multitone modulation (LP-DMT) system has already been proved advantageous with an adaptive resource allocation algorithm in a power line communication (PLC) context. In this paper, we investigate the bit and energy allocation algorithm of an adaptive LP-DMT system taking into account the channel coding scheme. A coded adaptive LP-DMT system is presented in the PLC context with a loading algorithm which accommodates the channel coding gains in bit and energy calculations. The performance of a concatenated channel coding scheme, consisting of an inner Wei's 4-dimensional 16-state trellis code and an outer Reed-Solomon code, in combination with the proposed algorithm is analyzed. Simulation results are presented for a fixed target bit error rate in a multicarrier scenario under a power spectral density constraint. Using a multipath model of the PLC channel, it is shown that the proposed coded adaptive LP-DMT system performs better than classical coded discrete multitone.
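
    The record does not reproduce the loading algorithm, but a common way to fold a channel-coding gain into bit loading is through the SNR gap: each subcarrier carries roughly log2(1 + SNR_i / Γ) bits, with the gap Γ reduced by the coding gain. The sketch below illustrates that gap-based allocation under placeholder values; it is a generic approximation, not the authors' algorithm.

```python
import math

def coded_bit_loading(snr_db, gap_db=9.8, coding_gain_db=5.0, max_bits=10):
    """Gap-approximation bit loading: integer bits per subcarrier, with the
    SNR gap lowered by the coding gain of the channel code (all values hypothetical)."""
    gap = 10 ** ((gap_db - coding_gain_db) / 10.0)
    bits = []
    for s_db in snr_db:
        snr = 10 ** (s_db / 10.0)
        b = int(math.log2(1.0 + snr / gap))     # floor to an integer constellation size
        bits.append(min(max(b, 0), max_bits))
    return bits

# Placeholder per-subcarrier SNRs (dB) for a multipath PLC-like channel.
print(coded_bit_loading([25.0, 18.0, 12.0, 6.0, 1.0]))   # e.g. [6, 4, 2, 1, 0]
```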

  12. Equilibrium and stability code for a diffuse plasma.

    Science.gov (United States)

    Betancourt, O; Garabedian, P

    1976-04-01

    A computer code to investigate the equilibrium and stability of a diffuse plasma in three dimensions is described that generalizes earlier work on a sharp free boundary model. Toroidal equilibria of a plasma are determined by considering paths of steepest descent associated with a new version of the variational principle of magnetohydrodynamics that involves mapping a fixed coordinate domain onto the plasma. A discrete approximation of the potential energy is written down following the finite element method, and the resulting expression is minimized with respect to the values of the mapping at points of a rectangular grid. If a relative minimum of the discrete analogue of the energy is attained, the corresponding equilibrium is considered to be stable.

  13. 39 CFR Appendix A to Part 3000 - Code of Ethics For Government Service

    Science.gov (United States)

    2010-07-01

    Appendix A to Part 3000—Code of Ethics For Government Service. Resolved by the House of Representatives (the Senate concurring), That it is the sense of the Congress that the following Code of...

  14. Reasoning with Computer Code: a new Mathematical Logic

    Science.gov (United States)

    Pissanetzky, Sergio

    2013-01-01

    A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical Physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code, that can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature being at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean. We argue that self

  15. Radiology coding, reimbursement, and economics: a practical playbook for housestaff.

    Science.gov (United States)

    Petrey, W Banks; Allen, Bibb; Thorwarth, William T

    2009-09-01

    As radiologists-in-training, residents and fellows have little time to devote to understanding the complex and often confusing world of reimbursement and radiology economics. At best, housestaff are afforded only a modicum of exposure to the economics of medicine. Although most training programs try to provide some information on the subject, between learning radiology, taking call, and juggling life outside the hospital, the majority of residents and fellows have little time or energy to learn about the economics of radiology. Furthermore, information on medical economics and radiology has only occasionally been directed specifically to housestaff or widely distributed to residents across the country. This is unfortunate because the reimbursement and economic arena will significantly affect daily practice, relationships with other specialties, and compensation. In this article, the authors briefly describe the current reimbursement and economic climate: how we got here and where we may be headed, with specific attention to coding for radiologic services. In addition, and perhaps more important, the authors highlight aspects of residents' or fellows' daily practice that may have the potential to affect reimbursement in their years of practice ahead, such as proper dictation and coding techniques, the importance of adhering to new reporting guidelines, and the need for increased radiologist involvement in professional and community activities. The authors also emphasize measures that can be taken, specifically by housestaff, to promote and preserve the image of our specialty, which ultimately is intertwined with the reimbursement and economics of our field.

  16. A coded VEP method to measure interhemispheric transfer time (IHTT).

    Science.gov (United States)

    Li, Yun; Bin, Guangyu; Hong, Bo; Gao, Xiaorong

    2010-03-19

    Interhemispheric transfer time (IHTT) is an important parameter for research on the information conduction time across the corpus callosum between the two hemispheres. There are several traditional methods used to estimate the IHTT, including the reaction time (RT) method, the evoked potential (EP) method and the measure based on the transcranial magnetic stimulation (TMS). The present study proposes a novel coded VEP method to estimate the IHTT based on the specific properties of the m-sequence. These properties include good signal-to-noise ratio (SNR) and high noise tolerance. Additionally, calculation of the circular cross-correlation function is sensitive to the phase difference. The method presented in this paper estimates the IHTT using the m-sequence to encode the visual stimulus and also compares the results with the traditional flash VEP method. Furthermore, with the phase difference of the two responses calculated using the circular cross-correlation technique, the coded VEP method could obtain IHTT results, which does not require the selection of the utilized component.
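
    The delay estimate in such a coded-VEP setup rests on the circular cross-correlation between the responses recorded over the two hemispheres. A minimal NumPy sketch of that step is given below, using synthetic signals and a hypothetical sampling rate; it illustrates the correlation step only, not the authors' full m-sequence pipeline.

```python
import numpy as np

def circular_delay_ms(x, y, fs):
    """Delay of y relative to x (ms), from the FFT-based circular
    cross-correlation r[k] = sum_n x[n] * y[n + k]."""
    n = len(x)
    r = np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)).real
    lag = int(np.argmax(r))
    if lag > n // 2:
        lag -= n                      # wrap large positive lags to negative shifts
    return 1000.0 * lag / fs

# Synthetic check: a response delayed by 12 samples at a hypothetical 1 kHz rate.
fs = 1000.0
x = np.random.randn(512)
y = np.roll(x, 12)                    # y[n] = x[n - 12]
print(circular_delay_ms(x, y, fs))    # approximately 12.0 ms
```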

  17. A color-coded vision scheme for robotics

    Science.gov (United States)

    Johnson, Kelley Tina

    1991-01-01

    Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.

  18. A new image coding technique with low entropy using a flexible zerotree

    OpenAIRE

    Joo, Sanghyun; Kikuchi, Hisakazu; Sasaki, Shigenobu; Shin, Jaeho; 菊池, 久和; 佐々木, 重信

    1998-01-01

    A zerotree image-coding scheme is introduced that effectively exploits the inter-scale self-similarities found in the octave decomposition by a wavelet transform. A zerotree is useful for efficiently coding wavelet coefficients; its efficiency was proved by Shapiro's EZW. In the EZW coder, wavelet coefficients are symbolized, then entropy-coded for further compression. In this paper, we analyze the symbols produced by the EZW coder and discuss the entropy for a symbol. We modify the procedur...

  19. A comparative study of MONTEBURNS and MCNPX 2.6.0 codes in ADS simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barros, Graiciany P.; Pereira, Claubia; Veloso, Maria A.F.; Velasquez, Carlos E.; Costa, Antonella L., E-mail: gbarros@ufmg.br, E-mail: claubia@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear

    2013-07-01

    The possible use of the MONTEBURNS and MCNPX 2.6.0 codes in Accelerator-driven systems (ADSs) simulations for fuel evolution description is discussed. ADSs are investigated for fuel breeding and long-lived fission product transmutation so simulations of fuel evolution have a great relevance. The burnup/depletion capability is present in both studied codes. MONTEBURNS code links Monte Carlo N-Particle Transport Code (MCNP) to the radioactive decay burnup code ORIGEN2, whereas MCNPX depletion/ burnup capability is a linked process involving steady-state flux calculations by MCNPX and nuclide depletion calculations by CINDER90. A lead-cooled accelerator-driven system fueled with thorium was simulated and the results obtained using MONTEBURNS code and the results from MCNPX 2.6.0 code were compared. The system criticality and the variation of the actinide inventory during the burnup were evaluated and the results indicate a similar behavior between the results of each code. (author)

  20. A New Three-Dimensional Code for Simulation of Ion Beam Extraction: Ion Optics Simulator

    Institute of Scientific and Technical Information of China (English)

    JIN Dazhi; HUANG Tao; HU Quan; YANG Zhonghai

    2008-01-01

    A new three-dimensional code, ion optics simulator (IOS), to simulate ion beam extraction is developed in the Visual C++ language. The theoretical model, the flowchart of the code, and the results of an example calculation are presented.

  1. A Pragmatic Study on Code-switching in Chinese Network Chat

    Institute of Scientific and Technical Information of China (English)

    周薇

    2016-01-01

    Under the theoretical guidance of the politeness principle, this paper gives a detailed analysis of code-switching with typical examples. It aims to provide material for further code-switching study from the perspective of pragmatics.

  2. A first look at ARFome: dual-coding genes in mammalian genomes.

    Directory of Open Access Journals (Sweden)

    Wen-Yu Chung

    2007-05-01

    Full Text Available Coding of multiple proteins by overlapping reading frames is not a feature one would associate with eukaryotic genes. Indeed, codependency between codons of overlapping protein-coding regions imposes a unique set of evolutionary constraints, making it a costly arrangement. Yet in cases of tightly coexpressed interacting proteins, dual coding may be advantageous. Here we show that although dual coding is nearly impossible by chance, a number of human transcripts contain overlapping coding regions. Using newly developed statistical techniques, we identified 40 candidate genes with evolutionarily conserved overlapping coding regions. Because our approach is conservative, we expect mammals to possess more dual-coding genes. Our results emphasize that the skepticism surrounding eukaryotic dual coding is unwarranted: rather than being artifacts, overlapping reading frames are often hallmarks of fascinating biology.

  3. A modified tree code: Don't laugh; It runs

    Science.gov (United States)

    Barnes, Joshua E.

    1990-03-01

    I describe a modification of the Barnes-Hut tree algorithm together with a series of numerical tests of this method. The basic idea is to improve the performance of the code on heavily vector-oriented machines such as the Cyber 205 by exploiting the fact that nearby particles tend to have very similar interaction lists. By building an interaction list good everywhere within a cell containing a modest number of particles and reusing this interaction list for each particle in the cell in turn, the balance of computation can be shifted from recursive descent to force summation. Instead of vectorizing tree descent, this scheme simply avoids it in favor of force summation, which is quite easy to vectorize. A welcome side-effect of this modification is that the force calculation, which now treats a larger fraction of the local interactions exactly, is significantly more accurate than the unmodified method.
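
    The essence of the modification is to build one interaction list per cell and reuse it for every particle in that cell, turning the inner loop into a plain (and easily vectorized) force summation. The NumPy sketch below shows only that reuse step for a single cell, with softened gravity and hypothetical arrays; tree construction and the cell-opening criterion are omitted.

```python
import numpy as np

def cell_accelerations(cell_pos, list_pos, list_mass, eps=0.05):
    """Accelerations on all particles of one cell from a shared interaction
    list (positions and masses of accepted nodes), using Plummer softening (G = 1)."""
    dr = list_pos[None, :, :] - cell_pos[:, None, :]      # (n_cell, n_list, 3) separations
    r2 = np.sum(dr * dr, axis=-1) + eps ** 2
    inv_r3 = r2 ** -1.5
    return np.einsum('ijk,ij,j->ik', dr, inv_r3, list_mass)

# Hypothetical data: 8 particles in the cell, 200 nodes on its interaction list.
rng = np.random.default_rng(0)
cell_pos = rng.random((8, 3))
list_pos = rng.random((200, 3))
list_mass = rng.random(200)
print(cell_accelerations(cell_pos, list_pos, list_mass).shape)   # (8, 3)
```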

  4. The Penal Code (Amendment) Act 1989 (Act A727), 1989.

    Science.gov (United States)

    1989-01-01

    In 1989, Malaysia amended its penal code to provide that inducing an abortion is not an offense if the procedure is performed by a registered medical practitioner who has determined that continuation of the pregnancy would risk the life of the woman or damage her mental or physical health. Additional amendments include a legal description of the conditions which constitute the act of rape. Among these conditions is intercourse with or without consent with a woman under the age of 16. Malaysia fails to recognize rape within a marriage unless the woman is protected from her husband by judicial decree or is living separately from her husband according to Muslim custom. Rape is punishable by imprisonment for a term of 5-20 years and by whipping.

  5. A high-speed BCI based on code modulation VEP

    Science.gov (United States)

    Bin, Guangyu; Gao, Xiaorong; Wang, Yijun; Li, Yun; Hong, Bo; Gao, Shangkai

    2011-04-01

    Recently, electroencephalogram-based brain-computer interfaces (BCIs) have attracted much attention in the fields of neural engineering and rehabilitation due to their noninvasiveness. However, the low communication speed of current BCI systems greatly limits their practical application. In this paper, we present a high-speed BCI based on code modulation of visual evoked potentials (c-VEP). Thirty-two target stimuli were modulated by a time-shifted binary pseudorandom sequence. A multichannel identification method based on canonical correlation analysis (CCA) was used for target identification. The online system achieved an average information transfer rate (ITR) of 108 ± 12 bits min-1 on five subjects with a maximum ITR of 123 bits min-1 for a single subject.
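
    Target identification in this kind of c-VEP system is typically done by computing canonical correlations between the multichannel EEG epoch and each circularly shifted template and picking the largest one. The sketch below illustrates that step with scikit-learn's CCA on synthetic data; the array shapes, shift amounts and channel counts are placeholders, not the authors' exact configuration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def identify_target(eeg, template, n_targets, shift):
    """Index of the time-shifted template with the highest canonical
    correlation against a multichannel EEG epoch.

    eeg:      (n_samples, n_channels) single-trial epoch
    template: (n_samples,) reference response to the unshifted code
    """
    scores = []
    for t in range(n_targets):
        ref = np.roll(template, t * shift).reshape(-1, 1)
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, ref)
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return int(np.argmax(scores))

# Synthetic demo: 8 channels, 32 targets, 4-sample shifts between targets.
rng = np.random.default_rng(1)
template = rng.standard_normal(512)
true_target = 7
eeg = np.roll(template, true_target * 4)[:, None] + 0.3 * rng.standard_normal((512, 8))
print(identify_target(eeg, template, n_targets=32, shift=4))   # ideally 7
```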

  6. Development of Teaching Materials for a Physical Chemistry Experiment Using the QR Code

    OpenAIRE

    吉村, 忠与志

    2008-01-01

    The development of teaching materials with the QR code was attempted in an educational environment using a mobile telephone. The QR code is not sufficiently utilized in education, and the current study is one of the first in the field. The QR code is encrypted. However, the QR code can be deciphered by mobile telephones, thus enabling the expression of text in a small space. Contents of "Physical Chemistry Experiment" which are available on the Internet are briefly summarized and simplified. T...

  7. Motivations of Code-switching among People of Different English Profi-ciency:A Sociolinguistics Survey

    Institute of Scientific and Technical Information of China (English)

    GUAN Hui

    2015-01-01

    Code-switching is a linguistic behavior that arises as a result of languages coming into contact. The idea of code-switching has been proposed and heatedly discussed since the 1970s. This study focuses particularly on the motivations for code-switching on campus, especially among college students and teachers as frequent users. The study aims to find out whether there is any relation between one's English proficiency and motivation for code-switching.

  8. Error threshold for the surface code in a superohmic environment

    Science.gov (United States)

    Lopez-Delgado, Daniel A.; Novais, E.; Mucciolo, Eduardo R.; Caldeira, Amir O.

    Using the Keldysh formalism, we study the fidelity of a quantum memory over multiple quantum error correction cycles when the physical qubits interact with a bosonic bath at zero temperature. For encoding, we employ the surface code, which has one of the highest error thresholds in the case of stochastic and uncorrelated errors. The time evolution of the fidelity of the resulting two-dimensional system is cast into a statistical mechanics phase transition problem on a three-dimensional spin lattice, and the error threshold is determined by the critical temperature of the spin model. For superohmic baths, we find that time does not affect the error threshold: its value is the same for one or an arbitrary number of quantum error correction cycles. Financial support Fapesp, and CNPq (Brazil).

  9. Developing a code of ethics for human cloning.

    Science.gov (United States)

    Collmann, J; Graber, G

    2000-01-01

    Under what conditions might the cloning of human beings constitute an ethical practice? A tendency exists to analyze human cloning merely as a technical procedure. As with all revolutionary technological developments, however, human cloning potentially exists in a broad social context that will both shape and be shaped by the biological techniques. Although human cloning must be subjected to technical analysis that addresses fundamental ethical questions such as its safety and efficacy, questions exist that focus our attention on broader issues. Asserting that cloning inevitably leads to undesirable consequences commits the fallacy of technological determinism and untenably separates technological and ethical evaluation. Drawing from the Report of the National Bioethics Advisory Committee and Aldous Huxley's Brave New World, we offer a draft "Code of Ethics for Human Cloning" in order to stimulate discussion about the ethics of the broader ramifications of human cloning as well as its particular technological properties.

  10. Regulations and Ethical Considerations for Astronomy Education Research III: A Suggested Code of Ethics

    Science.gov (United States)

    Brogt, Erik; Foster, Tom; Dokter, Erin; Buxner, Sanlyn; Antonellis, Jessie

    2009-01-01

    We present an argument for, and suggested implementation of, a code of ethics for the astronomy education research community. This code of ethics is based on legal and ethical considerations set forth by U.S. federal regulations and the existing code of conduct of the American Educational Research Association. We also provide a fictitious research…

  11. An object-oriented scripting interface to a legacy electronic structure code

    DEFF Research Database (Denmark)

    Bahn, Sune Rastad; Jacobsen, Karsten Wedel

    2002-01-01

    The authors have created an object-oriented scripting interface to a mature density functional theory code. The interface gives users a high-level, flexible handle on the code without rewriting the underlying number-crunching code. The authors also discuss design issues and the advantages...

  12. Upper bounds on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2004-01-01

    We derive upper bounds on the weights of error patterns that can be corrected by a convolutional code with given parameters, or equivalently we give bounds on the code rate for a given set of error patterns. The bounds parallel the Hamming bound for block codes by relating the number of error pat...
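
    For comparison, the block-code Hamming bound referenced above states that a binary [n, k] code correcting all patterns of weight up to t must satisfy sum_{i=0..t} C(n, i) <= 2^(n-k). The snippet below evaluates the largest t admitted by that bound for given n and k; it illustrates the block-code analogue only, not the convolutional-code bounds derived in the paper.

```python
from math import comb

def hamming_bound_t(n, k):
    """Largest t such that sum_{i=0..t} C(n, i) <= 2**(n - k)."""
    budget = 2 ** (n - k)
    total, t = 0, -1
    for i in range(n + 1):
        total += comb(n, i)
        if total > budget:
            break
        t = i
    return t

print(hamming_bound_t(7, 4))    # 1: the Hamming(7,4) code meets the bound with equality
print(hamming_bound_t(23, 12))  # 3: the binary Golay code is perfect
```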

  13. A watermarking scheme for High Efficiency Video Coding (HEVC).

    Science.gov (United States)

    Swati, Salahuddin; Hayat, Khizar; Shahid, Zafar

    2014-01-01

    This paper presents a high payload watermarking scheme for High Efficiency Video Coding (HEVC). HEVC is an emerging video compression standard that provides better compression performance as compared to its predecessor, i.e. H.264/AVC. Considering that HEVC may well be used in a variety of applications in the future, the proposed algorithm has a high potential of utilization in applications involving broadcast and hiding of metadata. The watermark is embedded into the Quantized Transform Coefficients (QTCs) during the encoding process. Later, during the decoding process, the embedded message can be detected and extracted completely. The experimental results show that the proposed algorithm does not significantly affect the video quality, nor does it escalate the bitrate.
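
    The record does not give the embedding rule, but a common baseline for this kind of scheme is to hide message bits in the least-significant bits of selected nonzero quantized coefficients. The sketch below shows that generic idea on a plain integer array; it is only an illustration, not the proposed HEVC-integrated algorithm.

```python
import numpy as np

def embed_lsb(qtc, bits):
    """Embed message bits into the LSBs of nonzero quantized transform
    coefficients (sign-preserving). A real scheme would skip magnitude-1
    coefficients so none are zeroed out; this toy version does not."""
    out = qtc.copy()
    idx = np.flatnonzero(out)[:len(bits)]
    for pos, b in zip(idx, bits):
        mag = (abs(out[pos]) & ~1) | b            # overwrite the LSB of the magnitude
        out[pos] = mag if out[pos] > 0 else -mag
    return out

def extract_lsb(qtc, n_bits):
    """Recover the first n_bits embedded by embed_lsb."""
    idx = np.flatnonzero(qtc)[:n_bits]
    return [int(abs(qtc[pos]) & 1) for pos in idx]

block = np.array([12, -5, 0, 3, 0, -2, 7, 0])
message = [1, 0, 1, 1, 0]
marked = embed_lsb(block, message)
print(extract_lsb(marked, len(message)))          # [1, 0, 1, 1, 0]
```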

  14. Visualization of elastic wavefields computed with a finite difference code

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, S. [Lawrence Livermore National Lab., CA (United States); Harris, D.

    1994-11-15

    The authors have developed a finite difference elastic propagation model to simulate seismic wave propagation through geophysically complex regions. To facilitate debugging and to assist seismologists in interpreting the seismograms generated by the code, they have developed an X Windows interface that permits viewing of successive temporal snapshots of the (2D) wavefield as they are calculated. The authors present a brief video displaying the generation of seismic waves by an explosive source on a continent, which propagate to the edge of the continent then convert to two types of acoustic waves. This sample calculation was part of an effort to study the potential of offshore hydroacoustic systems to monitor seismic events occurring onshore.

  15. Djehuty, a Code for Modeling Stars in Three Dimensions

    CERN Document Server

    Bazán, G; Dossa, D D; Eggleton, P P; Taylor, A; Castor, J I; Murray, S; Cook, K H; Eltgroth, P G; Cavallo, R M; Turcotte, S; Keller, S C; Pudliner, B S

    2003-01-01

    Current practice in stellar evolution is to employ one-dimensional calculations that quantitatively apply only to a minority of the observed stars (single non-rotating stars, or well detached binaries). Even in these systems, astrophysicists are dependent on approximations to handle complex three-dimensional processes like convection. Binary stars, like those that lead to the Type Ia supernovae used to measure the expansion of the universe, are grossly non-spherical and await a 3D treatment. To approach very large problems like multi-dimensional modeling of stars, the Lawrence Livermore National Laboratory has invested in massively parallel computers and invested even more in developing the algorithms to utilize them on complex physics problems. We have leveraged skills from across the lab to develop a 3D stellar evolution code, Djehuty (after the Egyptian god for writing and calculation) that operates efficiently on platforms with thousands of nodes, with the best available phy...

  16. A revisit to the GNSS-R code range precision

    CERN Document Server

    Germain, O

    2006-01-01

    We address the feasibility of a GNSS-R code-altimetry space mission and more specifically a dominant term of its error budget: the reflected-signal range precision. This is the RMS error on the reflected-signal delay, as estimated by waveform retracking. So far, the approach proposed by [Lowe et al., 2002] has been the state of the art to theoretically evaluate this precision, although known to rely on strong assumptions (e.g., no speckle noise). In this paper, we perform a critical review of this model and propose an improvement based on the Cramer-Rao Bound (CRB) approach. We derive closed-form expressions for both the direct and reflected signals. The performance predicted by CRB analysis is about four times worse for typical space mission scenarios. The impact of this result is discussed in the context of two classes of GNSS-R applications: mesoscale oceanography and tsunami detection.

  17. A Comparison of Source Code Plagiarism Detection Engines

    Science.gov (United States)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
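
    The "paired structural metric" described above boils down to tokenising two submissions and measuring their longest common substring of tokens. A straightforward dynamic-programming sketch of that comparison is given below; the crude regex tokeniser is a stand-in for the engines' real lexers.

```python
import re

def tokenise(source):
    """Very rough lexer: identifiers, numbers and single punctuation marks."""
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", source)

def longest_common_token_run(a_tokens, b_tokens):
    """Length of the longest common contiguous token sequence (O(n*m) DP)."""
    best = 0
    prev = [0] * (len(b_tokens) + 1)
    for ta in a_tokens:
        cur = [0] * (len(b_tokens) + 1)
        for j, tb in enumerate(b_tokens, start=1):
            if ta == tb:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

a = "for i in range(10): total += i"
b = "for k in range(10): s += k"
print(longest_common_token_run(tokenise(a), tokenise(b)))   # 6 shared tokens in a row
```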

  18. The source coding game with a cheating switcher

    CERN Document Server

    Palaiyanur, Hari; Sahai, Anant

    2007-01-01

    Berger's paper `The Source Coding Game', IEEE Trans. Inform. Theory, 1971, considers the problem of finding the rate-distortion function for an adversarial source comprised of multiple known IID sources. The adversary, called the `switcher', was allowed only causal access to the source realizations and the rate-distortion function was obtained through the use of a type covering lemma. In this paper, the rate-distortion function of the adversarial source is described, under the assumption that the switcher has non-causal access to all source realizations. The proof utilizes the type covering lemma and simple conditional, random `switching' rules. The rate-distortion function is once again the maximization of the R(D) function for a region of attainable IID distributions.

  19. 76 FR 57795 - Agency Request for Renewal of a Previously Approved Collection; Disclosure of Code Sharing...

    Science.gov (United States)

    2011-09-16

    ... Code Sharing Arrangements and Long-Term Wet Leases AGENCY: Office of the Secretary. ACTION: Notice and... 20590. SUPPLEMENTARY INFORMATION: OMB Control Number: 2105-0537. Title: Disclosure of Code Sharing... between cooperating carriers, at least one of the airline designator codes used on a flight is...

  20. Concealed holographic coding for security applications by using a moire technique

    DEFF Research Database (Denmark)

    Zhang, Xiangsu; Dalsgaard, Erik

    1997-01-01

    We present an optical coding technique that enhances the anticounterfeiting power of security holograms. The principle of the technique is based on the moire phenomenon. The code in the hologram has a phase pattern that is invisible and cannot be detected by optical equipment, so that imitation is extremely difficult. Holographic, photographic and embossing techniques are used in fabricating coded holograms and decoders.

  1. Code-Switching in English as a Foreign Language Classroom: Teachers' Attitudes

    Science.gov (United States)

    Ibrahim, Engku Haliza Engku; Shah, Mohamed Ismail Ahamad; Armia, Najwa Tgk.

    2013-01-01

    Code-switching has always been an intriguing phenomenon to sociolinguists. While the general attitude to it seems negative, people seem to code-switch quite frequently. Teachers of English as a foreign language too frequently claim that they do not like to code-switch in the language classroom for various reasons--many are of the opinion that only…

  2. MUXS: a code to generate multigroup cross sections for sputtering calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, T.J.; Robinson, M.T.; Dodds, H.L. Jr.

    1982-10-01

    This report documents MUXS, a computer code to generate multigroup cross sections for charged particle transport problems. Cross sections generated by MUXS can be used in many multigroup transport codes, with minor modifications to these codes, to calculate sputtering yields, reflection coefficients, penetration distances, etc.

  3. A simple model of optimal population coding for sensory systems.

    Science.gov (United States)

    Doi, Eizaburo; Lewicki, Michael S

    2014-08-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.

  4. Analysis and design of Raptor codes using a multi-edge framework

    OpenAIRE

    Jayasooriya, Sachini; Shirvanimoghaddam, Mahyar; Ong, Lawrence; Johnson, Sarah J.

    2017-01-01

    The focus of this paper is on the analysis and design of Raptor codes using a multi-edge framework. In this regard, we first represent the Raptor code as a multi-edge type low-density parity-check (METLDPC) code. This MET representation gives a general framework to analyze and design Raptor codes over a binary input additive white Gaussian noise channel using MET density evolution (MET-DE). We consider a joint decoding scheme based on the belief propagation (BP) decoding for Raptor codes in t...

  5. Assessment of MARMOT. A Mesoscale Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Tonks, M. R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schwen, D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Y. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Chakraborty, P. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bai, X. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Fromm, B. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Yu, J. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Teague, M. C. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Andersson, D. A. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-04-01

    MARMOT is the mesoscale fuel performance code under development as part of the US DOE Nuclear Energy Advanced Modeling and Simulation Program. In this report, we provide a high level summary of MARMOT, its capabilities, and its current state of validation. The purpose of MARMOT is to predict the coevolution of microstructure and material properties of nuclear fuel and cladding. It accomplishes this using the phase field method coupled to solid mechanics and heat conduction. MARMOT is based on the Multiphysics Object-Oriented Simulation Environment (MOOSE), and much of its basic capability in the areas of the phase field method, mechanics, and heat conduction comes directly from MOOSE modules. However, additional capability specific to fuel and cladding is available in MARMOT. While some validation of MARMOT has been completed in the areas of fission gas behavior and grain growth, much more validation needs to be conducted. However, new mesoscale data needs to be obtained in order to complete this validation.

  6. A fast block mode selection approach for H. Visual coding

    Institute of Scientific and Technical Information of China (English)

    LIN Wei-yao; Fang Xiang-zhong; HUANG Xiu-chao; LI Dian; LIU Xiao-feng

    2006-01-01

    In this paper, a new fast mode-selection approach is proposed. This algorithm combines the proposed approaches of mode pre-decision and precise large-small mode decision to select the best mode efficiently. Experimental results show that the proposed approach can reduce the computational cost of full search and fast multi-block motion estimation by 8% and 45%, respectively, with similar visual quality and bit rate. The proposed algorithm also reduces the computational cost of the large-small mode isolation algorithm by 75% for low-motion sequence coding, with a 0.06 dB gain in PSNR and a 3.7% reduction in bit rate.

  7. On the development of LWR fuel analysis code (1). Analysis of the FEMAXI code and proposal of a new model

    Energy Technology Data Exchange (ETDEWEB)

    Lemehov, Sergei; Suzuki, Motoe [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2000-01-01

    This report summarizes the review on the modeling features of FEMAXI code and proposal of a new theoretical equation model of clad creep on the basis of irradiation-induced microstructure change. It was pointed out that plutonium build-up in fuel matrix and non-uniform radial power profile at high burn-up affect significantly fuel behavior through the interconnected effects with such phenomena as clad irradiation-induced creep, fission gas release, fuel thermal conductivity degradation, rim porous band formation and associated fuel swelling. Therefore, these combined effects should be properly incorporated into the models of the FEMAXI code so that the code can carry out numerical analysis at the level of accuracy and elaboration that modern experimental data obtained in test reactors have. Also, the proposed new mechanistic clad creep model has a general formalism which allows the model to be flexibly applied for clad behavior analysis under normal operation conditions and power transients as well for Zr-based clad materials by the use of established out-of-pile mechanical properties. The model has been tested against experimental data, while further verification is needed with specific emphasis on power ramps and transients. (author)

  8. Seeing relativity -- I. Basics of a raytracing code in a Schwarzschild metric

    CERN Document Server

    Riazuelo, Alain

    2015-01-01

    We present here an implementation of a raytracing code in the Schwarzschild metric. We aim at building a numerical code with a correct implementation of both special (aberration, amplification, Doppler) and general (deflection of light, lensing, gravitational redshift) relativistic effects by paying attention to a good rendering of stars

  9. A large scale code resolution service network in the Internet of Things.

    Science.gov (United States)

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-11-07

    In the Internet of Things a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet based code resolution service named SkipNet-OCRS, which not only inherits DHT’s advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network can meet the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS.

  10. A UEP LT Codes Design with Feedback for Underwater Communication

    Directory of Open Access Journals (Sweden)

    Danfeng Zhao

    2016-01-01

    Full Text Available To satisfy the performance requirement of LT codes with Unequal Erasure Protection (UEP) in the underwater environment, the Weighted Expanding Window Fountain (WEWF) code is proposed in this paper. The WEWF codes can achieve a strong UEP property by nonuniformly selecting input symbols within each window. To overcome the disadvantages in terms of redundancy in the lower prioritized segments, Correlation Chain Feedback (CCFB) is also introduced to help the transmitter to precisely adjust the encoding scheme. Asymptotic analysis and simulation results demonstrate that the proposed approach can achieve a lower symbol error rate and less overall redundancy in underwater acoustic sensor networks.
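
    The WEWF construction itself is not reproducible from this abstract, but the underlying LT encoding step that any such fountain scheme builds on is simple: draw a degree from a distribution, pick that many input symbols, and XOR them. A generic, hedged sketch follows; the degree distribution and the absence of window weighting are placeholders, not the proposed scheme.

```python
import random

def lt_encode(symbols, n_output, degree_dist, rng=random):
    """Generic LT encoding: each output symbol is the XOR of d uniformly
    chosen input symbols, with d drawn from degree_dist = [(degree, prob), ...]."""
    degrees, probs = zip(*degree_dist)
    encoded = []
    for _ in range(n_output):
        d = rng.choices(degrees, weights=probs, k=1)[0]
        picks = rng.sample(range(len(symbols)), k=min(d, len(symbols)))
        value = 0
        for i in picks:
            value ^= symbols[i]
        encoded.append((picks, value))           # neighbour indices + XOR value
    return encoded

# Toy example: 8 one-byte input symbols and a hypothetical degree distribution.
data = [0x3A, 0x7F, 0x01, 0xC4, 0x55, 0x9E, 0x10, 0x68]
dist = [(1, 0.1), (2, 0.5), (3, 0.25), (4, 0.15)]
print(lt_encode(data, n_output=12, degree_dist=dist)[:3])
```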

  11. Is it Code Imperfection or 'garbage in Garbage Out'? Outline of Experiences from a Comprehensive Adr Code Verification

    Science.gov (United States)

    Zamani, K.; Bombardelli, F. A.

    2013-12-01

    The ADR equation describes many physical phenomena of interest in the field of water quality in natural streams and groundwater. In many cases, such as density-driven flow, multiphase reactive transport, and sediment transport, one or more terms in the ADR equation may become nonlinear. For that reason, numerical tools are the only practical choice for solving these PDEs. All numerical solvers developed for the transport equation need to undergo a code verification procedure before they are put into practice. Code verification is a mathematical activity to uncover failures and check for rigorous discretization of PDEs and implementation of initial/boundary conditions. In the context of computational PDEs, verification is not a well-defined procedure with a clear path. Thus, verification tests should be designed and implemented with in-depth knowledge of the numerical algorithms and the physics of the phenomena, as well as the mathematical behavior of the solution. Even test results need to be mathematically analyzed to distinguish between an inherent limitation of an algorithm and a coding error. It is therefore well known that code verification remains something of an art, in which innovative methods and case-based tricks are very common. This study presents the full verification of a general transport code. To that end, a complete test suite is designed to probe the ADR solver comprehensively and discover all possible imperfections. In this study we convey our experiences in finding several errors which were not detectable with routine verification techniques. We developed a test suite including hundreds of unit tests and system tests. The test package increases gradually in complexity, starting from simple tests and proceeding to the most sophisticated level. Appropriate verification metrics are defined for the required capabilities of the solver as follows: mass conservation, convergence order, capabilities in handling stiff problems, nonnegative concentration, shape preservation, and
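
    One of the routine checks named above, verification of the convergence order, is usually done by computing an observed order of accuracy from errors on successively refined grids, p = log(e_coarse / e_fine) / log(r). A minimal helper for that calculation is sketched below; it is generic and not tied to the authors' test suite, and the error values in the example are made up.

```python
import math

def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
    """Observed order of accuracy from error norms on two grids refined by r."""
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

# Example: hypothetical error norms from a pair of grid-refinement runs of an ADR solver.
print(observed_order(4.1e-3, 1.05e-3))   # close to 2 for a second-order scheme
```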

  12. Space Time Codes from Permutation Codes

    CERN Document Server

    Henkel, Oliver

    2006-01-01

    A new class of space time codes with high performance is presented. The code design utilizes tailor-made permutation codes, which are known to have large minimal distances as spherical codes. A geometric connection between spherical and space time codes has been used to translate them into the final space time codes. Simulations demonstrate that the performance increases with the block lengths, a result that has already been conjectured in previous work. Further, the connection to permutation codes allows for moderately complex encoding/decoding algorithms.

  13. Assessment of Codes and Standards Applicable to a Hydrogen Production Plant Coupled to a Nuclear Reactor

    Energy Technology Data Exchange (ETDEWEB)

    M. J. Russell

    2006-06-01

    This is an assessment of codes and standards applicable to a hydrogen production plant to be coupled to a nuclear reactor. The result of the assessment is a list of codes and standards that are expected to be applicable to the plant during its design and construction.

  14. The Evolution of a Coding Schema in a Paced Program of Research

    Science.gov (United States)

    Winters, Charlene A.; Cudney, Shirley; Sullivan, Therese

    2010-01-01

    A major task involved in the management, analysis, and integration of qualitative data is the development of a coding schema to facilitate the analytic process. Described in this paper is the evolution of a coding schema that was used in the analysis of qualitative data generated from online forums of middle-aged women with chronic conditions who…

  15. Folklore in bureaucracy code: Running a music event

    Directory of Open Access Journals (Sweden)

    Krstanović-Lukić Miroslava

    2004-01-01

    Full Text Available A folk-created piece of music is a construction expressed as a paradigm, part of a set within the bureaucratic system and the public arena. Such a work is a mechanical concept, which defines inheritance as a construction of authenticity saturated with elements of folk, national culture. It is also subject to certain conventions in the system of regulations; namely, it is a part of the administrative code. The use of the folk-created work as a paradigm and in legislation is realized through an organizational apparatus; that is, it becomes entertainment, a spectacle. This paper analyzes the functioning of the organizational machinery of a folk spectacle, starting with the government authorities, local self-management and the spectacle's administrative committees. To illustrate this phenomenon, the paper presents the development of a trumpet-playing festival in Dragačevo. This particular festival establishes a cultural, economic and political order with a clear and defined division of power. The analysis shows that the folk event in question, through its programs and activities, represents a scene and arena of individual and group interests. Organizational interactions are recognized in binary oppositions: sovereignty/dependency, official/unofficial, dominance/subordination, innovative/inherited, common/different, needed/useful, original/copy, one's own/someone else's.

  16. Neutral Particle Transport in Cylindrical Plasma Simulated by a Monte Carlo Code

    Institute of Scientific and Technical Information of China (English)

    YU Deliang; YAN Longwen; ZHONG Guangwu; LU Jie; YI Ping

    2007-01-01

    A Monte Carlo code (MCHGAS) has been developed to investigate the neutral particle transport. The code can calculate the radial profile and energy spectrum of neutral particles in cylindrical plasmas. The calculation time of the code is dramatically reduced when the Splitting and Roulette schemes are applied. The plasma model of an infinite cylinder is assumed in the code, which is very convenient in simulating neutral particle transports in small and middle-sized tokamaks. The design of the multi-channel neutral particle analyser (NPA) on HL-2A can be optimized by using this code.
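
    The Splitting and Roulette variance-reduction step mentioned above can be summarized in a few lines: heavy particles are split into several lower-weight copies, and light particles either survive with a boosted weight or are killed. The sketch below shows that weight bookkeeping in isolation, with hypothetical thresholds; it is not the MCHGAS implementation.

```python
import random

def split_and_roulette(weight, w_high=2.0, w_low=0.25, survival_weight=0.5, rng=random):
    """Return a list of statistical weights replacing one particle of `weight`.

    Splitting preserves the expected total weight; Russian roulette removes
    low-weight particles without biasing the mean."""
    if weight > w_high:
        n = int(weight / survival_weight)
        return [weight / n] * n                       # split into n equal copies
    if weight < w_low:
        p_survive = weight / survival_weight
        if rng.random() < p_survive:
            return [survival_weight]                  # survivor carries weight / p_survive
        return []                                     # killed
    return [weight]                                   # untouched

print(split_and_roulette(3.2))    # e.g. six copies of weight ~0.53
print(split_and_roulette(0.1))    # survives with weight 0.5 about 20% of the time
```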

  17. Performance Analysis of a Decoding Algorithm for Algebraic Geometry Codes

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund; Høholdt, Tom

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is greater than or equal to [(dFR-1)/2]+1, where dFR is the Feng-Rao distance.

  18. JSPAM: A restricted three-body code for simulating interacting galaxies

    Science.gov (United States)

    Wallin, J. F.; Holincheck, A. J.; Harvey, A.

    2016-07-01

    Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.

  19. JSPAM: A restricted three-body code for simulating interacting galaxies

    CERN Document Server

    Wallin, John; Harvey, Allen

    2015-01-01

    Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License (AFL) v.3.0 and has been added to the Astrophysics Source Code Library.

  20. TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D.E.

    1997-11-22

    TART97 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  1. A study of transonic aerodynamic analysis methods for use with a hypersonic aircraft synthesis code

    Science.gov (United States)

    Sandlin, Doral R.; Davis, Paul Christopher

    1992-01-01

    A means of performing routine transonic lift, drag, and moment analyses on hypersonic all-body and wing-body configurations was studied. The analysis method is to be used in conjunction with the Hypersonic Vehicle Optimization Code (HAVOC). A review of existing techniques is presented, after which three methods, chosen to represent a spectrum of capabilities, are tested and the results are compared with experimental data. The three methods consist of a wave drag code, a full potential code, and a Navier-Stokes code. The wave drag code, representing the empirical approach, has very fast CPU times, but very limited and sporadic results. The full potential code provides results which compare favorably to the wind tunnel data, but with a dramatic increase in computational time. Even more extreme is the Navier-Stokes code, which provides the most favorable and complete results, but with a very large turnaround time. The full potential code, TRANAIR, is used for additional analyses, because of the superior results it can provide over empirical and semi-empirical methods, and because of its automated grid generation. TRANAIR analyses include an all-body hypersonic cruise configuration and an oblique flying wing supersonic transport.

  2. A Low-Jitter Wireless Transmission Based on Buffer Management in Coding-Aware Routing

    Directory of Open Access Journals (Sweden)

    Cunbo Lu

    2015-08-01

    Full Text Available It is significant to reduce packet jitter for real-time applications in a wireless network. Existing coding-aware routing algorithms use the opportunistic network coding (ONC) scheme in the packet coding algorithm. The ONC scheme never delays packets to wait for the arrival of a future coding opportunity. The loss of some potential coding opportunities may degrade the contribution of network coding to jitter performance. In addition, most of the existing coding-aware routing algorithms assume that all flows participating in the network have equal rate. This is unrealistic, since multi-rate environments often appear. To overcome the above problem and expand coding-aware routing to multi-rate scenarios, from the viewpoint of data transmission, we present a low-jitter wireless transmission algorithm based on buffer management (BLJCAR), which schedules packets at the coding node according to a queue-length-based threshold policy instead of the regular ONC policy used in existing coding-aware routing algorithms. BLJCAR is a unified framework that merges the single-rate and multi-rate cases. Simulation results show that the BLJCAR algorithm embedded in coding-aware routing outperforms the traditional ONC policy in terms of jitter, packet delivery delay, packet loss ratio and network throughput under network congestion at any traffic rate.

  3. A Joint-Coding Scheme With Crosstalk Avoidance in Network On Chip

    Directory of Open Access Journals (Sweden)

    Fen Ge

    2013-01-01

    Full Text Available Reliable transfer in a Network on Chip can be guaranteed by crosstalk avoidance and error detection codes. In this paper, we propose a joint coding scheme that combines crosstalk avoidance coding with error control coding. The Fibonacci numeral system is applied to satisfy the requirement of crosstalk avoidance coding, and error detection is achieved by adding parity bits. We also implement the codec at register transfer level. Furthermore, schemes for applying the codec to a fault-tolerant router are analyzed. The experimental results show that the "once encode, multiple decode" scheme outperforms other schemes in the trade-off of delay, area and power.
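
    The crosstalk-avoidance part of the scheme relies on the Fibonacci numeral system: every integer has a Zeckendorf-style representation with no two adjacent 1s, which rules out the worst opposite-transition patterns on neighbouring wires. The short encoder below produces such a representation as an illustration; the codeword widths and the parity step of the proposed joint code are not modelled.

```python
def fibonacci_encode(value, width=8):
    """Fixed-width Fibonacci (Zeckendorf-like) codeword for `value`:
    greedy subtraction of Fibonacci weights guarantees no adjacent 1s."""
    fibs = [1, 2]
    while len(fibs) < width:
        fibs.append(fibs[-1] + fibs[-2])          # weights 1, 2, 3, 5, 8, ...
    bits = []
    for f in reversed(fibs):
        if f <= value:
            bits.append(1)
            value -= f
        else:
            bits.append(0)
    assert value == 0, "value too large for this width"
    return bits

for v in (0, 7, 19, 42):
    word = fibonacci_encode(v)
    assert all(not (a and b) for a, b in zip(word, word[1:]))   # no adjacent 1s
    print(v, word)
```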

  4. Modelling of sprays in containment applications with A CMFD code

    Energy Technology Data Exchange (ETDEWEB)

    Mimouni, S., E-mail: stephane.mimouni@edf.f [Electricite de France R and D Division, 6 Quai Watier, F-78400 Chatou (France); Lamy, J.-S. [Electricite de France R and D Division, 1 av. du General de Gaulle, F-92140 Clamart (France); Lavieville, J. [Electricite de France R and D Division, 6 Quai Watier, F-78400 Chatou (France); Guieu, S.; Martin, M. [Electricite de France SEPTEN Division, 12-14 av. Dutrievoz, 69628 Villeurbanne (France)

    2010-09-15

    During the course of a hypothetical severe accident in a Pressurized Water Reactor (PWR), spray systems are used in the containment in order to prevent overpressure in case of a steam line break, and to enhance the gas mixing in case of the presence of hydrogen. In the frame of the Severe Accident Research Network (SARNET) of the 6th EC Framework Programme, two tests were produced in the TOSQAN facility in order to study the spray behaviour under severe accident conditions: TOSQAN 101 and TOSQAN 113. The TOSQAN facility is a closed cylindrical vessel. The inner spray system is located on the top of the enclosure on the vertical axis. For the TOSQAN 101 case, an initial pressurization in the vessel is performed with superheated steam up to 2.5 bar. Then, steam injection is stopped and spraying starts simultaneously at a given water temperature (around 25 {sup o}C) and water mass flow-rate (around 30 g/s). The depressurization transient starts and continues until the equilibrium phase, which corresponds to the stabilization of the average temperature and pressure of the gaseous mixture inside the vessel. The purpose of the TOSQAN 113 cold spray test is to study helium mixing due to spray activation without heat and mass transfers between gas and droplets. We present in this paper the spray modelling implemented in NEPTUNE_CFD, a three-dimensional multi-fluid code developed especially for nuclear reactor applications. A new model dedicated to the droplet evaporation at the wall is also detailed. Keeping in mind the Best Practice Guidelines, closure laws have been selected to ensure a grid-dependence as weak as possible. For the TOSQAN 113 case, the calculated time evolution of the helium volume fraction shows that the physical approach described in the paper is able to reproduce the mixing of helium by the spray. The prediction of the transient behaviour should be improved by including in the model corrections based on better understanding of the influence of the

  5. A CMOS Imager with Focal Plane Compression using Predictive Coding

    Science.gov (United States)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and it is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
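
    Golomb-Rice coding fits this design because prediction residuals cluster around zero and the code can be produced with simple counter/shift logic. The sketch below shows, in software, the bit pattern such a coder emits for one residual; the chip implements the equivalent in mixed-signal hardware, and the parameter names here are illustrative.

```python
def golomb_rice_encode(residual, k):
    """Golomb-Rice codeword for one prediction residual, divisor 2**k."""
    # Zig-zag mapping of signed residuals: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    n = (residual << 1) if residual >= 0 else ((-residual << 1) - 1)
    quotient, remainder = n >> k, n & ((1 << k) - 1)
    unary = "1" * quotient + "0"                        # quotient in unary
    binary = format(remainder, f"0{k}b") if k else ""   # remainder in k bits
    return unary + binary

# Example: golomb_rice_encode(-3, 2) -> '101' + '01' is not produced;
# -3 maps to 5, giving quotient 1 and remainder 1, i.e. the bits '10' + '01'.
```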

  6. Modeling Vortex Generators in a Navier-Stokes Code

    Science.gov (United States)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.

  7. On the performance of a 2D unstructured computational rheology code on a GPU

    NARCIS (Netherlands)

    Pereira, S.P.; Vuik, K.; Pinho, F.T.; Nobrega, J.M.

    2013-01-01

    The present work explores the massively parallel capabilities of the most advanced architecture of graphics processing units (GPUs) code named “Fermi”, on a two-dimensional unstructured cell-centred finite volume code. We use the SIMPLE algorithm to solve the continuity and momentum equations that w

  8. Application of a Two-dimensional Unsteady Viscous Analysis Code to a Supersonic Throughflow Fan Stage

    Science.gov (United States)

    Steinke, Ronald J.

    1989-01-01

    The Rai ROTOR1 code for two-dimensional, unsteady viscous flow analysis was applied to a supersonic throughflow fan stage design. The axial Mach number for this fan design increases from 2.0 at the inlet to 2.9 at the outlet. The Rai code uses overlapped O- and H-grids that are appropriately packed. The Rai code was run on a Cray XMP computer; then data postprocessing and graphics were performed to obtain detailed insight into the stage flow. The large rotor wakes uniformly traversed the rotor-stator interface and dispersed as they passed through the stator passage. Only weak blade shock losses were computed, which supports the design goals. Strong viscous effects caused large blade wakes and a low fan efficiency. Rai code flow predictions were essentially steady for the rotor, and they compared well with Chima rotor viscous code predictions based on a C-grid of similar density.

  9. Rewriting the Epigenetic Code for Tumor Resensitization: A Review

    Directory of Open Access Journals (Sweden)

    Bryan Oronsky

    2014-10-01

    Full Text Available In cancer chemotherapy, one axiom, which has practically solidified into dogma, is that acquired resistance to antitumor agents or regimens, nearly inevitable in all patients with metastatic disease, remains unalterable and irreversible, rendering therapeutic rechallenge futile. However, the introduction of epigenetic therapies, including histone deacetylase inhibitors (HDACis) and DNA methyltransferase inhibitors (DNMTIs), provides oncologists, like computer programmers, with new techniques to “overwrite” the modifiable software pattern of gene expression in tumors and challenge the “one and done” treatment prescription. Taking the epigenetic code-as-software analogy a step further, if chemoresistance is the product of multiple nongenetic alterations, which develop and accumulate over time in response to treatment, then the possibility to hack or tweak the operating system and fall back on a “system restore” or “undo” feature, like the arrow icon in the Windows XP toolbar, reconfiguring the tumor to its baseline nonresistant state, holds tremendous promise for turning advanced, metastatic cancer from a fatal disease into a chronic, livable condition. This review aims 1) to explore the potential mechanisms by which a group of small molecule agents including HDACis (entinostat and vorinostat), DNMTIs (decitabine and 5-azacytidine), and redox modulators (RRx-001) may reprogram the tumor microenvironment from a refractory to a nonrefractory state, 2) to highlight some recent findings, and 3) to discuss whether the current “once burned forever spurned” paradigm in the treatment of metastatic disease should be revised to promote active resensitization attempts with formerly failed chemotherapies.

  10. Rewriting the epigenetic code for tumor resensitization: a review.

    Science.gov (United States)

    Oronsky, Bryan; Oronsky, Neil; Scicinski, Jan; Fanger, Gary; Lybeck, Michelle; Reid, Tony

    2014-10-01

    In cancer chemotherapy, one axiom, which has practically solidified into dogma, is that acquired resistance to antitumor agents or regimens, nearly inevitable in all patients with metastatic disease, remains unalterable and irreversible, rendering therapeutic rechallenge futile. However, the introduction of epigenetic therapies, including histone deacetylase inhibitors (HDACis) and DNA methyltransferase inhibitors (DNMTIs), provides oncologists, like computer programmers, with new techniques to "overwrite" the modifiable software pattern of gene expression in tumors and challenge the "one and done" treatment prescription. Taking the epigenetic code-as-software analogy a step further, if chemoresistance is the product of multiple nongenetic alterations, which develop and accumulate over time in response to treatment, then the possibility to hack or tweak the operating system and fall back on a "system restore" or "undo" feature, like the arrow icon in the Windows XP toolbar, reconfiguring the tumor to its baseline nonresistant state, holds tremendous promise for turning advanced, metastatic cancer from a fatal disease into a chronic, livable condition. This review aims 1) to explore the potential mechanisms by which a group of small molecule agents including HDACis (entinostat and vorinostat), DNMTIs (decitabine and 5-azacytidine), and redox modulators (RRx-001) may reprogram the tumor microenvironment from a refractory to a nonrefractory state, 2) highlight some recent findings, and 3) discuss whether the current "once burned forever spurned" paradigm in the treatment of metastatic disease should be revised to promote active resensitization attempts with formerly failed chemotherapies.

  11. A quantum algorithm for Viterbi decoding of classical convolutional codes

    Science.gov (United States)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance, codes with large constraint lengths and short decode frames. Other applications of the classical Viterbi algorithm with large state spaces (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model), which is in general much smaller than the total number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.
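
    For reference, the classical algorithm that the QVA accelerates is ordinary Viterbi decoding over the code trellis. A compact hard-decision sketch for a rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal, chosen here only as an example) is:

```python
import itertools

G = (0b111, 0b101)  # example generators (7, 5 octal), constraint length 3

def conv_encode(bits):
    """Rate-1/2 convolutional encoder starting from the all-zero state."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                       # current bit plus 2-bit state
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = float("inf")
    metric = {s: (0 if s == 0 else INF) for s in range(4)}
    paths = {s: [] for s in range(4)}
    for t in range(len(received) // 2):
        r = received[2 * t: 2 * t + 2]
        new_metric = {s: INF for s in range(4)}
        new_paths = {}
        for state, b in itertools.product(range(4), (0, 1)):
            reg = (b << 2) | state
            branch = sum(bin(reg & g).count("1") % 2 != x for g, x in zip(G, r))
            cand, nxt = metric[state] + branch, reg >> 1
            if cand < new_metric[nxt]:
                new_metric[nxt], new_paths[nxt] = cand, paths[state] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(metric, key=metric.get)]

msg = [1, 0, 1, 1]
assert viterbi_decode(conv_encode(msg)) == msg
```

    The survivor-path bookkeeping over all trellis states is exactly the work whose cost grows with the fanout and state count discussed above.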

  12. Financial and clinical governance implications of clinical coding accuracy in neurosurgery: a multidisciplinary audit.

    Science.gov (United States)

    Haliasos, N; Rezajooi, K; O'neill, K S; Van Dellen, J; Hudovsky, Anita; Nouraei, Sar

    2010-04-01

    Clinical coding is the translation of documented clinical activities during an admission to a codified language. Healthcare Resource Groupings (HRGs) are derived from coding data and are used to calculate payment to hospitals in England, Wales and Scotland and to conduct national audit and benchmarking exercises. Coding is an error-prone process and an understanding of its accuracy within neurosurgery is critical for financial, organizational and clinical governance purposes. We undertook a multidisciplinary audit of neurosurgical clinical coding accuracy. Neurosurgeons trained in coding assessed the accuracy of 386 patient episodes. Where clinicians felt a coding error was present, the case was discussed with an experienced clinical coder. Concordance between the initial coder-only clinical coding and the final clinician-coder multidisciplinary coding was assessed. At least one coding error occurred in 71/386 patients (18.4%). There were 36 diagnosis and 93 procedure errors and in 40 cases, the initial HRG changed (10.4%). Financially, this translated to a £111 revenue loss per patient episode, projecting to £171,452 of annual loss to the department. 85% of all coding errors were due to accumulation of coding changes that occurred only once in the whole data set. Neurosurgical clinical coding is error-prone. This is financially disadvantageous and with the coding data being the source of comparisons within and between departments, coding inaccuracies paint a distorted picture of departmental activity and subspecialism in audit and benchmarking. Clinical engagement improves accuracy and is encouraged within a clinical governance framework.

  13. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.

  14. Relative efficiency calculation of a HPGe detector using MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Medeiros, Marcos P.C.; Rebello, Wilson F., E-mail: eng.cavaliere@ime.eb.br, E-mail: rebello@ime.eb.br [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Secao de Engenharia Nuclear; Lopes, Jose M.; Silva, Ademir X., E-mail: marqueslopez@yahoo.com.br, E-mail: ademir@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2015-07-01

    High-purity germanium detectors (HPGe) are mandatory tools for spectrometry because of their excellent energy resolution. The efficiency of such detectors, quoted in the list of specifications by the manufacturer, frequently refers to the relative full-energy peak efficiency, related to the absolute full-energy peak efficiency of a 7.6 cm x 7.6 cm (diameter x height) NaI(Tl) crystal, based on the 1.33 MeV peak of a {sup 60}Co source positioned 25 cm from the detector. In this study, we used MCNPX code to simulate a HPGe detector (Canberra GC3020), from Real-Time Neutrongraphy Laboratory of UFRJ, to survey the spectrum of a {sup 60}Co source located 25 cm from the detector in order to calculate and confirm the efficiency declared by the manufacturer. Agreement between experimental and simulated data was achieved. The model under development will be used for calculating and comparison purposes with the detector calibration curve from software Genie2000™, also serving as a reference for future studies. (author)

  15. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Full Text Available Abstract Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty in finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the
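
    The cost function in such studies is typically the mean squared change of an amino-acid property (Woese's polar requirement is a common choice) over all single-point mutations; a genetic algorithm then searches the space of allowed codon reassignments for tables with a lower cost than the canonical one. Below is a generic sketch of that cost function. The code table and property values must be supplied by the caller (none are hard-coded here), and whether synonymous or stop-involving changes are skipped is a modelling choice, not necessarily the paper's.

```python
from itertools import product

BASES = "UCAG"

def code_error(code, prop):
    """Mean squared change in an amino-acid property over all single-point
    mutations of a genetic code table.

    code: dict mapping each of the 64 codons to an amino acid or 'STOP'
    prop: dict mapping amino acids to a numeric property (user-supplied)
    """
    total, count = 0.0, 0
    for codon in map("".join, product(BASES, repeat=3)):
        aa = code[codon]
        if aa == "STOP":
            continue
        for pos, alt in product(range(3), BASES):
            if alt == codon[pos]:
                continue
            mutant_aa = code[codon[:pos] + alt + codon[pos + 1:]]
            if mutant_aa == "STOP" or mutant_aa == aa:
                continue  # skip stop codons and synonymous changes
            total += (prop[aa] - prop[mutant_aa]) ** 2
            count += 1
    return total / count

# A genetic algorithm would mutate the codon-to-amino-acid mapping (within the
# permitted reassignments) and select tables whose code_error is lower than
# that of the canonical code.
```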

  16. A Software Upgrade of the NASA Aeroheating Code "MINIVER"

    Science.gov (United States)

    Louderback, Pierce Mathew

    2013-01-01

    Computational Fluid Dynamics (CFD) is a powerful and versatile tool for simulating the fluid and thermal environments of launch and re-entry vehicles alike. Where it excels in power and accuracy, however, it lacks in speed. An alternative tool for this purpose is known as MINIVER, an aeroheating code widely used by NASA and within the aerospace industry. Capable of providing swift, reasonably accurate approximations of the fluid and thermal environment of launch vehicles, MINIVER is used where time is of the essence and accuracy need not be exact. However, MINIVER is an old, aging tool: running on a user-unfriendly, legacy command-line interface, it is difficult for it to keep pace with more modern software tools. Florida Institute of Technology was tasked with the construction of a new Graphical User Interface (GUI) that implemented the legacy version's capabilities and enhanced them with new tools and utilities. This thesis provides background to the legacy version of the program, the progression and final version of a modern user interface, and benchmarks to demonstrate its usefulness.

  17. Physics under the bonnet of a stellar evolution code

    Science.gov (United States)

    Stancliffe, Richard J.

    Just how good are modern stellar models? Providing a rigorous assessment of the uncertainties is difficult because of the multiplicity of input physics. Some of the ingredients are reasonably well-known (like reaction rates and opacities). Others are not so good, with convection standing out as a particularly obvious example. In some cases, it is not clear what the ingredients should be: what role do atomic diffusion, rotation, magnetic fields, etc. play in stellar evolution? All this is then compounded by computational method. In converting all this physics into something we can implement in a 1D evolution code, we are forced to make choices about the way the equations are solved, how we will treat mixing at convective boundaries, etc. All of this can impact the models one finally generates. In this review, I will attempt to assess the uncertainties associated with the ingredients and methods used by stellar evolution modellers, and what their impacts may be on the science that we wish to do.

  18. A Secure Code-Based Authentication Scheme for RFID Systems

    Directory of Open Access Journals (Sweden)

    Noureddine Chikouche

    2015-08-01

    Full Text Available Two essential problems are still posed by Radio Frequency Identification (RFID) systems: security and the limitation of resources. In 2014, Li et al. proposed a mutual authentication scheme for RFID systems based on the Quasi Cyclic-Moderate Density Parity Check (QC-MDPC) McEliece cryptosystem, which is designed to reduce key sizes. In this paper, we found that this scheme does not provide the untraceability and forward secrecy properties. Furthermore, we propose an improved version of this scheme to eliminate the existing vulnerabilities of the studied scheme. It is based on the QC-MDPC McEliece cryptosystem with the plaintext padded by a random bit-string. Our work also includes a security comparison between our improved scheme and different code-based RFID authentication schemes. We prove the secrecy and mutual authentication properties with the AVISPA (Automated Validation of Internet Security Protocols and Applications) tools. Concerning performance, our scheme is suitable for low-cost tags with resource limitations.

  19. The source coding game with a cheating switcher

    CERN Document Server

    Palaiyanur, Hari; Sahai, Anant

    2007-01-01

    Motivated by the lossy compression of an active-vision video stream, we consider the problem of finding the rate-distortion function of an arbitrarily varying source (AVS) composed of a finite number of subsources with known distributions. Berger's paper 'The Source Coding Game' (IEEE Trans. Inform. Theory, 1971) solves this problem under the condition that the adversary is allowed only strictly causal access to the subsource realizations. We consider the case when the adversary has access to the subsource realizations non-causally. Using the type-covering lemma, this new rate-distortion function is determined to be the maximum of the IID rate-distortion function over a set of source distributions attainable by the adversary. We then extend the results to allow for partial or noisy observations of subsource realizations. We further explore the model by attempting to find the rate-distortion function when the adversary is actually helpful. Finally, a bound is developed on the uniform continuity of the I...

  20. Rotated Walsh-Hadamard Spreading with Robust Channel Estimation for a Coded MC-CDMA System

    Directory of Open Access Journals (Sweden)

    Raulefs Ronald

    2004-01-01

    Full Text Available We investigate rotated Walsh-Hadamard spreading matrices for a broadband MC-CDMA system with robust channel estimation in the synchronous downlink. The similarities between rotated spreading and signal space diversity are outlined. In a multiuser MC-CDMA system, possible performance improvements are based on the chosen detector, the channel code, and its Hamming distance. By applying rotated spreading in comparison to a standard Walsh-Hadamard spreading code, a higher throughput can be achieved. As combining the channel code and the spreading code forms a concatenated code, the overall minimum Hamming distance of the concatenated code increases. This asymptotically results in an improvement of the bit error rate for high signal-to-noise ratio. Higher convolutional channel code rates are mostly generated by puncturing good low-rate channel codes. The overall Hamming distance decreases significantly for the punctured channel codes. Higher channel code rates are favorable for MC-CDMA, as MC-CDMA utilizes diversity more efficiently compared to pure OFDMA. The application of rotated spreading in an MC-CDMA system allows exploiting diversity even further. We demonstrate that the rotated spreading gain is still present for a robust pilot-aided channel estimator. In a well-designed system, rotated spreading extends the performance by using a maximum likelihood detector with robust channel estimation at the receiver by about 1 dB.

  1. GOVERNANCE CODES: FACTS OR FICTIONS? A STUDY OF GOVERNANCE CODES IN COLOMBIA

    Directory of Open Access Journals (Sweden)

    JULIÁN BENAVIDES FRANCO

    2010-01-01

    to self-regulate, reducing their private benefits and/or the expropriation of non-controlling parties through the introduction of the code, is in fact an effective measure that financial markets support, increasing the supply of funds to firms.

  2. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skills

  3. A New Class of TAST Codes With A Simplified Tree Structure

    CERN Document Server

    Damen, Mohamed Oussama; Badr, Ahmed A

    2010-01-01

    We consider in this paper the design of full diversity and high rate space-time codes with moderate decoding complexity for an arbitrary number of transmit and receive antennas and arbitrary input alphabets. We focus our attention on codes from the threaded algebraic space-time (TAST) framework since the latter includes most known full diversity space-time codes. We propose a new construction of the component single-input single-output (SISO) encoders such that the equivalent code matrix has an upper triangular form. We accomplish this task by designing each SISO encoder to create an ISI-channel in each thread. This, in turn, greatly simplifies the QR-decomposition of the composite channel and code matrix, which is essential for optimal or near-optimal tree search algorithms, such as the sequential decoder.

  4. Code optimisation in a nested-sampling algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, S.J. [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Ireland, D.G., E-mail: David.Ireland@glasgow.ac.uk [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Vanderbauwhede, W. [School of Computing Science, University of Glasgow, Glasgow G12 8QQ (United Kingdom)

    2015-06-11

    The speed-up in program running time is investigated for problems of parameter estimation with nested sampling Monte Carlo methods. The example used in this study is to extract a polarisation observable from event-by-event data from meson photoproduction reactions. Various implementations of the basic algorithm were compared, consisting of combinations of single threaded versus multi-threaded, and CPU versus GPU versions. These were implemented in OpenMP and OpenCL. For the application under study, and with the number of events as used in our work, we find that straightforward multi-threaded CPU OpenMP coding gives the best performance; for larger numbers of events, OpenCL on the CPU performs better. The study also shows that there is a “break-even” point of the number of events where the use of GPUs helps performance. GPUs are not found to be generally helpful for this problem, due to the data transfer times, which more than offset the improvement in computation time.

  5. A HERMENEUTIC ANALYSIS OF THE NEW CIVIL PROCEDURE CODE ADVANCES

    Directory of Open Access Journals (Sweden)

    Lenio Luiz Streck

    2016-07-01

    Full Text Available I have never been ill-disposed toward CPC/15. Everything I wrote criticizing procedural instrumentalism and its side effects remained pertinent until the Rapporteur, Deputy Paulo Teixeira, courageously took up the thesis that there was something more to be addressed in the Project. This something more concerned the philosophical paradigms and the need to control judicial decisions. In any case, I believe that some guiding principles of the new code can be drawn from the project and its complexity, such as the need to maintain the consistency and integrity of the case law (including the precedents), the prohibition of free judicial conviction, which implies a lesser role for the judge's discretion, and the need to adopt the intersubjectivist paradigm, that is, the subjectivity of the judge should be suspended and controlled by the structuring intersubjectivity of law. This is the core of the new "system". Without understanding it, we run the risk of making a reverse revolution. Epistemologically weak reasoning still seated in the objectivist or subjectivist paradigm (or its voluntarist vulgates) can quickly cause the downfall of a good idea.

  6. COMENTE+: A TOOL FOR IMPROVING SOURCE CODE DOCUMENTATION USING INFORMATION RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Julio Cezar Zanoni

    2014-01-01

    Full Text Available Documenting source code is seen by many developers as a boring, time-consuming task. However, well-documented source code allows developers to have better visibility into what was and is being developed, helping, for example, the reuse of the code. This study presents a semi-automatic method for documenting source code from the existing artifacts of a software project under development. The method aims to reduce the developers' workload, allowing them to work on other tasks of the project and/or ensure that the project deadlines will be met. The method, implemented in a tool called Comente+, is capable of creating or updating comments in source code from information recovered from the project artifacts. To implement Comente+, we used an information retrieval approach. We performed some experiments with real data to validate this approach. For that, we created a special measure that estimates how well documented a source code is.
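
    The article above does not spell out the retrieval pipeline, but a minimal information-retrieval baseline for the same task is to index the textual artifacts of the project, query them with the identifiers of a code fragment, and propose the best match as a draft comment. The sketch below uses TF-IDF and cosine similarity from scikit-learn purely as an illustration; it is not the Comente+ implementation, and the function names are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def suggest_comment(code_fragment, artifact_texts):
    """Return the project artifact (requirement, issue, commit message, ...)
    most similar to a code fragment, as a candidate draft comment."""
    artifact_texts = list(artifact_texts)
    corpus = [code_fragment] + artifact_texts
    # Tokenize on identifier-like words so camelCase/snake_case terms match.
    tfidf = TfidfVectorizer(token_pattern=r"[A-Za-z_]+").fit_transform(corpus)
    sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    best = int(sims.argmax())
    return artifact_texts[best], float(sims[best])
```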

  7. Connect and immerse: a poetry of codes and signals

    Directory of Open Access Journals (Sweden)

    Jesper Olsson

    2012-06-01

    Full Text Available This article investigates how codes and signals were employed in avant-garde poetry and art in the 1960s, and how such attempts were performed in the wake of cybernetics and (partly) through the use of new media technologies, such as the tape recorder and the computer. This poetry—as exemplified here by works by Åke Hodell, Peter Weibel, and Henri Chopin—not only employed new materials, media, and methods for the production of poems; it also transformed the interface of literature and the act of reading through immersion in sound, through the activation of different cognitive modes, and through an intersensorial address. On the one hand, this literary and artistic output can be seen as a response to the increasing intermediation (in Katherine Hayles's sense) in culture and society during the last century. On the other hand, we might, as contemporary readers, return to these poetic works in order to use them as media archaeological tools that might shed light on the aesthetic transformations taking place within new media today.

  8. TRIPOLI-3: a neutron/photon Monte Carlo transport code

    Energy Technology Data Exchange (ETDEWEB)

    Nimal, J.C.; Vergnaud, T. [Commissariat a l' Energie Atomique, Gif-sur-Yvette (France). Service d' Etudes de Reacteurs et de Mathematiques Appliquees

    2001-07-01

    The present version of TRIPOLI-3 solves the transport equation for coupled neutron and gamma ray problems in three dimensional geometries by using the Monte Carlo method. This code is devoted both to shielding and criticality problems. The most important feature for particle transport equation solving is the fine treatment of the physical phenomena and the sophisticated biasing techniques useful for deep penetrations. The code is used either for shielding design studies or for reference and benchmark to validate cross sections. Neutronic studies are essentially cell or small core calculations and criticality problems. TRIPOLI-3 has been used as reference method, for example, for resonance self shielding qualification. (orig.)

  9. "Source Coding With a Side Information ""Vending Machine"""

    OpenAIRE

    Weissman, Tsachy; Permuter, Haim H.

    2011-01-01

    We study source coding in the presence of side information, when the system can take actions that affect the availability, quality, or nature of the side information. We begin by extending the Wyner-Ziv problem of source coding with decoder side information to the case where the decoder is allowed to choose actions affecting the side information. We then consider the setting where actions are taken by the encoder, based on its observation of the source. Actions may have costs that are commens...

  10. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    Science.gov (United States)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
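
    For orientation, the binomial code words take the following commonly quoted form (the notation and normalization here follow the published paper only approximately; S sets the Fock-state spacing and N the truncation, and the smallest member with S = N = 1 is the familiar {(|0⟩+|4⟩)/√2, |2⟩} code protecting against single boson loss):

```latex
\[
  |W_{\uparrow/\downarrow}\rangle \;=\; \frac{1}{\sqrt{2^{N}}}
  \sum_{\substack{p=0 \\ p\ \mathrm{even/odd}}}^{N+1}
  \sqrt{\binom{N+1}{p}}\; \bigl| \, p\,(S+1) \bigr\rangle
\]
```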

  11. Coded Aperture Nuclear Scintigraphy: A Novel Small Animal Imaging Technique

    Directory of Open Access Journals (Sweden)

    Dawid Schellingerhout

    2002-10-01

    Full Text Available We introduce and demonstrate the utility of coded aperture (CA) nuclear scintigraphy for imaging small animals. CA imaging uses multiple pinholes in a carefully designed mask pattern, mounted on a conventional gamma camera. System performance was assessed using point sources and phantoms, while several animal experiments were performed to test the usefulness of the imaging system in vivo, with commonly used radiopharmaceuticals. The sensitivity of the CA system for 99mTc was 4.2 × 10^-3 cps/Bq (9400 cpm/μCi), compared to 4.4 × 10^-4 cps/Bq (990 cpm/μCi) for a conventional collimator system. The system resolution was 1.7 mm, as compared to 4–6 mm for the conventional imaging system (using a high-sensitivity low-energy collimator). Animal imaging demonstrated artifact-free imaging with superior resolution and image quality compared to conventional collimator images in several mouse and rat models. We conclude that: (a) CA imaging is a useful nuclear imaging technique for small animal imaging. The advantage in signal-to-noise can be traded to achieve higher resolution, decreased dose or reduced imaging time. (b) CA imaging works best for images where activity is concentrated in small volumes; a low count outline may be better demonstrated using conventional collimator imaging. Thus, CA imaging should be viewed as a technique to complement rather than replace traditional nuclear imaging methods. (c) CA hardware and software can be readily adapted to existing gamma cameras, making their implementation a relatively inexpensive retrofit to most systems.

  12. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  13. Joint source channel coding using arithmetic codes

    CERN Document Server

    Bi, Dongsheng

    2009-01-01

    Based on the encoding process, arithmetic codes can be viewed as tree codes and current proposals for decoding arithmetic codes with forbidden symbols belong to sequential decoding algorithms and their variants. In this monograph, we propose a new way of looking at arithmetic codes with forbidden symbols. If a limit is imposed on the maximum value of a key parameter in the encoder, this modified arithmetic encoder can also be modeled as a finite state machine and the code generated can be treated as a variable-length trellis code. The number of states used can be reduced and techniques used fo

  14. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques that is often interchangeably used with speech coding is the term voice coding. This term is more generic in the sense that the

  15. ORNL ALICE: a statistical model computer code including fission competition. [In FORTRAN

    Energy Technology Data Exchange (ETDEWEB)

    Plasil, F.

    1977-11-01

    A listing of the computer code ORNL ALICE is given. This code is a modified version of computer codes ALICE and OVERLAID ALICE. It allows for higher excitation energies and for a greater number of evaporated particles than the earlier versions. The angular momentum removal option was made more general and more internally consistent. Certain roundoff errors are avoided by keeping a strict accounting of partial probabilities. Several output options were added.

  16. A combinatorial code for splicing silencing: UAGG and GGGG motifs.

    Directory of Open Access Journals (Sweden)

    Kyoungha Han

    2005-05-01

    Full Text Available Alternative pre-mRNA splicing is widely used to regulate gene expression by tuning the levels of tissue-specific mRNA isoforms. Few regulatory mechanisms are understood at the level of combinatorial control despite numerous sequences, distinct from splice sites, that have been shown to play roles in splicing enhancement or silencing. Here we use molecular approaches to identify a ternary combination of exonic UAGG and 5'-splice-site-proximal GGGG motifs that functions cooperatively to silence the brain-region-specific CI cassette exon (exon 19) of the glutamate NMDA R1 receptor (GRIN1) transcript. Disruption of three components of the motif pattern converted the CI cassette into a constitutive exon, while predominant skipping was conferred when the same components were introduced, de novo, into a heterologous constitutive exon. Predominant exon silencing was directed by the motif pattern in the presence of six competing exonic splicing enhancers, and this effect was retained after systematically repositioning the two exonic UAGGs within the CI cassette. In this system, hnRNP A1 was shown to mediate silencing while hnRNP H antagonized silencing. Genome-wide computational analysis combined with RT-PCR testing showed that a class of skipped human and mouse exons can be identified by searches that preserve the sequence and spatial configuration of the UAGG and GGGG motifs. This analysis suggests that the multi-component silencing code may play an important role in the tissue-specific regulation of the CI cassette exon, and that it may serve more generally as a molecular language to allow for intricate adjustments and the coordination of splicing patterns from different genes.
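
    The genome-wide search described above preserves both the motif sequences and their spatial arrangement. A toy version of such a scan is sketched below; the 30-nt window and the requirement of two exonic UAGGs are hypothetical illustrative choices, not the study's exact parameters.

```python
import re

def has_silencing_pattern(exon_seq, downstream_intron_seq, gggg_window=30):
    """Crude check for the combinatorial pattern: at least two exonic UAGG
    motifs plus a GGGG run close to the 5' splice site (exon end or the
    first few intronic nucleotides)."""
    exon = exon_seq.upper().replace("T", "U")
    downstream = downstream_intron_seq.upper().replace("T", "U")[:gggg_window]
    uagg_hits = [m.start() for m in re.finditer("UAGG", exon)]
    gggg_near_5ss = "GGGG" in exon[-gggg_window:] or "GGGG" in downstream
    return len(uagg_hits) >= 2 and gggg_near_5ss
```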

  17. Consensus Coding as a Tool in Visual Appearance Research

    Directory of Open Access Journals (Sweden)

    D R Simmons

    2011-04-01

    Full Text Available A common problem in visual appearance research is how to quantitatively characterise the visual appearance of a region of an image which is categorised by human observers in the same way. An example of this is scarring in medical images (Ayoub et al, 2010, The Cleft-Palate Craniofacial Journal, in press). We have argued that “scarriness” is itself a visual appearance descriptor which summarises the distinctive combination of colour, texture and shape information which allows us to distinguish scarred from non-scarred tissue (Simmons et al, ECVP 2009). Other potential descriptors for other image classes would be “metallic”, “natural”, or “liquid”. Having developed an automatic algorithm to locate scars in medical images, we then tested “ground truth” by asking untrained observers to draw around the region of scarring. The shape and size of the scar on the image was defined by building a contour plot of the agreement between observers' outlines and thresholding at the point above which 50% of the observers agreed: a consensus coding scheme. Based on the variability in the amount of overlap between the scar as defined by the algorithm, and the consensus scar of the observers, we have concluded that the algorithm does not completely capture the putative appearance descriptor “scarriness”. A simultaneous analysis of qualitative descriptions of the scarring by the observers revealed that other image features than those encoded by the algorithm (colour and texture) might be important, such as scar boundary shape. This approach to visual appearance research in medical imaging has potential applications in other application areas, such as botany, geology and archaeology.
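
    The consensus-coding step itself is simple to express: stack the observers' binary outlines, compute the per-pixel agreement fraction, and threshold it at 50%; the overlap with the algorithm's region can then be quantified. The sketch below uses a Dice coefficient as one illustrative overlap measure, since the abstract does not name the exact measure used.

```python
import numpy as np

def consensus_region(masks, agreement=0.5):
    """Combine binary outlines from several observers into a consensus region:
    keep pixels marked by more than the given fraction of observers."""
    stack = np.stack([np.asarray(m, dtype=float) for m in masks])
    agreement_map = stack.mean(axis=0)     # fraction of observers per pixel
    return agreement_map > agreement       # 50% threshold by default

def dice_overlap(a, b):
    """Dice coefficient between the algorithm's region and the consensus."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```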

  18. A New Family of Unitary Space-Time Codes with a Fast Parallel Sphere Decoder Algorithm

    CERN Document Server

    Chen, Xinjia; Aravena, Jorge L

    2007-01-01

    In this paper we propose a new design criterion and a new class of unitary signal constellations for differential space-time modulation for multiple-antenna systems over Rayleigh flat-fading channels with unknown fading coefficients. Extensive simulations show that the new codes have significantly better performance than existing codes. We have compared the performance of our codes with differential detection schemes using orthogonal design, Cayley differential codes, fixed-point-free group codes and product of groups and for the same bit error rate, our codes allow smaller signal to noise ratio by as much as 10 dB. The design of the new codes is accomplished in a systematic way through the optimization of a performance index that closely describes the bit error rate as a function of the signal to noise ratio. The new performance index is computationally simple and we have derived analytical expressions for its gradient with respect to constellation parameters. Decoding of the proposed constellations is reduc...

  19. Verification & Validation Toolkit to Assess Codes: Is it Theory Limitation, Numerical Method Inadequacy, Bug in the Code or a Serious Flaw?

    Science.gov (United States)

    Bombardelli, F. A.; Zamani, K.

    2014-12-01

    We introduce and discuss an open-source, user-friendly, numerical post-processing piece of software to assess the reliability of the modeling results of environmental fluid mechanics codes. Verification and Validation, Uncertainty Quantification (VAVUQ) is a toolkit developed in Matlab© for general V&V purposes. In this work, the VAVUQ implementation of V&V techniques and its user interface are discussed. VAVUQ is able to read Excel, Matlab, ASCII, and binary files, and it produces a log of the results in txt format. Next, each capability of the code is discussed through an example: the first example is the code verification of a sediment transport code, developed with the Finite Volume Method, with MES. The second example is a solution verification of a code for groundwater flow, developed with the Boundary Element Method, via MES. The third example is a solution verification of a mixed-order, Compact Difference Method code for heat transfer via MMS. The fourth example is a solution verification of a 2-D, Finite Difference Method code for floodplain analysis via Complete Richardson Extrapolation. In turn, the application of VAVUQ in quantitative model skill assessment (validation) studies of environmental codes is given through two examples: validation of a two-phase flow computational model of air entrainment in a free surface flow against lab measurements, and heat transfer modeling at the earth surface against field measurements. At the end, we discuss practical considerations and common pitfalls in the interpretation of V&V results.
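
    Several of the verification techniques mentioned (MES, MMS, Complete Richardson Extrapolation) reduce to computing an observed order of accuracy from systematically refined grids and comparing it with the formal order. A minimal sketch of those two standard formulas, with illustrative variable names, is:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, refinement_ratio):
    """Observed order of accuracy p from solutions on three grids with a
    constant refinement ratio r: p = ln((f3 - f2) / (f2 - f1)) / ln(r)."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(refinement_ratio)

def richardson_extrapolate(f_medium, f_fine, refinement_ratio, p):
    """Richardson estimate of the exact value from the two finest grids."""
    return f_fine + (f_fine - f_medium) / (refinement_ratio ** p - 1)
```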

  20. gevolution: a cosmological N-body code based on General Relativity

    CERN Document Server

    Adamek, Julian; Durrer, Ruth; Kunz, Martin

    2016-01-01

    We present a new N-body code, gevolution, for the evolution of large scale structure in the Universe. Our code is based on a weak field expansion of General Relativity and calculates all six metric degrees of freedom in Poisson gauge. N-body particles are evolved by solving the geodesic equation which we write in terms of a canonical momentum such that it remains valid also for relativistic particles. We validate the code by considering the Schwarzschild solution and, in the Newtonian limit, by comparing with the Newtonian N-body code Gadget-2. We then proceed with a simulation of large scale structure in a Universe with massive neutrinos where we study the gravitational slip induced by the neutrino shear stress. The code can be extended to include different kinds of dark energy or modified gravity models and going beyond the usually adopted quasi-static approximation. Our code is publicly available.

  1. A good performance watermarking LDPC code used in high-speed optical fiber communication system

    Science.gov (United States)

    Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue

    2015-07-01

    A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting some pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel. We then use them to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. This algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code had a better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Also, at the cost of about 2.4% of redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
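
    The abstract does not give the exact way the watermark bits modify the initial PDF, but the generic idea of pilot-based noise estimation can be sketched as follows: the known watermark bits act as pilots from which the noise variance is estimated, and that estimate then sets the channel LLRs fed to BP decoding. The BPSK mapping and AWGN-style variance estimate below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def estimate_noise_from_watermark(received, watermark_positions, watermark_bits):
    """Estimate channel noise from known watermark bits, then build LLRs.
    Assumes BPSK mapping 0 -> +1, 1 -> -1 over an AWGN-like channel."""
    ref = 1.0 - 2.0 * np.asarray(watermark_bits)        # expected pilot symbols
    obs = np.asarray(received)[watermark_positions]     # received pilot samples
    noise_var = np.mean((obs - ref) ** 2)                # empirical noise variance
    llr = 2.0 * np.asarray(received) / noise_var         # channel LLRs for BP decoding
    return noise_var, llr
```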

  2. Quantum error correcting codes and one-way quantum computing: Towards a quantum memory

    CERN Document Server

    Schlingemann, D

    2003-01-01

    To realize a quantum memory, we suggest first encoding quantum information via a quantum error correcting code and then concatenating combined decoding and re-encoding operations. This requires that the encoding and the decoding operation can be performed faster than the typical decoherence time of the underlying system. The computational model underlying the one-way quantum computer, which was introduced by Hans Briegel and Robert Raussendorf, provides a suitable concept for a fast implementation of quantum error correcting codes. It is shown explicitly in this article how encoding and decoding operations for stabilizer codes can be realized on a one-way quantum computer. This is based on the graph code representation for stabilizer codes, on the one hand, and the relation between cluster states and graph codes, on the other hand.

  3. A computer code to simulate X-ray imaging techniques

    Energy Technology Data Exchange (ETDEWEB)

    Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-09-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.
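
    The two quantities at the heart of such a simulator, the attenuation law applied along each traced ray and the contrast-to-noise ratio used to predict defect detectability, can be written compactly. The sketch below is a simplified monochromatic form with illustrative names; the software described above also handles polychromatic spectra and full 3D CAD geometry.

```python
import math

def transmitted_intensity(i0, mu_t_pairs):
    """X-ray attenuation law along one ray: I = I0 * exp(-sum(mu_i * t_i)),
    where (mu_i, t_i) are the attenuation coefficient and path length of each
    material crossed by the ray."""
    return i0 * math.exp(-sum(mu * t for mu, t in mu_t_pairs))

def contrast_to_noise_ratio(signal_defect, signal_background, sigma):
    """CNR between a defect region and its background, for a noise level sigma."""
    return abs(signal_defect - signal_background) / sigma
```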

  4. Easy as Pi: A Network Coding Raspberry Pi Testbed

    DEFF Research Database (Denmark)

    Sørensen, Chres Wiant; Hernandez, Nestor; Cabrera Guerrero, Juan A.

    2016-01-01

    of the hardware, but also due to maintenance challenges. In this paper, we present the required key steps to design, setup and maintain an inexpensive testbed using Raspberry Pi devices for communications and storage networks with network coding capabilities. This testbed can be utilized for any applications...

  5. A Normative Code of Conduct for Admissions Officers

    Science.gov (United States)

    Hodum, Robert L.

    2012-01-01

    The increasing competition for the desired quantity and quality of college students, along with the rise of for-profit institutions, has amplified the scrutiny of behavior and ethics among college admissions professionals and has increased the need for meaningful ethical guidelines and codes of conduct. Many other areas of responsibility within…

  6. Erasure Coded Storage on a Changing Network: the Untold Story

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Venkat, Narayan; Oran, David

    2016-01-01

    As faster storage devices become commercially viable alternatives to disk drives, the network is increasingly becoming the bottleneck in achieving good performance in distributed storage systems. This is especially true for erasure coded storage, where the reconstruction of lost data can signific...

  7. Developing a Working Code of Ethics for Human Resource Personnel.

    Science.gov (United States)

    Rampal, Kuldip R.

    1991-01-01

    To develop codes of ethics for their profession, college human resources personnel must first understand their primary job-related responsibilities. These include being alert to evolving organizational needs; coordinating needed training of employees; appreciating the nuances of psychology, communication, and motivation; and observing employee…

  8. MMA, A Computer Code for Multi-Model Analysis

    Science.gov (United States)

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will
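
    The model-averaging step built on these criteria has a simple closed form: each calibrated model receives a weight proportional to exp(-ΔIC/2), where ΔIC is its criterion value minus the minimum over all models, and parameter estimates or predictions are averaged with those weights. A sketch of that standard computation (illustrative function names; MMA itself is a standalone code) is:

```python
import math

def information_criterion_weights(criterion_values):
    """Posterior model probabilities (Akaike-type weights) from AIC/AICc/BIC/KIC
    values of alternative calibrated models: w_i proportional to exp(-0.5 * dIC_i)."""
    ic_min = min(criterion_values)
    raw = [math.exp(-0.5 * (ic - ic_min)) for ic in criterion_values]
    total = sum(raw)
    return [r / total for r in raw]

def model_averaged(predictions, weights):
    """Model-averaged prediction as the probability-weighted mean."""
    return sum(w * p for w, p in zip(weights, predictions))
```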

  9. NERO a code for evaluation of nonlinear resonances in 4D symplectic mappings

    CERN Document Server

    Todesco, Ezio; Giovannozzi, Massimo

    1998-01-01

    A code to evaluate the stability, the position and the width of nonlinear resonances in four-dimensional symplectic mappings is described. NERO is based on the computation of the resonant perturbative series through the use of Lie transformation implemented in the code ARES, and on the analysis of the resonant orbits of the interpolating Hamiltonian. The code is aimed at studying the nonlinear motion of a charged particle moving in a circular accelerator under the influence of nonlinear forces.

  10. A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes

    Science.gov (United States)

    Schnittman, Jeremy David; Krolik, Julian H.

    2013-01-01

    We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

  11. RRTMGP: A High-Performance Broadband Radiation Code for the Next Decade

    Science.gov (United States)

    2015-09-30

    the PI of this project, and his team at AER includes programmers with experience coding for modern computer architectures, including the recent GPU ...Supercomputer Center (CSCS) in Lugano will be developing a GPU version (OpenACC) of this code for use in the ICON LES model. This version will provide a...significant foundation for the GPU version of our code that is a deliverable for this project. Andre Wehe of AER will spend the first week in November

  12. MMA, A Computer Code for Multi-Model Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.

  13. (R, S)-Norm Information Measure and A Relation Between Coding and Questionnaire Theory

    Science.gov (United States)

    Joshi, Rajesh; Kumar, Satish

    2016-10-01

    In this paper, we introduce a quantity which is called (R, S)-norm entropy and discuss some of its major properties in comparison with Shannon’s and other entropies known in the literature. Further, we give an application of (R, S)-norm entropy in coding theory and a coding theorem analogous to the ordinary coding theorem for a noiseless channel. The theorem states that the proposed entropy is the lower bound of mean code word length. Further, we give an application of (R, S)-norm entropy and noiseless coding theorem in questionnaire theory. We show that the relationship between noiseless coding theorem and questionnaire theory through a charging scheme based on the resolution of questions and lower bound on the measure of the charge can also be obtained.
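
    The abstract does not reproduce the (R, S)-norm entropy formula, so the sketch below illustrates only the classical Shannon case that the result generalizes: entropy lower-bounds the mean codeword length of any uniquely decodable binary code, and Shannon code lengths come within one bit of the bound. The source probabilities are illustrative.

```python
# Sketch of the classical noiseless-coding bound that the (R, S)-norm result
# generalizes: Shannon entropy lower-bounds the mean codeword length, and
# Shannon code lengths ceil(-log2 p) come within one bit of it.
import math

p = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}            # illustrative source

entropy = -sum(q * math.log2(q) for q in p.values())
lengths = {s: math.ceil(-math.log2(q)) for s, q in p.items()}
mean_length = sum(p[s] * lengths[s] for s in p)

assert entropy <= mean_length < entropy + 1              # the coding-theorem bound
print(f"H(p) = {entropy:.3f} bits <= mean codeword length = {mean_length:.3f} bits")
```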

  14. A Case for Dynamic Reverse-code Generation to Debug Non-deterministic Programs

    Directory of Open Access Journals (Sweden)

    Jooyong Yi

    2013-09-01

    Backtracking (i.e., reverse execution) helps the user of a debugger to naturally think backwards along the execution path of a program, and thinking backwards makes it easy to locate the origin of a bug. So far backtracking has been implemented mostly by state saving or by checkpointing. These implementations, however, inherently do not scale. Meanwhile, a more recent backtracking method based on reverse-code generation seems promising because executing reverse code can restore the previous states of a program without state saving. Two methods that generate reverse code can be found in the literature: (a) static reverse-code generation, which pre-generates reverse code through static analysis before starting a debugging session, and (b) dynamic reverse-code generation, which generates reverse code by applying dynamic analysis on the fly during a debugging session. In particular, we espoused the latter in our previous work to accommodate non-determinism of a program caused by, e.g., multi-threading. To demonstrate the usefulness of our dynamic reverse-code generation, this article presents a case study of various backtracking methods including ours. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation outperforms the existing backtracking methods in terms of memory efficiency.
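
    To make the notion of reverse code concrete, here is a toy sketch (not the article's dynamic generator): invertible updates are undone by an inverse statement, destructive writes fall back to saving the overwritten value, and replaying the log in reverse order restores the initial program state.

```python
# Toy illustration of reverse code for backtracking (not the article's dynamic
# generator): each forward step is paired with a reverse step; undoing the log
# in reverse order restores the initial state without whole-state checkpoints.
state = {"x": 0, "buf": []}
reverse_log = []                     # stack of undo actions, most recent first

def run(forward, reverse):
    forward(state)
    reverse_log.append(reverse)

# x += 5  ->  reverse is x -= 5 (invertible, no state saving needed)
run(lambda s: s.__setitem__("x", s["x"] + 5),
    lambda s: s.__setitem__("x", s["x"] - 5))

# buf.append(42)  ->  reverse is buf.pop()
run(lambda s: s["buf"].append(42),
    lambda s: s["buf"].pop())

# x = 99 is destructive, so the reverse must capture the overwritten value
old_x = state["x"]
run(lambda s: s.__setitem__("x", 99),
    lambda s, old=old_x: s.__setitem__("x", old))

print("after forward run:  ", state)   # {'x': 99, 'buf': [42]}
while reverse_log:                      # backtrack to the initial state
    reverse_log.pop()(state)
print("after backtracking: ", state)   # {'x': 0, 'buf': []}
```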

  15. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Directory of Open Access Journals (Sweden)

    Ai-bing Zhang

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also outperformed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95% CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95% CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.

  16. QR code based noise-free optical encryption and decryption of a gray scale image

    Science.gov (United States)

    Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-03-01

    In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.

  17. Clustering of neural code words revealed by a first-order phase transition

    Science.gov (United States)

    Huang, Haiping; Toyoizumi, Taro

    2016-06-01

    A network of neurons in the central nervous system collectively represents information by its spiking activity states. Typically observed states, i.e., code words, occupy only a limited portion of the state space due to constraints imposed by network interactions. Geometrical organization of code words in the state space, critical for neural information processing, is poorly understood due to its high dimensionality. Here, we explore the organization of neural code words using retinal data by computing the entropy of code words as a function of Hamming distance from a particular reference codeword. Specifically, we report that the retinal code words in the state space are divided into multiple distinct clusters separated by entropy-gaps, and that this structure is shared with well-known associative memory networks in a recallable phase. Our analysis also elucidates a special nature of the all-silent state. The all-silent state is surrounded by the densest cluster of code words and located within a reachable distance from most code words. This code-word space structure quantitatively predicts typical deviation of a state-trajectory from its initial state. Altogether, our findings reveal a non-trivial heterogeneous structure of the code-word space that shapes information representation in a biological network.
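
    The sketch below, run on purely random binary "spike words", illustrates the kind of analysis described above: code words are grouped by Hamming distance from a reference state (here the all-silent state) and an entropy profile is computed over distance. The data are synthetic, not retinal recordings.

```python
# Sketch of the code-word analysis described above, on synthetic data: group
# binary activity patterns by Hamming distance from the all-silent reference
# state and compute the entropy of code words at each distance.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
n_neurons, n_samples = 20, 5000
words = (rng.random((n_samples, n_neurons)) < 0.1).astype(int)   # sparse spiking

reference = np.zeros(n_neurons, dtype=int)                       # all-silent state
distances = (words != reference).sum(axis=1)                     # Hamming distances

for d in range(n_neurons + 1):
    group = words[distances == d]
    if len(group) == 0:
        continue
    counts = Counter(map(tuple, group))
    p = np.array(list(counts.values()), dtype=float) / len(group)
    entropy = -(p * np.log2(p)).sum()
    print(f"distance {d:2d}: {len(group):5d} words, entropy {entropy:5.2f} bits")
```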

  18. Extreme genetic code optimality from a molecular dynamics calculation of amino acid polar requirement.

    Science.gov (United States)

    Butler, Thomas; Goldenfeld, Nigel; Mathew, Damien; Luthey-Schulten, Zaida

    2009-06-01

    A molecular dynamics calculation of the amino acid polar requirement is used to score the canonical genetic code. Monte Carlo simulation shows that this computational polar requirement has been optimized by the canonical genetic code, an order of magnitude more than any previously known measure, effectively ruling out a vertical evolution dynamics. The sensitivity of the optimization to the precise metric used in code scoring is consistent with code evolution having proceeded through the communal dynamics of statistical proteins using horizontal gene transfer, as recently proposed. The extreme optimization of the genetic code therefore strongly supports the idea that the genetic code evolved from a communal state of life prior to the last universal common ancestor.
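
    The following toy sketch conveys the Monte Carlo scoring idea in miniature: a codon-to-amino-acid assignment is scored by the mean squared change of a property under single-letter codon mutations and compared with randomly shuffled assignments. The two-letter codons and property values are fictitious; they stand in for the real genetic code and the computed polar requirements.

```python
# Toy Monte Carlo sketch of code scoring: score an assignment of codons to
# amino acids by how much a property changes under single-letter mutations,
# then ask how many random assignments score as well. All values are made up.
import itertools
import random

bases = "ACGU"
codons = ["".join(c) for c in itertools.product(bases, repeat=2)]   # toy 2-letter codons
amino_acids = list("abcdefghijklmnop")                               # 16 toy amino acids
prop_rng = random.Random(1)
polar = {aa: prop_rng.uniform(4.0, 13.0) for aa in amino_acids}      # fictitious property

def score(code):
    """Mean squared property change over all single-point codon mutations."""
    total, count = 0.0, 0
    for codon in codons:
        for pos in range(len(codon)):
            for b in bases:
                if b == codon[pos]:
                    continue
                mutant = codon[:pos] + b + codon[pos + 1:]
                total += (polar[code[codon]] - polar[code[mutant]]) ** 2
                count += 1
    return total / count

reference_code = dict(zip(codons, amino_acids))      # stand-in for the canonical code
s0 = score(reference_code)

rng = random.Random(42)
trials, as_good = 2000, 0
for _ in range(trials):
    shuffled = amino_acids[:]
    rng.shuffle(shuffled)
    if score(dict(zip(codons, shuffled))) <= s0:
        as_good += 1
print(f"reference score {s0:.3f}; {as_good}/{trials} random codes score as well or better")
```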

  19. A Brief Study On Uyghur Teachers’ Code-switching In English Classroom

    Institute of Scientific and Technical Information of China (English)

    古力斯旦木·哈德尔艾山

    2013-01-01

    Generally speaking, code-switching is a common phenomenon in language contact, both in bilingual and multilingual environments. It refers to circumstances in which a speaker uses two or more than two language varieties. Teacher code-switching is a quite common occurrence in English classrooms. This study intends to investigate Uyghur teachers' code-switching in English classrooms. The research concludes that the teachers' code-switching plays a complementary and facilitative part in English classrooms. It is necessary to switch from English to Chinese, or English to Uyghur, to explain grammar and the usage of words, and for classroom management.

  20. X-ray enabled MOCASSIN: a 3D code for photoionized media

    CERN Document Server

    Ercolano, Barbara; Drake, Jeremy J; Raymond, John C

    2007-01-01

    We present a new version of the fully 3D photoionization and dust radiative transfer code, MOCASSIN, that uses a Monte Carlo approach for the transfer of radiation. The X-ray enabled MOCASSIN allows a fully geometry independent description of low-density gaseous environments strongly photoionized by a radiation field extending from radio to gamma rays. The code has been thoroughly benchmarked against other established codes routinely used in the literature, using simple plane parallel models designed to test performance under standard conditions. We show the results of our benchmarking exercise and discuss applicability and limitations of the new code, which should be of guidance for future astrophysical studies with MOCASSIN.

  1. A Unidirectional Split-key Based Signature Protocol with Encrypted Function in Mobile Code Environment

    Institute of Scientific and Technical Information of China (English)

    MIAOFuyou; YANGShoubao; XIONGYan; HUABei; WANGXingfu

    2005-01-01

    In mobile code environment, signing private keys are liable to be exposed; visited hosts are susceptible to be attacked by all kinds of vicious mobile codes, therefore a signer often sends remote nodes mobile codes containing an encrypted signature function to complete a signature. The paper first presents a unidirectional split-key scheme for private key protection based on RSA, which is more simple and secure than secret sharing; and then proposes a split-key based signature protocol with encrypted function, which is traceable, undeniable and malignance resistant. Security analysis shows that the protocol can effectively protect the signing private key and complete secure signatures in mobile code environment.
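
    The record does not spell out the unidirectional split-key construction, so the fragment below only illustrates the underlying RSA algebra with a generic additive split of the private exponent, d = d1 + d2: each share yields a partial signature and their product recombines into the full signature. The parameters are tiny and insecure, there is no padding, and none of the mobile-code machinery is modelled.

```python
# Schematic illustration only (not the paper's protocol): split an RSA private
# exponent additively so that two parties can each compute a partial signature;
# their product equals m^d mod n. Tiny insecure parameters, no padding.
import random

p, q = 10007, 10009                  # toy primes, far too small for real use
n, phi = p * q, (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)                  # private exponent

d1 = random.randrange(1, phi)        # share kept by the signer
d2 = (d - d1) % phi                  # share embedded in the mobile code

m = 123456789 % n                    # message representative (no padding)
s1 = pow(m, d1, n)                   # partial signature with share 1
s2 = pow(m, d2, n)                   # partial signature with share 2
s = (s1 * s2) % n                    # combined: m^(d1+d2) = m^d (mod n)

assert s == pow(m, d, n)
print("signature verifies with public key:", pow(s, e, n) == m)
```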

  2. ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Smith, A.B. [ed.; Lawson, R.D.

    1998-06-01

    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS work station and/or the IBM-compatible personal computer.

  3. Code OK2—A simulation code of ion-beam illumination on an arbitrary shape and structure target

    Science.gov (United States)

    Ogoyski, A. I.; Kawata, S.; Someya, T.

    2004-08-01

    For computer simulations on heavy ion beam (HIB) irradiation on a spherical fuel pellet in heavy ion fusion (HIF) the code OK1 was developed and presented in [Comput. Phys. Commun. 157 (2004) 160-172]. The new code OK2 is a modified, upgraded computer program for more common purposes in the research fields of medical treatment and material processing as well as HIF. OK2 provides computational capabilities of a three-dimensional ion beam energy deposition on a target with an arbitrary shape and structure.
    Program summary -- Title of program: OK2. Catalogue identifier: ADTZ. Other versions of this program [1]: OK1 (catalogue identifier: ADST). Program summary URL: http://cpc.cs.qub.as.uk/summaries/ADTZ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: PC (Pentium 4, ~1 GHz or more recommended). Operating system: Windows or UNIX. Program language used: C++. Memory required to execute with typical data: 2048 MB. No. of bits in a word: 32. No. of processors used: 1 CPU. Has the code been vectorized or parallelized: No. No. of bytes in distributed program, including test data: 17 334. No. of lines in distributed program, including test data: 1487. Distribution format: tar gzip file.
    Nature of physical problem: In research areas of HIF (heavy ion beam inertial fusion) energy [1-4] and medical material sciences [5], ion energy deposition profiles need to be evaluated and calculated precisely. Due to a favorable energy deposition behavior of ions in matter [1-4] it is expected that ion beams would be one of the preferable candidates in various fields including HIF and material processing. Especially in HIF, for a successful fuel ignition and a sufficient fusion energy release, a stringent requirement is imposed on the HIB irradiation non-uniformity, which should be less than a few percent [4,6,7]. In order to meet this requirement we need to evaluate the uniformity of a realistic HIB irradiation and energy deposition pattern. The HIB

  4. SITA version 0. A simulation and code testing assistant for TOUGH2 and MARNIE

    Energy Technology Data Exchange (ETDEWEB)

    Seher, Holger; Navarro, Martin

    2016-06-15

    High quality standards have to be met by those numerical codes that are applied in long-term safety assessments for deep geological repositories for radioactive waste. The software environment SITA ('a simulation and code testing assistant for TOUGH2 and MARNIE') has been developed by GRS in order to perform automated regression testing for the flow and transport simulators TOUGH2 and MARNIE. GRS uses the codes TOUGH2 and MARNIE in order to assess the performance of deep geological repositories for radioactive waste. With SITA, simulation results of TOUGH2 and MARNIE can be compared to analytical solutions and simulation results of other code versions. SITA uses data interfaces to operate with codes whose input and output depends on the code version. The present report is part of a wider GRS programme to assure and improve the quality of TOUGH2 and MARNIE. It addresses users as well as administrators of SITA.

  5. A Literature Review of Code Clone Analysis to Improve Software Maintenance Process

    CERN Document Server

    Morshed, Md Monzur; Ahmed, Salah Uddin

    2012-01-01

    Software systems are getting more complex as they grow, and maintaining such systems is a primary concern for the industry. Code cloning is one of the factors that makes software maintenance more difficult. It is the practice of replicating code blocks by copy-and-paste, which is common in software development. In the early stages of a project developers find it easy and time-saving, though it has crucial drawbacks in the long run. Researchers are divided: some think clones lead to additional changes during the maintenance phase and thus increase the overall maintenance effort, while others think that cloned code is more stable than non-cloned code. In this study, we discuss code clones and the related ideas, methods, clone detection tools, research on code clones, and case studies.

  6. A mathematical approach to the study of the United States Code

    Science.gov (United States)

    Bommarito, Michael J.; Katz, Daniel M.

    2010-10-01

    The United States Code (Code) is a document containing over 22 million words that represents a large and important source of Federal statutory law. Scholars and policy advocates often discuss the direction and magnitude of changes in various aspects of the Code. However, few have mathematically formalized the notions behind these discussions or directly measured the resulting representations. This paper addresses the current state of the literature in two ways. First, we formalize a representation of the United States Code as the union of a hierarchical network and a citation network over vertices containing the language of the Code. This representation reflects the fact that the Code is a hierarchically organized document containing language and explicit citations between provisions. Second, we use this formalization to measure aspects of the Code as codified in October 2008, November 2009, and March 2010. These measurements allow for a characterization of the actual changes in the Code over time. Our findings indicate that in the recent past, the Code has grown in its amount of structure, interdependence, and language.
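
    A minimal sketch of the representation formalized above, using the networkx package and a few invented section labels: a hierarchy network and a citation network are built over the same vertices, composed into a union graph, and then measured.

```python
# Sketch of the formalization above: the Code as the union of a hierarchical
# network and a citation network over the same vertices. Section labels invented.
import networkx as nx

hierarchy = nx.DiGraph()
hierarchy.add_edges_from([
    ("Title 26", "26 sec. 1"), ("Title 26", "26 sec. 61"),
    ("Title 42", "42 sec. 1983"), ("Title 42", "42 sec. 2000e"),
])

citations = nx.DiGraph()
citations.add_edges_from([
    ("26 sec. 61", "26 sec. 1"),          # one provision explicitly cites another
    ("42 sec. 1983", "42 sec. 2000e"),
    ("42 sec. 2000e", "26 sec. 61"),
])

union = nx.compose(hierarchy, citations)   # union of the two networks
print("vertices:", union.number_of_nodes(), " edges:", union.number_of_edges())
print("hierarchy depth:", nx.dag_longest_path_length(hierarchy))
print("most-cited provision:", max(citations.in_degree(), key=lambda kv: kv[1]))
```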

  7. A NEW DESIGN METHOD OF CDMA SPREADING CODES BASED ON MULTI-RATE UNITARY FILTER BANK

    Institute of Scientific and Technical Information of China (English)

    Bi Jianxin; Wang Yingmin; Yi Kechu

    2001-01-01

    It is well-known that the multi-valued CDMA spreading codes can be designed by means of a pair of mirror multi-rate filter banks based on some optimizing criterion. This paper indicates that there exists a theoretical bound in the performance of its circulating correlation property, which is given by an explicit expression. Based on this analysis, a criterion of maximizing entropy is proposed to design such codes. Computer simulation result suggests that the resulted codes outperform the conventional binary balanced Gold codes for an asynchronous CDMA system.

  8. Generating code adapted for interlinking legacy scalar code and extended vector code

    Science.gov (United States)

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  9. Subgroup A : nuclear model codes report to the Sixteenth Meeting of the WPEC

    Energy Technology Data Exchange (ETDEWEB)

    Talou, P. (Patrick); Chadwick, M. B. (Mark B.); Dietrich, F. S.; Herman, M.; Kawano, T. (Toshihiko); Konig, A.; Obložinský, P.

    2004-01-01

    The Subgroup A activities focus on the development of nuclear reaction models and codes, used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is of public access. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage efficiently the growing lines of existing codes, and render code inter-comparison much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and the first bricks of the ModLib library, which is constituted of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.

  10. De-randomizing Shannon: The Design and Analysis of a Capacity-Achieving Rateless Code

    CERN Document Server

    Balakrishnan, Hari; Perry, Jonathan; Shah, Devavrat

    2012-01-01

    This paper presents an analysis of spinal codes, a class of rateless codes proposed recently. We prove that spinal codes achieve Shannon capacity for the binary symmetric channel (BSC) and the additive white Gaussian noise (AWGN) channel with an efficient polynomial-time encoder and decoder. They are the first rateless codes with proofs of these properties for BSC and AWGN. The key idea in the spinal code is the sequential application of a hash function over the message bits. The sequential structure of the code turns out to be crucial for efficient decoding. Moreover, counter to the wisdom of having an expander structure in good codes, we show that the spinal code, despite its sequential structure, achieves capacity. The pseudo-randomness provided by a hash function suffices for this purpose. Our proof introduces a variant of Gallager's result characterizing the error exponent of random codes for any memoryless channel. We present a novel application of these error-exponent results within the framework of an...
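
    A toy rendering of the key idea stated above, the sequential application of a hash function over the message bits: successive k-bit chunks are folded into a running hash "spine", and coded symbols are drawn from the spine states pass by pass. This is a simplified illustration, not the authors' encoder or decoder.

```python
# Toy sketch of a spinal-style encoder: fold k-bit message chunks into a chain
# of hash states (the "spine"), then emit symbols from each state pass by pass.
# Simplified illustration only, not the authors' construction.
import hashlib

def spine(message_bits, k=4):
    """Fold k-bit chunks of the message into a chain of hash states."""
    states, s = [], b"\x00" * 32
    for i in range(0, len(message_bits), k):
        chunk = message_bits[i:i + k].ljust(k, "0").encode()
        s = hashlib.sha256(s + chunk).digest()
        states.append(s)
    return states

def symbols(states, passes=3):
    """Emit one byte-sized symbol per spine state per pass; more passes, lower rate."""
    out = []
    for p in range(passes):
        for s in states:
            out.append(hashlib.sha256(s + bytes([p])).digest()[0])
    return out

message = "1011001110001111"
print(symbols(spine(message)))
```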

  11. A MATLAB Code for Three Dimensional Linear Elastostatics using Constant Boundary Elements

    CERN Document Server

    P, Kirana Kumara

    2013-01-01

    The present work presents a code, written in the very simple programming language MATLAB, for three dimensional linear elastostatics using constant boundary elements. The code, in full or in part, is not a translation or a copy of any of the existing codes. The paper explains how the code is written and lists all the formulae used. The code is verified by using it to solve a simple problem which has a well known approximate analytical solution. Of course, the present work does not make any contribution to research on boundary elements in terms of theory. But the work is justified by the fact that, to the best of the author's knowledge, as of now, one cannot find an open access MATLAB code for three dimensional linear elastostatics using constant boundary elements. The author hopes this paper will be of help to beginners who wish to understand how a simple but complete boundary element code works, so that they can build upon and modify the present open access code to solve complex engineering problems quickly and easi...

  12. NERO: a code for the nonlinear evaluation of resonances in one-turn mappings

    Science.gov (United States)

    Todesco, E.; Gemmi, M.; Giovannozzi, M.

    1997-10-01

    We describe a code that evaluates the stability, the position and the width of resonances in four-dimensional symplectic mappings. The code is based on the computation of the resonant perturbative series through the program ARES, and on the analysis of the resonant orbits of the interpolating Hamiltonian. The code is dedicated to the study and to the comparison of the nonlinear behaviour in one-turn betatronic maps.

  13. CODE OF ETHICS – TOOL IN THE DEVELOPMENT OF A FAVORABLE CLIMATE ACCOUNTING PROFESSION

    OpenAIRE

    2013-01-01

    A code of ethics is a formal document through which an organization declares its values and principles on social issues, defines its responsibility towards stakeholders, and states the behavior it expects from its employees. The aim of a code of ethics is to communicate the standards of the organization, namely to guide present and future behavior and actions in different situations, making clear the objectives, the norms and values that support them, and who is responsible. National Code of Ethics for Professional ...

  14. SUMMARY OF GENERAL WORKING GROUP A+B+D: CODES BENCHMARKING.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.; SHAPOSHNIKOVA, E.; ZIMMERMANN, F.; HOFMANN, I.

    2006-05-29

    Computer simulation is an indispensable tool in assisting the design, construction, and operation of accelerators. In particular, computer simulation complements analytical theories and experimental observations in understanding beam dynamics in accelerators. The ultimate function of computer simulation is to study mechanisms that limit the performance of frontier accelerators. There are four goals for the benchmarking of computer simulation codes, namely debugging, validation, comparison and verification: (1) Debugging--codes should calculate what they are supposed to calculate; (2) Validation--results generated by the codes should agree with established analytical results for specific cases; (3) Comparison--results from two sets of codes should agree with each other if the models used are the same; and (4) Verification--results from the codes should agree with experimental measurements. This is the summary of the joint session among working groups A, B, and D of the HI32006 Workshop on computer codes benchmarking.

  15. Ideas for Advancing Code Sharing: A Different Kind of Hack Day

    Science.gov (United States)

    Teuben, P.; Allen, A.; Berriman, B.; DuPrie, K.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Wallin, J. F.

    2014-05-01

    How do we as a community encourage the reuse of software for telescope operations, data processing, and calibration? How can we support making codes used in research available for others to examine? Continuing the discussion from last year's Bring out your codes! BoF session, participants separated into groups to brainstorm ideas to mitigate factors which inhibit code sharing and nurture those which encourage code sharing. The BoF concluded with the sharing of ideas that arose from the brainstorming sessions and a brief summary by the moderator.

  16. Ideas for Advancing Code Sharing (A Different Kind of Hack Day)

    CERN Document Server

    Teuben, Peter; Berriman, Bruce; DuPrie, Kimberly; Hanisch, Robert J; Mink, Jessica; Nemiroff, Robert; Shamir, Lior; Shortridge, Keith; Taylor, Mark; Wallin, John

    2013-01-01

    How do we as a community encourage the reuse of software for telescope operations, data processing, and calibration? How can we support making codes used in research available for others to examine? Continuing the discussion from last year's Bring out your codes! BoF session, participants separated into groups to brainstorm ideas to mitigate factors which inhibit code sharing and nurture those which encourage code sharing. The BoF concluded with the sharing of ideas that arose from the brainstorming sessions and a brief summary by the moderator.

  17. A New Solution of Distributed Disaster Recovery Based on Raptor Code

    Science.gov (United States)

    Deng, Kai; Wang, Kaiyun; Ma, Danyang

    Traditional disaster recovery based on simple replication suffers from large cost, low data availability under multi-node storage, and poor intrusion tolerance; this paper therefore puts forward a distributed disaster recovery scheme based on raptor codes. The article introduces the principle of raptor codes, analyses their coding advantages, and gives a comparative analysis between this solution and traditional solutions in terms of redundancy, data availability and intrusion tolerance. The results show that the distributed disaster recovery solution based on raptor codes can achieve higher data availability as well as better intrusion tolerance capabilities on the premise of lower redundancy.
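
    Raptor codes concatenate a precode with an LT code; as a stand-in, the sketch below shows only the simpler fountain-coding idea they build on: encoded blocks are XORs of random subsets of source blocks, and a peeling decoder recovers the data from a modest surplus of surviving blocks (with high probability rather than certainty).

```python
# Minimal fountain-coding sketch (a simplified LT-style code, not a raptor code):
# encoded blocks are XORs of random subsets of source blocks, so the source data
# can usually be recovered even after some encoded blocks are lost.
import random

def encode(blocks, n_encoded, seed=0):
    rng = random.Random(seed)
    out = []
    for _ in range(n_encoded):
        degree = rng.randint(1, len(blocks))
        idx = set(rng.sample(range(len(blocks)), degree))
        value = 0
        for i in idx:
            value ^= blocks[i]
        out.append((idx, value))
    return out

def peel_decode(encoded, k):
    known, progress = {}, True
    while progress and len(known) < k:
        progress = False
        for idx, val in encoded:
            unknown = idx - known.keys()
            if len(unknown) == 1:                    # degree-one block: solve it
                i = unknown.pop()
                for j in idx & known.keys():
                    val ^= known[j]
                known[i] = val
                progress = True
    return [known.get(i) for i in range(k)]

source = [0x11, 0x22, 0x33, 0x44]                    # four source blocks
coded = encode(source, 10)
survivors = coded[:3] + coded[5:]                     # pretend two blocks were lost
recovered = peel_decode(survivors, len(source))
print("recovered:", recovered, " complete:", recovered == source)
```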

  18. A-to-I editing of protein coding and noncoding RNAs.

    Science.gov (United States)

    Mallela, Arka; Nishikura, Kazuko

    2012-01-01

    Adenosine deaminase acting on RNA (ADAR) catalyzes the hydrolytic deamination of adenosine to inosine in double-stranded RNA (dsRNA) substrates. Inosine pairs preferentially with cytidine, as opposed to uridine; therefore, ADAR editing alters the sequence and base pairing properties of both protein-coding and non-coding RNA. Editing can directly alter the sequence of protein-coding transcripts and modify splicing, or affect a variety of non-coding targets, including microRNA, small interfering RNA, viral transcripts, and repeat elements such as Alu and LINE. Such editing has a wide range of physiological effects, including modification of targets in the brain and in disease states.

  19. A novel approach to correct the coded aperture misalignment for fast neutron imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, F. N.; Hu, H. S., E-mail: huasi-hu@mail.xjtu.edu.cn; Wang, D. M.; Jia, J. [School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Zhang, T. K. [Laser Fusion Research Center, CAEP, Mianyang, 621900 Sichuan (China); Jia, Q. G. [Institute of Applied Physics and Computational Mathematics, Beijing 100094 (China)

    2015-12-15

    Aperture alignment is crucial for the diagnosis of neutron imaging because it has significant impact on the coded imaging and the understanding of the neutron source. In our previous studies on the neutron imaging system with coded aperture for large field of view, a “residual watermark,” certain extra information that overlies the reconstructed image and has nothing to do with the source, is discovered if the peak normalization is employed in genetic algorithms (GA) to reconstruct the source image. Some studies on basic properties of the residual watermark indicate that the residual watermark can characterize the coded aperture and can thus be used to determine the location of the coded aperture relative to the system axis. In this paper, we have further analyzed the essential conditions for the existence of the residual watermark and the requirements of the reconstruction algorithm for the emergence of the residual watermark. A gamma coded imaging experiment has been performed to verify the existence of the residual watermark. Based on the residual watermark, a correction method for the aperture misalignment has been studied. A multiple linear regression model of the position of the coded aperture axis, the position of the residual watermark center, and the gray barycenter of the neutron source, with twenty training samples, has been set up. Using the regression model and verification samples, we have found the position of the coded aperture axis relative to the system axis with an accuracy of approximately 20 μm. Conclusively, a novel approach has been established to correct the coded aperture misalignment for fast neutron coded imaging.
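
    The correction step described above boils down to a multiple linear regression from the residual-watermark centre and the source gray barycentre to the aperture-axis position; the sketch below fits such a model by ordinary least squares on synthetic data (the twenty "training samples", coefficients and units are invented, not the paper's measurements).

```python
# Sketch of the regression step described above, on synthetic data: predict the
# coded-aperture axis offset from the residual-watermark centre and the source
# gray barycentre with an ordinary least-squares multiple linear regression.
import numpy as np

rng = np.random.default_rng(3)
n_train = 20
watermark_centre = rng.uniform(-1.0, 1.0, size=(n_train, 2))   # (x, y), arbitrary units
gray_barycentre  = rng.uniform(-0.5, 0.5, size=(n_train, 2))   # (x, y), arbitrary units

# Pretend ground truth: axis offset depends linearly on both inputs, plus noise.
true_coef = np.array([[0.8, 0.1], [0.05, 0.7], [0.2, -0.1], [-0.1, 0.3]])
X = np.hstack([watermark_centre, gray_barycentre])
y = X @ true_coef + rng.normal(0.0, 0.005, size=(n_train, 2))   # axis offset (x, y)

# Least-squares fit with an intercept column.
X1 = np.hstack([X, np.ones((n_train, 1))])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Apply the fitted model to a new (verification) measurement.
new_sample = np.array([[0.12, -0.30, 0.05, 0.20, 1.0]])
print("predicted aperture-axis offset (x, y):", new_sample @ coef)
```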

  20. A Novel Error Correcting System Based on Product Codes for Future Magnetic Recording Channels

    CERN Document Server

    Van, Vo Tam

    2012-01-01

    We propose a novel construction of product codes for high-density magnetic recording based on binary low-density parity check (LDPC) codes and binary image of Reed Solomon (RS) codes. Moreover, two novel algorithms are proposed to decode the codes in the presence of both AWGN errors and scattered hard errors (SHEs). Simulation results show that at a bit error rate (bER) of approximately 10^-8, our method allows improving the error performance by approximately 1.9dB compared with that of a hard decision decoder of RS codes of the same length and code rate. For the mixed error channel including random noises and SHEs, the signal-to-noise ratio (SNR) is set at 5dB and 150 to 400 SHEs are randomly generated. The bit error performance of the proposed product code shows a significant improvement over that of equivalent random LDPC codes or serial concatenation of LDPC and RS codes.

  1. Occupational self-coding and automatic recording (OSCAR): a novel web-based tool to collect and code lifetime job histories in large population-based studies.

    Science.gov (United States)

    De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul

    2017-03-01

    Objectives: The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods: We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results: OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions: OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.
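
    The agreement check reported above can be reproduced in outline as follows, using scikit-learn's cohen_kappa_score; the SOC codes are invented, and kappa is computed at the full 4-digit level and again after collapsing to the 1-digit major group.

```python
# Sketch of the inter-rater agreement check described above: Cohen's kappa
# between automatically assigned and expert-assigned SOC codes, at the 4-digit
# level and after collapsing to the 1-digit major group. Codes are invented.
from sklearn.metrics import cohen_kappa_score

oscar_codes  = ["2136", "2315", "6121", "9233", "2315", "3537", "6121", "2136"]
expert_codes = ["2136", "2314", "6121", "9232", "2315", "3537", "6122", "2136"]

kappa_4digit = cohen_kappa_score(oscar_codes, expert_codes)
kappa_1digit = cohen_kappa_score([c[0] for c in oscar_codes],
                                 [c[0] for c in expert_codes])
print(f"kappa, 4-digit SOC codes: {kappa_4digit:.2f}")
print(f"kappa, 1-digit SOC codes: {kappa_1digit:.2f}")
```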

  2. A Coding System for Qualitative Studies of the Information-Seeking Process in Computer Science Research

    Science.gov (United States)

    Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela

    2015-01-01

    Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…

  3. Modane: A Design Support Tool for Numerical Simulation Codes

    Directory of Open Access Journals (Sweden)

    Lelandais Benoît

    2016-07-01

    The continually increasing power of supercomputers allows numerical simulation codes to take into account more complex physical phenomena. Therefore, physicists and mathematicians have to implement complex algorithms using cutting edge technologies and integrate them in large simulators. The CEA-DAM has been studying for several years the contribution of UML/MDE technologies to its simulator development cycle. The Modane application is one of the results of this work.

  4. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
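
    For readers who want to try this, the snippet below generates a QR code holding far more than the 20-digit limit of a linear bar code; it assumes the third-party Python package qrcode (with Pillow) is installed, and the URL is only an example.

```python
# Quick illustration of the capacity point above, using the third-party
# "qrcode" package (pip install qrcode[pil]). The payload URL is an example.
import qrcode

payload = "https://example.org/syllabus?course=EDTECH101&term=fall"   # well over 20 characters
img = qrcode.make(payload)        # defaults choose a suitable version and error correction
img.save("course_link.png")
print("QR code written to course_link.png; payload length:", len(payload))
```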

  5. A preliminary uncertainty analysis of phenomenological inputs in TEXAS-V code

    Energy Technology Data Exchange (ETDEWEB)

    Park, S. H.; Kim, H. D.; Ahn, K. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    Uncertainty analysis is an important step in the safety analysis of nuclear power plants. Best-estimate computer codes are increasingly used in place of conservative codes. These efforts aim at a more precise evaluation of safety margins, and at determining the rate of change in code predictions as one or more input parameters vary within their ranges of interest. From this point of view, a severe accident uncertainty analysis system, SAUNA, has been improved for TEXAS-V FCI uncertainty analysis. The main objective of this paper is to present the TEXAS FCI uncertainty analysis results implemented through the SAUNA code.

  6. The TOUGH codes - a family of simulation tools for multiphase flowand transport processes in permeable media

    Energy Technology Data Exchange (ETDEWEB)

    Pruess, Karsten

    2003-08-08

    Numerical simulation has become a widely practiced and accepted technique for studying flow and transport processes in the vadose zone and other subsurface flow systems. This article discusses a suite of codes, developed primarily at Lawrence Berkeley National Laboratory (LBNL), with the capability to model multiphase flows with phase change. We summarize history and goals in the development of the TOUGH codes, and present the governing equations for multiphase, multicomponent flow. Special emphasis is given to space discretization by means of integral finite differences (IFD). Issues of code implementation and architecture are addressed, as well as code applications, maintenance, and future developments.

  7. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  8. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.

  9. 46 CFR Appendix A to Part 520 - Standard Terminology and Codes

    Science.gov (United States)

    2010-10-01

    46 CFR, Shipping (2010-10-01): Appendix A to Part 520 -- Standard Terminology and Codes. Federal Maritime Commission, Regulations Affecting Ocean Shipping in Foreign Commerce, Carrier Automated Tariffs (Pt. 520, App. A)...

  10. Code-Switching and Competition: An Examination of a Situational Response

    Science.gov (United States)

    Bernstein, Eve; Herman, Ariela

    2014-01-01

    Code switching is primarily a linguistic term that refers to the use of two or more languages within the same conversation, or same sentence, to convey a single message. One field of linguistics, sociocultural linguistics, is broad and interdisciplinary, a mixture of language, culture, and society. In sociocultural linguistics, the code, or…

  11. A New Phenomenon in Saudi Females’ Code-switching: A Morphemic Analysis

    Directory of Open Access Journals (Sweden)

    Mona O. Turjoman

    2016-12-01

    This sociolinguistic study investigates a new phenomenon that has recently surfaced in the field of code-switching among Saudi females residing in the Western region of Saudi Arabia. This phenomenon basically combines bound Arabic pronouns, tense markers or the definite article with English free morphemes, or combines bound English affixes with Arabic morphemes. Moreover, the study examines the factors that affect this type of code-switching. The results of the study indicate that this phenomenon provides data that invalidate Poplack's (1980) universality of the 'Free Morpheme Constraint'. It is also concluded that the main factors that influence this type of code-switching are solidarity and group identity, among other factors. Keywords: Code-switching, Saudi females, sociolinguistics, CS factors, morphemic analysis

  12. A Flexible LDPC code decoder with a Network on Chip as underlying interconnect architecture

    CERN Document Server

    Condo, Carlo

    2011-01-01

    LDPC (Low Density Parity Check) codes are among the most powerful and widely adopted modern error correcting codes. The iterative decoding algorithms required for these codes involve high computational complexity, and high processing throughput is achieved by allocating a sufficient number of processing elements (PEs). Supporting multiple heterogeneous LDPC codes on a parallel decoder poses serious problems in the design of the interconnect structure for such PEs. The aim of this work is to explore the feasibility of NoC (Network on Chip) based decoders, where full flexibility in terms of supported LDPC codes is obtained by resorting to an NoC to connect PEs. NoC based LDPC decoders have been previously considered unfeasible because of the cost overhead associated with packet management and routing. On the contrary, the designed NoC adopts a low complexity routing, which introduces a very limited cost overhead with respect to architectures dedicated to specific classes of codes. Moreover the paper proposes an effic...

  13. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.
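
    In the spirit of the record above, a minimal translator sketch: a declarative field map converts an old-format input model into a new format and writes a verification log listing every variable name, value and meaning carried over. The field names and formats are invented and are not those of VSOP-A or VSOP 99/05.

```python
# Generic-translator sketch: map fields of an old-format input model to a new
# format via a declarative mapping table, and write a verification log of every
# variable carried over. Field names, values and meanings are invented.
import json

FIELD_MAP = {                        # old name -> (new name, meaning)
    "NCOLS":  ("mesh.columns",    "number of radial mesh columns"),
    "POWER":  ("core.power_mw",   "thermal power in MW"),
    "ENRICH": ("fuel.enrichment", "U-235 enrichment fraction"),
}

def translate(old_model: dict, log_path: str) -> dict:
    new_model, log_lines = {}, []
    for old_name, value in old_model.items():
        new_name, meaning = FIELD_MAP[old_name]
        new_model[new_name] = value
        log_lines.append(f"{old_name} -> {new_name} = {value}  # {meaning}")
    with open(log_path, "w") as fh:
        fh.write("\n".join(log_lines) + "\n")
    return new_model

old_model = {"NCOLS": 24, "POWER": 400.0, "ENRICH": 0.093}
print(json.dumps(translate(old_model, "translation_verification.log"), indent=2))
```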

  14. Speeding-up MADYMO 3D on serial and parallel computers using a portable coding environment

    NARCIS (Netherlands)

    Tsiandikos, T.; Rooijackers, H.F.L.; Asperen, F.G.J. van; Lupker, H.A.

    1996-01-01

    This paper outlines the strategy and methodology used to create a portable coding environment for the commercial package MADYMO. The objective is to design a global data structure that efficiently utilises the memory and cache of computers, so that one source code can be used for serial, vector and

  15. A new combinatorial approach to the construction of constant composition codes

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Constant composition codes (CCCs) are a new generalization of binary constant weight codes and have attracted recent interest due to their numerous applications. In this paper, a new combinatorial approach to the construction of CCCs is proposed, and used to establish new optimal CCCs.

  16. Huffman Coding with Letter Costs: A Linear-Time Approximation Scheme

    OpenAIRE

    Golin, Mordecai; Mathieu, Claire; Young, Neal E.

    2002-01-01

    We give a polynomial-time approximation scheme for the generalization of Huffman Coding in which codeword letters have non-uniform costs (as in Morse code, where the dash is twice as long as the dot). The algorithm computes a (1+epsilon)-approximate solution in time O(n + f(epsilon) log^3 n), where n is the input size.
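
    For reference, the uniform-letter-cost case that this record generalizes is ordinary Huffman coding; a compact priority-queue construction is sketched below with illustrative symbol frequencies. With non-uniform letter costs (as in Morse code, where a dash costs twice a dot) this greedy construction is no longer optimal, which is what motivates the approximation scheme.

```python
# Uniform-cost baseline for the record above: a standard Huffman code built
# with a priority queue. Symbol frequencies are illustrative only.
import heapq

def huffman(freqs):
    """Return a prefix code (symbol -> bitstring) minimizing expected length."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)                       # tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

freqs = {"e": 12.7, "t": 9.1, "a": 8.2, "o": 7.5, "n": 6.7, "q": 0.1}
for symbol, word in sorted(huffman(freqs).items(), key=lambda kv: len(kv[1])):
    print(symbol, word)
```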

  17. WYSIWIB: A Declarative Approach to Finding API Protocols and Bugs in Linux Code

    DEFF Research Database (Denmark)

    Lawall, Julia; Palix, Nicolas; Hansen, Rene Rydhof;

    2009-01-01

    Although a number of approaches to finding bugs in systems code have been proposed, bugs still remain to be found. Current approaches have emphasized scalability more than usability, and as a result it is difficult to relate the results to particular patterns found in the source code and to contr...

  18. WYSIWIB: A Declarative Approach to Finding Protocols and Bugs in Linux Code

    DEFF Research Database (Denmark)

    Lawall, Julia Laetitia; Brunel, Julien Pierre Manuel; Hansen, Rene Rydhof;

    2008-01-01

    Although a number of approaches to finding bugs in systems code have been proposed, bugs still remain to be found. Current approaches have emphasized scalability more than usability, and as a result it is difficult to relate the results to particular patterns found in the source code and to contr...

  19. WYSIWIB: A Declarative Approach to Finding API Protocols and Bugs in Linux Code

    DEFF Research Database (Denmark)

    Lawall, Julia; Brunel, Julien Pierre Manuel; Palix, Nicolas Jean-Michel;

    2009-01-01

    the tools on specific kinds of bugs and to relate the results to patterns in the source code. We propose a declarative approach to bug finding in Linux OS code using a control-flow based program search engine. Our approach is WYSIWIB (What You See Is Where It Bugs), since the programmer expresses...

  20. Implementation of the critical points model in a SFM-FDTD code working in oblique incidence

    Energy Technology Data Exchange (ETDEWEB)

    Hamidi, M; Belkhir, A; Lamrous, O [Laboratoire de Physique et Chimie Quantique, Universite Mouloud Mammeri, Tizi-Ouzou (Algeria); Baida, F I, E-mail: omarlamrous@mail.ummto.dz [Departement d' Optique P.M. Duffieux, Institut FEMTO-ST UMR 6174 CNRS Universite de Franche-Comte, 25030 Besancon Cedex (France)

    2011-06-22

    We describe the implementation of the critical points model in a finite-difference-time-domain code working in oblique incidence and dealing with dispersive media through the split field method. Some tests are presented to validate our code in addition to an application devoted to plasmon resonance of a gold nanoparticles grating.